Animesh Mukherjee, Associate Professor in the Department of Computer Science and Engineering, IIT Kharagpur, has been selected for the Ethics in AI Research Award in the ‘operationalizing ethics’ category for his project “Targeted Bias in Indian Media Outlets”.
The Award is part of Facebook’s initiative to encourage research on AI ethics and address intricate challenges and complex ethical questions in the AI domain. The award-winning project is part of collaborative work with Prof. Pawan Goyal and Ph.D. student Souvic Chakraborty.
This award-winning project deals with the burning issue of fake news. The project aims to leverage information available in online published media to predict fake news and identify bias in news articles. One of the major challenges in this work is formulating something as abstract as “bias” in a quantifiable manner.
Fake news has become a major point of concern in India owing to the explosive growth in the number of smartphone users and a massive increase in the overall number of Internet users. The emergence of social media and other forms of digital media also makes it difficult to identify the actual source of fake news among the huge number of secondary, or even primary, sources. In India, most of the relevant laws date from the pre-Internet era, which makes addressing the issue a legal complication. The rise of data analytics adds to the problem through targeted content creation. When conventional media platforms propagate such bias, they end up shaping the views of readers. During elections, such bias could lead to violations of Election Commission regulations, yet its sparse, abstract and non-quantifiable nature makes it hard to detect. The situation could end up narrowing down the voices of individuals and groups who do not associate themselves with the targeted campaigns that make use of such bias.
There has been little research on identifying bias in news media automatically, apart from manual studies by independent journalists. As India crosses the half-a-billion mark in smartphone users, it is more important than ever to characterize information available online through automatic algorithms and auto-updating crowd-sourced knowledge bases in order to restrict the spread of falsehood.
Prof. Mukherjee, who has been working in the areas of AI, ML, big data analytics and information retrieval, sees the solution in leveraging information available in online published media to predict fake news and identify bias in news articles.
“One of the major challenges in this work is formulating something as abstract as ‘bias’ in a quantifiable manner,” he remarked.
The team has collected 20 years of data from three national media outlets and quantified bias on three metrics – coverage bias, word choice bias and topic choice bias. The team further plans to extend the study to local and digital media outlets.
Explaining the methodology, Mukherjee said, “For a study with two datasets, coverage bias was formulated as the ratio of the number of mentions of terms pertaining to the two datasets. For a larger number of datasets, the researchers propose the inverse of the entropy of the distribution of words pertaining to each dataset. Word choice bias was formulated as the ratio of positive to negative words for each dataset. Topic choice bias was formulated as the divergence of each dataset’s topic distribution from the aggregate topic distribution.”
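To make these metrics concrete, the sketch below shows one plausible way to compute them in Python. It is illustrative only: the function names are hypothetical, the choice of KL divergence for the “divergence score” and of a sentiment lexicon for positive and negative words are assumptions, and the team’s actual implementation is not described here.

```python
# Illustrative sketch of the three bias metrics described above.
# Function names, the KL-divergence choice and the sentiment-lexicon
# approach are assumptions, not the team's published implementation.

import math
from collections import Counter


def coverage_bias_two(mentions_a: int, mentions_b: int) -> float:
    """Coverage bias for two datasets: ratio of their mention counts."""
    return mentions_a / mentions_b if mentions_b else float("inf")


def coverage_bias_many(mention_counts: list[int]) -> float:
    """Coverage bias for several datasets: inverse of the entropy of the
    mention distribution (a higher value means more skewed coverage)."""
    total = sum(mention_counts)
    probs = [c / total for c in mention_counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return 1.0 / entropy if entropy else float("inf")


def word_choice_bias(tokens: list[str], positive: set[str], negative: set[str]) -> float:
    """Word choice bias: ratio of positive to negative lexicon words
    appearing in the articles for a given dataset."""
    counts = Counter(t.lower() for t in tokens)
    pos = sum(counts[w] for w in positive)
    neg = sum(counts[w] for w in negative)
    return pos / neg if neg else float("inf")


def topic_choice_bias(outlet_topics: list[float], aggregate_topics: list[float]) -> float:
    """Topic choice bias: KL divergence of one outlet's topic distribution
    from the aggregate topic distribution across all outlets."""
    return sum(p * math.log(p / q)
               for p, q in zip(outlet_topics, aggregate_topics)
               if p > 0 and q > 0)


# Purely illustrative numbers:
print(coverage_bias_two(1200, 800))                              # 1.5
print(coverage_bias_many([500, 300, 200]))                       # ~0.97
print(word_choice_bias(["good", "strong", "bad"],
                       positive={"good", "strong"},
                       negative={"bad"}))                        # 2.0
print(topic_choice_bias([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))       # ~0.025
```

Under these assumptions, a score near 1 for the ratio-based metrics and a divergence near 0 would indicate relatively balanced coverage; larger deviations would flag potential bias.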
Talking about the future roadmap of the project, Mukherjee confirmed that the team plans to develop browser extensions that show a bias score in real time for identifiable sources.
On June 17, 2019, Facebook launched an India-specific request for proposals seeking projects on Ethics in AI Research under three themes: (a) operationalizing ethics, explainability and fairness, (b) governance, and (c) cultural diversity. The objective of the initiative is to support thoughtful and groundbreaking academic research in the field of AI ethics that takes into account different regional perspectives across the three selected thematic areas. Synergy with the line of research pursued by the TUM Institute for Ethics in AI was a key parameter.
A statement by the company says, “AI technological developments pose intricate and complex ethical questions that the industry alone cannot answer. Important research questions in the application of AI should be dealt with not only by companies building and deploying the technology, but also by independent academic research institutions. The latter are well equipped to pursue interdisciplinary research that will benefit society.”