Generative AI’s Dual Role in the Fight Against Misinformation Revealed in New Research
Generative artificial intelligence presents both powerful opportunities and serious threats in combating misinformation, according to a comprehensive new study by researchers from Uppsala University, the University of Cambridge, and the University of Western Australia.
The research identifies seven distinct roles that AI can play in the information environment, each with its own set of strengths, weaknesses, opportunities, and risks.
“One important point is that generative AI has not just one but several functions in combating misinformation,” explains Professor Thomas Nygren of Uppsala University, who led the study. “The technology can be anything from information support and educational resource to a powerful influencer. We therefore need to identify and discuss the opportunities, risks and responsibilities associated with AI and create more effective policies.”
The research team conducted an extensive review of recent studies on how generative AI functions across the information landscape. Rather than making broad generalizations about AI being either “good” or “dangerous,” they employed a SWOT framework – assessing strengths, weaknesses, opportunities, and threats – to provide a nuanced analysis of each role.
This approach recognizes that the same AI system can be simultaneously helpful in one context and harmful in another. The researchers believe this detailed analysis offers decision-makers, educational institutions, and platforms a more practical foundation for developing targeted measures to address specific risks.
Through their analysis, the team identified seven key roles that generative AI can play: informer, guardian, persuader, integrator, collaborator, teacher, and playmaker. Each role represents a distinct function, such as obtaining information, detecting problems, influencing people, supporting collaboration, facilitating learning, or designing interactive environments.
As an informer, AI can make complex information more accessible and provide quick overviews of vast data sets. However, it can also produce “hallucinations” – convincing but fabricated information – and perpetuate biases present in its training data.
In its guardian role, AI can efficiently detect suspicious content at scale and identify coordinated misinformation campaigns. Yet this same capability comes with risks of false positives that might flag legitimate content or false negatives that miss subtle forms of misinformation.
Perhaps most concerning is AI’s role as a persuader. While it can help correct misconceptions through personalized explanations, this same persuasive power can be weaponized for manipulation and the mass production of convincing yet misleading content.
“We show how generative AI can produce dubious content yet can also detect and counteract misinformation on a large scale,” Nygren notes. “However, risks such as hallucinations, reinforcement of prejudices and misunderstandings, and deliberate manipulation mean that the technology has to be implemented responsibly.”
The study emphasizes that the rapid pace of AI development necessitates ongoing critical evaluation of all seven roles. The researchers highlight four key areas requiring immediate attention: establishing clear regulatory frameworks for AI use in sensitive information environments; ensuring transparency about AI-generated content and its limitations; maintaining human oversight where AI is used for decisions or advice; and promoting AI literacy among users.
In educational settings, the research suggests both promise and peril. AI can provide valuable tools for teaching critical information literacy skills essential to democracy, but overreliance could potentially undermine knowledge development and make users more vulnerable to misinformation.
“There is a risk that excessive use could be detrimental for the development of knowledge and make us lazy and ignorant and therefore more easily fooled,” warns Nygren.
The study comes at a critical time when generative AI tools like ChatGPT and similar systems are becoming increasingly sophisticated and accessible to the general public, raising urgent questions about their impact on information integrity and democratic discourse.
As societies grapple with the implications of these powerful technologies, this research provides a structured framework for understanding both the opportunities and challenges they present in the ongoing battle against misinformation.