Generative AI Plays Multiple Roles in Fighting Misinformation, Study Finds
Generative artificial intelligence has emerged as a multifaceted tool in the battle against misinformation, capable of serving as both ally and adversary, according to new research from Uppsala University, the University of Cambridge, and the University of Western Australia.
“One important point is that generative AI has not just one but several functions in combating misinformation,” explains Thomas Nygren, Professor at Uppsala University and lead researcher. “The technology can be anything from information support and educational resource to a powerful influencer. We therefore need to identify and discuss the opportunities, risks and responsibilities associated with AI and create more effective policies.”
The comprehensive study examined how generative AI functions across the information landscape, identifying seven distinct roles the technology can play: informer, guardian, persuader, integrator, collaborator, teacher, and playmaker. For each role, researchers conducted a SWOT analysis to assess strengths, weaknesses, opportunities, and threats.
This structured approach offers a more nuanced perspective than broad statements about AI being inherently beneficial or dangerous. “A system can be helpful in one role but also harmful in the same,” Nygren notes. “Analyzing each role using SWOT can help decision-makers, schools and platforms discuss the right measures for the right risk.”
The research team found that AI’s capacity to generate content is both its greatest asset and its greatest liability. While generative AI can produce easy-to-understand information and detect suspicious content at scale, it can also fabricate “facts” through “hallucinations” and perpetuate biases present in its training data.
For example, in its role as “guardian,” AI can efficiently flag potential misinformation and identify coordinated disinformation campaigns. However, the same systems may struggle with detecting irony or legitimate controversies, potentially leading to false positives or negatives in content moderation.
Similarly, as a “persuader,” AI can help correct misconceptions through personalized explanations and educational interventions. Yet these persuasive capabilities could just as easily be weaponized to create convincing yet misleading content quickly and cheaply.
The study comes at a critical juncture as AI systems become more sophisticated and widely accessible. Major tech companies, including OpenAI, Google, and Anthropic, have released increasingly powerful large language models capable of generating highly convincing text, images, and even video that can be difficult to distinguish from human-created content.
In educational settings, the researchers highlight both promise and peril. “Generative AI can be valuable for promoting important knowledge in school that is needed to uphold democracy and protect us from misinformation,” Nygren says. “But there is a risk that excessive use could be detrimental for the development of knowledge and make us lazy and ignorant and therefore more easily fooled.”
To address these challenges, the researchers emphasize four key recommendations. First, they call for clear regulations governing AI use in sensitive information environments. Second, they advocate for transparency regarding AI-generated content and its limitations. Third, they stress the importance of human oversight whenever AI is used for decisions, moderation, or advice. Finally, they highlight the need for improved AI literacy to help users better evaluate and question AI outputs.
The research provides a valuable framework for policymakers, educators, and technology platforms as they navigate the complex landscape of generative AI. By understanding the various roles AI can play and the specific risks associated with each, stakeholders can develop more targeted approaches to harnessing AI’s potential while mitigating its dangers.
“With the rapid pace of developments, it’s important to constantly scrutinize the roles of AI with a critical and constructive eye,” Nygren concludes, highlighting the need for ongoing assessment as the technology evolves.
The study, titled “The seven roles of generative AI: Potential & pitfalls in combatting misinformation,” was published in Behavioral Science & Policy.
9 Comments
The seven distinct roles for generative AI outlined in this study present both opportunities and risks that deserve close examination. I’m curious to learn more about the potential pitfalls and unintended consequences associated with each function.
Glad to see leading academic institutions like Uppsala University researching the intersections of generative AI and misinformation. The findings on the technology’s multifaceted functions will be invaluable for developing robust, holistic solutions.
This research highlights the nuance and complexity involved in harnessing generative AI to address misinformation. The seven distinct roles show both the potential benefits and the risks that need to be carefully managed.
Agreed. Deploying generative AI as a tool against misinformation requires a sophisticated, multifaceted approach. The study’s insights can help guide the development of responsible, effective strategies.
Intriguing work from Uppsala University. The idea of generative AI serving as both an ‘ally and adversary’ in the fight against misinformation is a thought-provoking finding. I wonder what specific use cases exemplify these dual roles.
Good point. Understanding the contrasting capabilities and applications of generative AI will be crucial for policymakers and practitioners to navigate this complex landscape effectively.
This study reinforces the importance of a nuanced, multi-stakeholder approach to leveraging emerging technologies like generative AI to address complex societal challenges like misinformation. The seven roles identified provide a helpful framework for further exploration.
Fascinating insights on the multifaceted role of generative AI in combating misinformation. I’m curious to learn more about the specific strengths and weaknesses of each function identified by the study.
Yes, the SWOT analysis for each role should provide valuable context. I look forward to seeing how policymakers leverage these findings to create more effective policies around generative AI.