
AI’s Dual Role in Fighting Misinformation: New Research Identifies Key Functions

Generative AI offers both powerful tools and significant risks in combating misinformation, according to a comprehensive new study by an international team led by Professor Thomas Nygren of Uppsala University, in collaboration with researchers from the University of Cambridge and the University of Western Australia.

“One important point is that generative AI has not just one but several functions in combating misinformation,” explains Nygren. “The technology can be anything from information support and educational resource to a powerful influencer. We therefore need to identify and discuss the opportunities, risks and responsibilities associated with AI and create more effective policies.”

The study provides a nuanced framework for understanding AI’s multifaceted role in the information ecosystem by reviewing the latest research across various disciplines. Rather than making broad generalizations about AI being inherently good or dangerous, the researchers employed a SWOT (Strengths, Weaknesses, Opportunities, Threats) framework to analyze different AI applications.

“The roles emerged from a process of analysis where we started out from the perception that generative AI is not a simple ‘solution’ but a technology that can serve several functions at the same time,” Nygren notes.

The research team identified seven distinct roles that generative AI can play in the information landscape, each with its own benefits and potential drawbacks:

As an Informer, AI can simplify complex information, translate content, and process large volumes of data quickly. However, it may produce inaccurate information (known as “hallucinations”), oversimplify topics, or reproduce biases present in its training data.

In its Guardian role, AI can detect suspicious content at scale and identify coordinated misinformation campaigns. Yet it struggles with nuance, often missing irony or context, and raises questions about responsibility and legal oversight in content moderation.

The Persuader function allows AI to correct misconceptions through personalized explanations but simultaneously creates powerful tools for manipulation and mass production of misleading content.

As an Integrator, AI can structure discussions and summarize arguments, though it risks creating false equivalencies between valid and invalid viewpoints or subtly controlling how problems are framed.

The Collaborator role enables AI to assist in analysis and writing but may lead to overconfidence and cognitive outsourcing when users fail to recognize the system’s limitations.

In its Teacher capacity, AI provides personalized feedback and training at scale while potentially spreading incorrect information or reducing the investigative nature of education.

Finally, as a Playmaker, AI can design interactive learning environments that build resilience against misinformation, though it may inadvertently reinforce stereotypes or reward problematic behaviors.

The researchers emphasize that this role-based analysis serves as a practical checklist for understanding how AI can both strengthen societal resilience against misinformation and introduce new vulnerabilities.
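The checklist idea can be made concrete with a small sketch. The seven role names and their paired benefits and risks below are paraphrased from the article; the data structure and the `checklist` helper are purely illustrative assumptions, not anything proposed by the researchers:

```python
# Illustrative sketch (not from the study): the seven roles as a
# checklist pairing each role's main benefit with its main risk.
ROLES = {
    "Informer": ("simplifies, translates, and processes information at scale",
                 "hallucinations, oversimplification, reproduced bias"),
    "Guardian": ("detects suspicious content and coordinated campaigns",
                 "misses irony and context; unclear legal responsibility"),
    "Persuader": ("corrects misconceptions with personalized explanations",
                  "mass production of personalized manipulation"),
    "Integrator": ("structures discussions and summarizes arguments",
                   "false equivalencies; subtle control of framing"),
    "Collaborator": ("assists with analysis and writing",
                     "overconfidence and cognitive outsourcing"),
    "Teacher": ("personalized feedback and training at scale",
                "spreads errors; less investigative learning"),
    "Playmaker": ("interactive environments that build resilience",
                  "may reinforce stereotypes or reward bad behavior"),
}

def checklist(roles):
    """Yield one review question per role, pairing its benefit and risk."""
    for role, (benefit, risk) in roles.items():
        yield (f"{role}: does this deployment deliver '{benefit}' "
               f"while guarding against '{risk}'?")

for question in checklist(ROLES):
    print(question)
```

Iterating over such a structure, one role at a time, mirrors how the authors suggest evaluating any given AI deployment against all seven functions rather than judging the technology wholesale.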

“We show how generative AI can produce dubious content yet can also detect and counteract misinformation on a large scale,” the researchers note. “However, risks such as hallucinations, reinforcement of prejudices and misunderstandings, and deliberate manipulation mean that the technology has to be implemented responsibly.”

The study concludes with several key recommendations, including the need for clear regulations governing AI use in sensitive information environments, transparency about AI-generated content and its limitations, human oversight in critical applications, and improved AI literacy among users.

Professor Nygren particularly highlights the educational implications: “Generative AI can be valuable for promoting important knowledge in school that is needed to uphold democracy and protect us from misinformation, but there is a risk that excessive use could be detrimental for knowledge development and make us lazy and ignorant—and therefore more easily fooled.”

As AI technology continues to evolve rapidly, the researchers stress the importance of ongoing critical evaluation across all seven identified roles to maximize benefits while minimizing potential harms.


