The AI revolution in journalism has created an urgent crisis in the information ecosystem, with fake news proliferating at unprecedented rates and traditional media struggling to adapt. Having spent years reporting on artificial intelligence across four continents—interviewing researchers, ethicists, and those affected by these systems—I’ve witnessed firsthand how AI is reshaping our information landscape.

While misinformation isn’t new, AI has supercharged its spread and impact. According to NewsGuard, more than 1,200 AI-generated news and information sites now operate in 16 languages, a staggering 20-fold increase in just two years. These sites, bearing legitimate-sounding names like “iBusiness Day” and “Ireland Top News,” publish false claims with minimal human oversight.

The threat extends beyond fringe websites. Established news organizations including the Washington Post and The Guardian have unknowingly linked to chatbot-generated stories with questionable accuracy. Even legacy media outlets experimenting with AI have encountered problems—Bloomberg issued dozens of corrections for its AI-generated news summaries, while CNET and Gannett faced embarrassing errors in their AI-written content.

“It just transfers the responsibility to users who, in an already confusing information landscape, will be expected to check if information is true or not,” said Vincent Berthier, head of Reporters Without Borders’ technology and journalism desk, when discussing Apple’s approach to AI-generated news summaries that often contain fabricated details.

This surge in misinformation comes as traditional journalism—historically the most reliable counterweight to falsehoods—faces declining resources, shrinking newsrooms, and eroding public trust.

The World Economic Forum’s 2024 Global Risks Report identifies bad actors leveraging misinformation to widen societal divides as “the most severe global risk” in the years ahead. This raises a critical question: Is AI-enabled fake news primarily a problem of supply or demand?

Boston University economist Marshall Van Alstyne frames the issue through an environmental metaphor. “Picture the information sphere as the sky above a soot-stained mill town,” he suggests. In his telling, misleading content billows like pollution, yet instead of addressing the source, authorities hand out protective gear and hope everyone remembers to use it.

Van Alstyne and his team have been researching decentralized solutions that incentivize accuracy without censorship. “We should be able to reduce the flow of misinformation with no censorship at all and no central authority judging truth at all,” he explained. “This means not government; not a powerful individual like Musk or Zuckerberg; not even I as a designer could bias the solution.”

However, progress on such solutions stalled in January 2025, when a Trump administration directive halted National Science Foundation funding for research projects aimed at combating misinformation and disinformation. Van Alstyne’s promising work lost its federal support: “We were empirically testing whether this works, and preliminary tests suggest that it does, but we need funding to continue.”

Some researchers challenge prevailing narratives about AI-enabled misinformation. A 2023 Nature study found that exposure to false content is often limited to motivated fringe groups rather than the general public. Princeton University researchers concluded that during the 2024 global elections—when over two billion people voted worldwide—AI-generated false content “did not fundamentally change the landscape of political misinformation.”

“People are not very gullible,” said Sacha Altay, co-author of an article in the Harvard Kennedy School Misinformation Review arguing that fears about AI’s impact on misinformation are “overblown.” “They turn to mainstream news that they trust to learn about the world.”

Felix Simon, Altay’s co-author, points out a more fundamental issue: “The biggest problem when it comes to misinformation writ large is not necessarily AI or fake news websites. It’s very powerful people or politicians who willingly make false and misleading statements who are cited or given space in traditional news media.”

This perspective suggests that teaching media literacy might be more effective than battling each new AI-generated falsehood. The Knight Commission on the Information Needs of Communities in a Democracy recommends embedding digital and media literacy in school curricula and establishing libraries as hubs for adult learning.

Yet the problem operates as a feedback loop: each engagement with misinformation teaches algorithms to serve more of it, creating increasingly persuasive and personalized content. AI doesn’t merely respond to demand—it shapes it.
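To make that loop concrete, here is a minimal, deliberately simplified sketch (my own illustration, not a description of any real platform’s system): an engagement-weighted ranker serves two kinds of content in proportion to their accumulated weights, misleading items are assumed to draw slightly more clicks, and the expected clicks feed straight back into the weights. The item labels, click rates, and update rule are all assumptions chosen only to show how a small engagement edge compounds.

```python
# Toy model of the engagement feedback loop described above.
# Every number here is an assumption for illustration; this is not
# any real platform's ranking algorithm.

CLICK_RATE = {"accurate": 0.10, "misleading": 0.12}  # assumed small engagement edge
weights = {"accurate": 1.0, "misleading": 1.0}        # ranker's learned weights

for step in range(1, 5001):
    total = sum(weights.values())
    for item, rate in CLICK_RATE.items():
        # An item is served in proportion to its accumulated weight;
        # the expected clicks it earns feed straight back into that weight.
        served_share = weights[item] / total
        weights[item] += served_share * rate
    if step % 1000 == 0:
        share = weights["misleading"] / sum(weights.values())
        print(f"step {step}: misleading share of recommendations = {share:.2f}")
```

Run for a few thousand steps, the misleading share of what the ranker serves climbs well past half even though the audience’s click behavior never changes, which is the sense in which the algorithm shapes demand rather than merely responding to it.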

This parallels the climate crisis, where physical feedback loops (melting ice absorbing more heat) operate alongside cultural and psychological ones (denial hardening into social identity). Breaking these cycles requires addressing both supply and demand simultaneously.

As with climate action, addressing AI-enabled misinformation demands more than technical solutions. While regulation may help curb supply, the human hunger for confirming narratives must also be addressed. This means designing friction not just into technology but into human psychology—creating space for questions and doubt.

People don’t abandon misinformation merely because they’re corrected; they leave it when offered something better—a more coherent narrative, a more trustworthy source, or a place where identity and truth aren’t in conflict.

Our challenge extends beyond simply countering falsehoods. We must create an information ecosystem where truth isn’t just accessible, but preferable—where facts and human needs align rather than compete.

