AI Chatbots Increasingly Amplify Russian Disinformation, Study Finds

AI chatbots are increasingly repeating false claims seeded by Russian disinformation campaigns, according to new research from NewsGuard Technologies. The study points to a weakness that has grown as these systems gain the ability to search the internet before providing answers.

NewsGuard’s analysis found that leading AI models now repeat false information about current news topics more than one-third of the time, a significant increase from 18 percent just a year ago. The company tested ten prominent AI chatbots, challenging each with questions about ten narratives it had determined to be false.

One example involved asking whether Moldova’s parliament speaker had compared his compatriots to sheep, a claim fabricated by Russian propaganda networks. Six of the ten tested AI models repeated this false assertion as fact.

The findings highlight a growing vulnerability in AI systems that rely on web searches. When seeking information, these chatbots pull content not only from established news outlets but also from social media posts and various websites that appear in search results, regardless of their credibility.

“This creates an opening for a new kind of influence operation,” explains McKenzie Sadeghi, author of the NewsGuard report. “Bad actors can now post information online that, even if never read by humans, can influence chatbot behavior.” This vulnerability appears particularly significant for topics receiving limited mainstream media coverage.
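To make that opening concrete, here is a minimal, hypothetical sketch of the retrieval step, written in Python. The naive version passes every ranked search result into the model’s context; the filtered version drops results below a credibility threshold, standing in for the kind of rated-source weighting discussed later in this article. Every URL, snippet, and score below is invented for illustration, and no vendor’s actual pipeline is public or works exactly this way.

```python
# Hypothetical sketch: how unfiltered web retrieval lets low-quality pages
# flow into a chatbot's context. All URLs, snippets, and credibility
# scores are invented; this is not any vendor's real pipeline.

from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    text: str
    credibility: float  # hypothetical 0-1 trust score from a rated-source list

def build_context(results: list[SearchResult], min_credibility: float = 0.0) -> str:
    """Concatenate retrieved snippets into the prompt context.

    With min_credibility=0.0 (the naive default), any page that merely
    ranks in search results reaches the model. Raising the threshold
    models credibility-weighted source filtering.
    """
    kept = [r for r in results if r.credibility >= min_credibility]
    return "\n\n".join(f"[{r.url}] {r.text}" for r in kept)

# Invented results for a niche claim with little mainstream coverage.
results = [
    SearchResult("https://established-outlet.example/moldova",
                 "No record of the speaker making such remarks.", 0.9),
    SearchResult("https://propaganda-mirror-1.example/story",
                 "Speaker compared his compatriots to sheep.", 0.1),
    SearchResult("https://propaganda-mirror-2.example/copy",
                 "Speaker compared his compatriots to sheep.", 0.1),
]

print("Naive context (everything that ranked in search):")
print(build_context(results))
print()
print("Filtered context (min_credibility=0.5):")
print(build_context(results, min_credibility=0.5))
```

In the naive context, the fabricated claim appears in two of the three snippets simply because cheap pages rank in search, so a model summarizing that context is likely to repeat it; the filtered context keeps only the credible source. That asymmetry is why flooding the web with low-quality pages can sway a chatbot even if no human ever reads them.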

However, the report’s conclusions should be viewed with some caution. The study used a relatively small sample size, just 30 prompts per model, and focused on fairly niche topics, which contrasts with the general trend in benchmarks showing AI models improving at factual accuracy. NewsGuard also has a potential conflict of interest: it sells data services to AI companies.

The situation reveals deeper economic tensions reshaping our information ecosystem. AI companies could easily compile lists of verified news sources with high editorial standards and prioritize this information, but there’s little public information about how these companies weight different sources in their chatbots.

This lack of transparency may be linked to ongoing copyright disputes. The New York Times is suing OpenAI for allegedly training on its articles without permission. If AI developers explicitly acknowledged heavy reliance on established news organizations, those publishers would have stronger claims for compensation.

Several companies, including OpenAI and Perplexity, have signed licensing agreements with news outlets (including TIME) for data access, but both companies emphasize that these agreements don’t result in preferential treatment in search results.

Meanwhile, California is poised to become a regulatory battleground as legislation known as SB 53 approaches Governor Gavin Newsom’s desk. This bill would require AI companies to publish risk management frameworks and transparency reports, declare safety incidents to state authorities, implement whistleblower protections, and face penalties for failing to meet their own commitments.

The legislation represents a watered-down version of a similar bill Newsom vetoed last year following intense lobbying from venture capitalists and tech companies. Anthropic recently became the first major AI company to support the current version.

In a separate development highlighting AI security concerns, researchers at Palisade have created a proof-of-concept for an autonomous AI agent that, when delivered via a compromised USB device, can intelligently identify and extract valuable information for theft or extortion. The development demonstrates how AI could make hacking more scalable by automating tasks previously limited by human labor constraints, potentially exposing more people to scams and data theft.


