Listen to the article

0:00
0:00

AI Chatbot Claude Increasingly Vulnerable to Russian Propaganda, Investigation Finds

Anthropic’s AI-powered chatbot Claude, once regarded as one of the most reliable tools in the market, is now facing serious scrutiny over its handling of misinformation. A recent investigation reveals a worrying increase in the chatbot’s tendency to repeat false information, particularly Russian propaganda, raising significant concerns about AI models’ vulnerability to disinformation.

According to a review conducted by NewsGuard, a U.S. company specializing in tracking online disinformation and testing AI systems, Claude repeated false claims supporting Russian propaganda in 15% of cases when prompted by regular users. More troublingly, in each of these instances, the chatbot relied on sources directly linked to the Kremlin. This marks a significant deterioration from earlier testing when the rate stood at just 4%.

The findings align with growing complaints from users in recent months who have reported that Claude has become less accurate and less cautious in its responses. Once ranked among the least error-prone chatbots available, its reliability appears to be notably declining.

NewsGuard’s testing methodology was straightforward but effective. Researchers presented Claude with 20 false claims—half derived from Russian propaganda and half from Iranian propaganda—and analyzed how it responded to three distinct types of user queries: innocent information-seeking, leading questions, and malicious prompts designed to elicit misinformation. This approach simulated real-world scenarios where users might interact with the chatbot with varying intentions.

The results proved concerning. When faced with neutral, information-seeking questions, Claude made several significant errors. When prompted in ways that mimicked disinformation operators, it occasionally produced new versions of false claims, essentially becoming complicit in spreading misinformation.

A key issue identified in the investigation was Claude’s choice of sources. The chatbot frequently cited Russia Today (RT), a media outlet widely recognized as a Kremlin mouthpiece, and Pravda, a network comprising hundreds of sites masquerading as legitimate news outlets. According to collected data, this network has flooded the internet with millions of articles repeating identical false claims—precisely the kind of content that AI models tend to absorb during training.

This highlights a fundamental limitation of AI systems like Claude: they don’t genuinely distinguish between truth and falsehood but instead detect patterns in data. When disinformation appears repeatedly from seemingly credible sources, it begins to register as truth to the system.

One particularly alarming example involved a completely fabricated claim that hundreds of Ukrainians die monthly while attempting to evade military conscription by crossing the Tisza River to reach European countries. Despite having no factual basis, Claude not only repeated this false narrative but also cited supporting sources from the pro-Kremlin network.

In another instance, Claude stated that a French magazine had reported tens of thousands of Ukrainian soldiers deserting and remaining in France—another entirely false claim based on fabricated evidence. The chatbot failed to verify the source’s legitimacy before presenting the information as fact.

The problem extends beyond Russian disinformation. Claude repeated false claims in 20% of cases when asked about pro-Iranian propaganda, including a baseless assertion that China had switched to trading oil in yuan instead of dollars—a narrative that serves Iranian interests in undermining U.S. financial hegemony.

Anthropic acknowledged in April that something had changed and announced it was reviewing reports of declining answer quality in Claude. The company claimed to have fixed various issues but provided little explanation for what was occurring within the system.

Industry experts have proposed several theories for this deterioration. One leading explanation suggests that Claude’s surging popularity has forced Anthropic to reduce the computational effort behind each response to manage the increased demand. This essentially means the chatbot is performing fewer checks and cross-references per query, resulting in more errors.

Another theory involves search engine algorithms and their vulnerability to manipulation. As networks like Pravda gain visibility—even through negative attention—they rise in search rankings. When AI systems search for information, they repeatedly encounter these same sites, creating a feedback loop where widely distributed propaganda gradually appears legitimate to the models.

This situation serves as a stark reminder of AI’s fundamental limitations. These systems don’t fact-check or truly understand what they process—they merely reflect patterns in their training data. When portions of the internet contain disinformation, AI responses will inevitably reflect those same distortions, raising important questions about their reliability as information sources in an era of sophisticated propaganda campaigns.

Fact Checker

Verify the accuracy of this article using The Disinformation Commission analysis and real-time sources.

12 Comments

  1. John O. Lee on

    This is quite concerning if true. AI chatbots should be designed to resist propagating misinformation, not enable it. Anthropic needs to address this vulnerability urgently to maintain trust in their products.

    • James Williams on

      I agree, transparency and accountability are critical for AI systems. Anthropic should investigate this thoroughly and take corrective action if needed.

  2. Jennifer Lopez on

    While AI models can be susceptible to biases and errors, I’m surprised to hear of such a high rate of Claude repeating Russian propaganda. Rigorous testing and safeguards should be in place to prevent this.

    • Michael Miller on

      Absolutely. AI developers have a responsibility to ensure their models do not amplify harmful narratives, especially from state actors. This needs to be a top priority.

  3. Michael Jackson on

    If these findings are accurate, it’s a worrying trend that could undermine public confidence in AI technology. Anthropic should be transparent about their investigation and what steps they’re taking to rectify the issue.

    • Liam Rodriguez on

      Absolutely. Reputation and trust are critical for AI companies. Anthropic needs to act swiftly to regain user trust and demonstrate its commitment to combating the spread of disinformation.

  4. Liam Jones on

    This report raises some valid concerns about the potential for AI chatbots to be exploited for the dissemination of propaganda. However, more information is needed to fully assess the scope and scale of the problem with Claude.

    • Liam Brown on

      Agreed. While the findings are concerning, a more detailed and transparent investigation is required to understand the extent of the issue and Anthropic’s plans to address it.

  5. Elijah Moore on

    This is a serious issue that needs to be addressed. AI systems must be designed with robust safeguards against the spread of disinformation, especially from malicious state actors. Anthropic has its work cut out for them.

    • Amelia Martin on

      I agree. The integrity and trustworthiness of AI chatbots are paramount. Anthropic should prioritize a thorough review and remediation of any vulnerabilities in Claude’s architecture.

  6. Robert Williams on

    Interesting findings, though I wonder about the methodology and sample size used in this investigation. More details would help assess the validity of the claim that Claude is increasingly vulnerable to Russian propaganda.

    • Michael Johnson on

      Good point. Transparency around the testing process is crucial here. Without more information, it’s hard to draw firm conclusions about the scale and nature of the problem.

Leave A Reply

A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.