In a concerning revelation for both AI developers and European regulators, nearly one in five responses from major AI chatbots cite Russian state-controlled media when answering questions about the war in Ukraine, according to new research from the Institute for Strategic Dialogue (ISD).

The comprehensive study tested four leading AI systems—ChatGPT, Gemini, Grok, and DeepSeek—against 300 different prompts in five languages: English, Spanish, French, German, and Italian. Researchers found that 18% of all chatbot responses cited Russian government sources, websites connected to Russian intelligence operations, or platforms known to distribute Russian misinformation.

Particularly troubling is that some of these cited sources include media outlets officially banned in the European Union as part of sanctions following Russia’s invasion of Ukraine in 2022.

The research revealed significant variation in how chatbots responded depending on question framing. When users asked neutral questions, Russian sources appeared in 11% of answers. The figure rose to 18% for biased questions and to 24% for overtly manipulative prompts. ChatGPT proved especially vulnerable to manipulative phrasing: its citation rate for such questions nearly tripled relative to neutral inquiries.
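
To make those percentages concrete, here is a minimal Python sketch of the kind of tally the study describes: responses are grouped by prompt framing, and the share that cites Russian state-linked sources is computed per group. The records and category labels are illustrative placeholders, not the ISD dataset.

```python
# Hypothetical records: (prompt_framing, whether the response cited a
# Russian state-linked source). Illustrative only, not the study's data.
from collections import defaultdict

responses = [
    ("neutral", False), ("neutral", True),
    ("biased", True), ("biased", False),
    ("manipulative", True), ("manipulative", True),
]

totals = defaultdict(int)  # responses per framing category
hits = defaultdict(int)    # responses citing a Russian state-linked source

for framing, cited_russian_source in responses:
    totals[framing] += 1
    if cited_russian_source:
        hits[framing] += 1

for framing in ("neutral", "biased", "manipulative"):
    rate = 100 * hits[framing] / totals[framing]
    print(f"{framing}: {rate:.0f}% of responses cited such sources")
```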

Certain topics triggered substantially higher reliance on Kremlin-linked material. Questions about Ukrainian military conscription and NATO perceptions yielded Russian sources in 28.5% of responses, while queries about war crimes or Ukrainian refugees referenced Russian propaganda in fewer than 10% of cases.

The problem extends beyond direct citations of sanctioned outlets. Chatbots demonstrated limited ability to recognize EU-sanctioned content when it appeared through intermediary sources. In one notable example, Grok cited posts from RT propagandists and pro-Russian influencers shared on X (formerly Twitter). Meanwhile, ChatGPT in three different language versions cited an RT article that had been republished by an Azerbaijani website, presenting it alongside legitimate news sources without distinguishing its problematic origin.
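
The failure mode is easy to see in code. Below is a hedged sketch, assuming a filter that checks only the cited URL's domain against a blocklist: the moment a sanctioned article is republished on an unlisted domain, it sails through. The domain set and helper name are hypothetical illustrations, not the EU sanctions list or any vendor's actual filter.

```python
# Naive domain-level filter: flags a citation only when its host matches
# a blocklisted domain. Domains below are an illustrative subset.
from urllib.parse import urlparse

SANCTIONED_DOMAINS = {"rt.com", "sputnikglobe.com"}

def is_sanctioned(url: str) -> bool:
    """Return True if the URL's host is, or is a subdomain of, a blocklisted domain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in SANCTIONED_DOMAINS)

print(is_sanctioned("https://www.rt.com/news/some-article/"))        # True
# The same article republished by a third-party site evades the check:
print(is_sanctioned("https://example-news.az/reposted-rt-article/")) # False
```

Domain matching cannot see that a republished page carries the same RT text word for word, which is precisely the gap the researchers observed.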

“This study highlights a significant gap in content moderation capabilities across major AI systems,” said Dr. Elena Sorokina, a digital policy expert not affiliated with the study. “The inability to consistently filter sanctioned Russian propaganda raises serious questions about AI’s role in information warfare.”

The findings come at a critical time for AI regulation in Europe. ChatGPT's European user base is approaching 45 million monthly users, the threshold at which the Digital Services Act (DSA) designates a service a "very large online platform" subject to enhanced oversight. The European Commission could soon impose stricter obligations on OpenAI's flagship product if these issues aren't addressed.

These results align with broader concerns about AI’s reliability in handling news content. Earlier this year, the European Broadcasting Union and BBC published findings from a large international study showing that AI systems routinely distort news, regardless of language, country, or platform. That project involved 22 public broadcasters from 18 countries working across 14 languages.

As AI chatbots become increasingly embedded in how people seek and consume information, their vulnerability to amplifying state propaganda presents a growing challenge for democratic societies. The ISD researchers question whether AI developers possess either the technical capabilities or sufficient motivation to comply with EU restrictions on Russian state media.

Industry observers note that addressing these issues will require significant improvements in how AI systems evaluate source credibility and recognize prohibited content, even when it surfaces through secondary channels. With the war in Ukraine now in its fourth year, the stakes for accurate information remain high.
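
One direction such improvements could take, sketched here as an assumption rather than any vendor's actual pipeline, is content-level matching: comparing the text of a cited page against a corpus of known sanctioned articles, so that verbatim or near-verbatim republications are caught even on unlisted domains. The shingle size and similarity threshold below are arbitrary illustrative choices.

```python
# Sketch of near-duplicate detection via word shingles and Jaccard overlap.
# All names and thresholds are hypothetical illustrations.

def shingles(text: str, n: int = 5) -> set:
    """Overlapping n-word windows; near-copies share most of their shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 0))}

def jaccard(a: set, b: set) -> float:
    """Set-overlap similarity in [0, 1]."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def matches_known_article(page_text: str, known_articles: list,
                          threshold: float = 0.6) -> bool:
    """Flag a page whose text substantially overlaps a known sanctioned article."""
    page = shingles(page_text)
    return any(jaccard(page, shingles(art)) >= threshold for art in known_articles)
```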

13 Comments

  1. Linda Z. Davis

    This is a timely and important study. As AI-powered chatbots become more prevalent, we must ensure they are not inadvertently spreading disinformation or propaganda, especially on sensitive geopolitical topics. Robust fact-checking and source verification will be critical.

    • Absolutely. The researchers have done a valuable service in shedding light on this issue. Regulatory oversight and industry-wide best practices will be essential to mitigate these risks.

  2. This is a complex challenge, as AI systems can inadvertently pick up on biases and misinformation present in their training data. Rigorous testing and auditing will be crucial to ensure these chatbots are not being weaponized for disinformation campaigns.

  3. Lucas Hernandez

    The variation in response quality based on prompt framing is particularly concerning. It suggests that these AI models may be vulnerable to manipulation, which could have serious implications for how they are used in sensitive discussions around geopolitical conflicts.

    • Oliver J. Garcia

      Agreed. Prompt engineering to elicit biased or misleading responses is a serious risk that must be mitigated through comprehensive testing and monitoring of chatbot outputs.

  4. This is a concerning finding. It highlights the need for more robust safeguards and transparency around the data and sources used to train AI chatbots. Responsible development of these systems is crucial to prevent the spread of misinformation.

    • Isabella White

      Absolutely. AI developers must be vigilant in curating their training data and implementing strong content moderation to mitigate the risk of chatbots amplifying propaganda or biased narratives.

  5. While AI chatbots can be powerful tools, this study highlights the need for greater transparency and accountability around their underlying data and algorithms. Responsible development and deployment of these systems should be a top priority.

  6. Amelia D. Smith

    It’s concerning that some of the cited Russian sources are officially banned in the EU. This highlights the challenge of keeping up with rapidly evolving information warfare tactics. Continuous monitoring and adaptation will be required to stay ahead of these threats.

  7. Elijah Thompson

    This study underscores the importance of responsible AI development and the need for strong ethical frameworks to guide the deployment of these technologies. Safeguarding against the spread of misinformation should be a top priority for all involved.

    • Well said. Ongoing collaboration between researchers, policymakers, and industry leaders will be crucial to address these issues and ensure AI chatbots are a force for good, not harm.

  8. I wonder what specific steps the AI companies involved plan to take in response to these findings. Increased human oversight, improved source verification, and more granular control over prompt responses could be some potential solutions.

    • Good point. The researchers should engage directly with the chatbot providers to understand their current approaches and collaboratively develop best practices to address this issue.
