Several leading AI chatbots are amplifying Russian state propaganda when responding to queries about the Ukraine war, according to new research from the Institute for Strategic Dialogue (ISD).

The study found that OpenAI’s ChatGPT, Google’s Gemini, DeepSeek, and xAI’s Grok frequently cite sanctioned Russian media outlets and promote pro-Kremlin narratives in their responses to questions about the conflict.

Researchers discovered that almost one-fifth of all responses to Ukraine war-related inquiries across these four chatbots referenced Russian state-attributed sources. This revelation raises significant concerns about AI systems potentially undermining EU sanctions against Russian propaganda outlets.

“It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU,” explained Pablo Maristany de las Casas, the ISD analyst who led the research. The study highlights how Russian propaganda has targeted “data voids” – topics where legitimate sources provide limited information – to spread false and misleading content.

The ISD conducted a comprehensive analysis, posing 300 questions to the chatbots covering various aspects of the Ukraine conflict. The inquiries ranged from neutral to biased and even “malicious” questions about NATO perception, peace negotiations, Ukraine’s military recruitment, refugee situations, and alleged war crimes during Russia’s invasion.

To ensure thorough testing, researchers created separate accounts for each query and ran the tests in five languages: English, Spanish, French, German, and Italian. The experiment was initially conducted in July, but Maristany de las Casas confirmed that these propaganda issues persisted through October.

Since Russia’s full-scale invasion of Ukraine in February 2022, European officials have imposed sanctions on at least 27 Russian media sources for spreading disinformation as part of what they describe as Russia’s strategy to destabilize Europe and other nations.

Among the sanctioned sources cited by the chatbots were Sputnik Globe, Sputnik China, RT (formerly Russia Today), EADaily, the Strategic Culture Foundation, and the R-FBI. The research also found instances where the chatbots referenced Russian disinformation networks and pro-Kremlin journalists and influencers.

This isn’t the first time such issues have been identified. Previous studies have shown similar problems with popular chatbots echoing Russian narratives, suggesting a systemic issue in how AI systems source and present information about geopolitical conflicts.

The findings are particularly troubling as more users turn to AI chatbots as alternatives to traditional search engines. According to OpenAI data, ChatGPT search alone had approximately 120.4 million monthly active users in the European Union during the six-month period ending September 30, 2025.

When contacted about the findings, OpenAI spokesperson Kate Waters told WIRED that the company takes measures “to prevent people from using ChatGPT to spread false or misleading information, including such content linked to state-backed actors.” Waters acknowledged these are persistent challenges that OpenAI is working to address through improvements to both its underlying models and platforms.

The research highlights a growing challenge for AI developers: balancing the open retrieval of information with the need to respect international sanctions and prevent the spread of state-sponsored propaganda. As AI chatbots become increasingly embedded in how people access news and information, their potential to influence public perception of global conflicts raises serious questions about accountability and regulation in the AI industry.

For users of these technologies, the findings underscore the importance of approaching AI-generated information about sensitive geopolitical topics with heightened skepticism and cross-referencing with reliable news sources.



© 2025 Disinformation Commission LLC. All rights reserved.