Russia-linked content appearing in AI chatbot responses has raised serious concerns about disinformation risks, according to a new analysis from the British think tank Institute for Strategic Dialogue (ISD).

Researchers examined how four leading AI platforms—ChatGPT, Gemini, Grok, and DeepSeek—responded to questions about Russia’s invasion of Ukraine across five languages: English, Spanish, French, German, and Italian. Their findings reveal potentially troubling patterns in how artificial intelligence systems handle contentious geopolitical information.

Nearly one-fifth of all responses referenced Russian state sources, many of which are currently subject to European Union sanctions. The study found that questions framed with a pro-Russian bias were more likely to yield answers containing these sanctioned sources.
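
To make the shape of such an audit concrete, the sketch below poses the same underlying question under a neutral and a pro-Russian framing, then measures how often a model's answers link to a sanctioned outlet. This is a minimal illustration, not the ISD's methodology: the `query_model` callable, the prompt wordings, and the domain list are all hypothetical stand-ins.

```python
import re

# Illustrative subset only; the actual EU sanctions list is far longer.
SANCTIONED_DOMAINS = {"rt.com", "sputniknews.com", "tass.ru"}

# Hypothetical framings of the same underlying question.
PROMPTS = {
    "neutral": "What are Ukraine's military conscription policies?",
    "pro_russian": "Why is Ukraine's forced conscription campaign failing?",
}

def cites_sanctioned_source(answer: str) -> bool:
    """Return True if the answer links to a known sanctioned domain."""
    hosts = re.findall(r"https?://(?:www\.)?([A-Za-z0-9.-]+)", answer)
    return any(h.lower() in SANCTIONED_DOMAINS for h in hosts)

def audit(query_model, trials: int = 50) -> dict:
    """query_model(prompt) -> answer text; a stand-in for any chatbot API.
    Repeated trials matter because chatbot sampling is nondeterministic."""
    rates = {}
    for framing, prompt in PROMPTS.items():
        hits = sum(cites_sanctioned_source(query_model(prompt))
                   for _ in range(trials))
        rates[framing] = hits / trials
    return rates
```

Comparing the two rates is what surfaces the pattern the ISD describes: if the pro-Russian framing yields sanctioned citations more often, the framing itself is steering the model toward those sources.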

Responses were especially problematic for queries about Ukraine’s military conscription policies and NATO’s role in the conflict. The ISD noted that the chatbots struggled most to identify Russian state-affiliated content that had been republished, or laundered, through third-party websites and media outlets.
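
Laundering is what defeats the obvious defence. Checking a cited domain against the sanctions list is straightforward, but that check says nothing once the same article has been republished verbatim on an unlisted site. The sketch below contrasts the easy check with a deliberately crude version of the harder one; the domain names and similarity threshold are illustrative assumptions, and `SequenceMatcher` stands in for production-grade near-duplicate detection.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

SANCTIONED_DOMAINS = {"rt.com", "sputniknews.com"}  # illustrative subset

def domain_is_sanctioned(url: str) -> bool:
    """The easy check: is the cited domain itself on the list?"""
    host = (urlparse(url).hostname or "").removeprefix("www.")
    return host in SANCTIONED_DOMAINS

def looks_republished(text: str, known_state_articles: list[str],
                      threshold: float = 0.85) -> bool:
    """The hard check: does the text closely mirror a known state-media
    article, even though it sits on an unsanctioned domain?"""
    return any(SequenceMatcher(None, text, original).ratio() >= threshold
               for original in known_state_articles)
```

The hard check also explains the cost of doing this well: it requires maintaining a corpus of known state-media articles and comparing against all of them, rather than a single set-membership lookup.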

“The challenge of identifying and restricting sanctioned media sources isn’t new to tech companies,” an ISD spokesperson explained. “Google has faced similar scrutiny over search results related to complex topics, particularly since Russia’s full-scale invasion began and the EU imposed sanctions requiring restriction of state media content.”

This investigation raises fundamental questions about AI systems’ ability to comply with EU sanctions against Russian state media. As generative AI becomes increasingly integrated into information ecosystems, its vulnerability to manipulation poses growing concerns for policymakers, tech companies, and users alike.

The findings come amid broader warnings about Russia’s evolving information warfare capabilities. Intelligence agencies have documented Russia’s intensified efforts to influence American public opinion through an ecosystem of disinformation tactics, including deepfakes, fabricated websites, and now, potentially, manipulating AI systems to amplify Kremlin narratives.

Tech industry analysts point out that AI systems are only as reliable as their training data. If that data includes significant volumes of Russian state media or content influenced by it, those perspectives may be inadvertently reinforced in AI outputs.
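
A rough sketch of why provenance filtering at training time is imperfect, assuming a hypothetical crawl format with `url` and `text` fields: a blocklist removes pages fetched directly from state-media domains, but an identical article mirrored on an unlisted site sails through.

```python
from urllib.parse import urlparse

def filter_corpus(records, blocked_domains):
    """Yield crawl records whose source domain is not blocklisted.
    Assumes each record is a dict with 'url' and 'text' keys."""
    for rec in records:
        host = (urlparse(rec["url"]).hostname or "").removeprefix("www.")
        if host not in blocked_domains:
            yield rec

docs = [
    {"url": "https://rt.com/news/1", "text": "..."},
    {"url": "https://example-mirror.net/news/1", "text": "..."},  # laundered copy
]
print([d["url"] for d in filter_corpus(docs, {"rt.com"})])
# -> ['https://example-mirror.net/news/1']  (the mirrored copy survives)
```

Because the filter only sees the crawl URL, the laundered copy enters the corpus and its framing can still shape the model's outputs.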

“We’re seeing the next frontier of information warfare,” said Dr. Elena Mikhailova, a disinformation researcher not affiliated with the study. “Rather than simply creating fake content, hostile actors can now potentially engineer questions that extract biased or misleading information from widely used AI tools that many people trust implicitly.”

The European Commission has already signaled concerns about AI’s potential role in spreading disinformation. Internal Market Commissioner Thierry Breton previously warned tech companies about their obligations under the Digital Services Act to mitigate risks related to generative AI.

Ukrainian President Volodymyr Zelenskyy has called for restrictions on AI technology exports to Russia, recognizing the dual-use potential of these systems for both civilian and military applications. His administration has emphasized that Russia is rapidly incorporating AI capabilities into its information operations against Ukraine and Western democracies.

For everyday users, experts recommend approaching AI-generated content with healthy skepticism, particularly on contentious geopolitical topics. Cross-checking information with multiple reliable sources remains essential.

Tech companies have responded to the ISD report by reiterating their commitment to addressing these challenges, though specific remediation plans vary by platform. Several have promised to strengthen content filters and improve detection of sanctioned sources.

As AI systems continue to evolve and integrate more deeply into our information landscape, the battle against their potential weaponization in geopolitical conflicts presents a growing challenge for democratic societies worldwide.

