In a concerning development for artificial intelligence technology, major chatbots are unwittingly spreading Russian propaganda when questioned about sensitive geopolitical issues, particularly Russia’s invasion of Ukraine. A recent Wired investigation found that leading AI systems—including OpenAI’s ChatGPT, Google’s Gemini, China’s DeepSeek, and xAI’s Grok—frequently cite sources connected to Russian state media and intelligence operations in their responses.

The Institute for Strategic Dialogue (ISD) discovered that nearly 20 percent of AI responses to questions about the Ukraine war referenced Russian state-attributed sources, including internationally sanctioned outlets like RT and Sputnik. When prompted about alleged Ukrainian atrocities, these AI tools often presented claims from pro-Kremlin websites as factual information without appropriate context or disclaimers.

This vulnerability stems from the chatbots’ foundational design—they’re trained on massive datasets scraped from the internet, where coordinated disinformation campaigns have deliberately flooded online spaces with misleading narratives. The result is an inadvertent circumvention of international sanctions designed to limit the reach and influence of Russian propaganda outlets.
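To make that mitigation concrete, the sketch below shows the kind of provenance filter a data pipeline could apply before scraped documents enter a training corpus. It is a minimal illustration, assuming a hypothetical blocklist and document format; it does not depict any vendor's actual pipeline, and the two domains listed are examples only.

```python
from urllib.parse import urlparse

# Illustrative blocklist; a real deployment would track official,
# frequently updated sanctions lists rather than a hardcoded set.
SANCTIONED_DOMAINS = {"rt.com", "sputniknews.com"}

def is_sanctioned(url: str) -> bool:
    """True if the URL's host is a listed domain or one of its subdomains."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in SANCTIONED_DOMAINS)

def filter_corpus(docs):
    """Yield only documents whose source URL is not on the blocklist."""
    for doc in docs:
        if not is_sanctioned(doc.get("source_url", "")):
            yield doc

if __name__ == "__main__":
    sample = [
        {"source_url": "https://www.rt.com/news/example", "text": "..."},
        {"source_url": "https://example.org/report", "text": "..."},
    ]
    print([d["source_url"] for d in filter_corpus(sample)])
    # -> ['https://example.org/report']
```

Domain blocklists are only a first line of defense: they catch direct citations of sanctioned outlets but not the laundered copies republished on unaffiliated sites, which is the harder problem the next section describes.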

Investigators have traced the issue to sophisticated Russian influence operations that exploit what experts call “data voids”—areas where reliable, real-time information is scarce online. The so-called Pravda operation has been particularly effective, publishing millions of articles across fake news websites that contaminate the knowledge base AI models rely upon.
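One simple signal researchers use to spot this kind of flooding is near-duplicate detection, since the same article is often republished nearly verbatim across many domains. The toy functions below flag probable near-duplicates by word-shingle overlap; the shingle size and threshold are assumptions for illustration, not a method attributed to ISD or NewsGuard.

```python
def shingles(text: str, n: int = 5) -> set:
    """Set of n-word shingles from a text (lowercased)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def likely_republished(text_a: str, text_b: str, threshold: float = 0.5) -> bool:
    """Flag a pair of articles as probable near-duplicates."""
    return jaccard(shingles(text_a), shingles(text_b)) >= threshold
```

Pairwise comparison does not scale to millions of articles, so production systems typically use locality-sensitive hashing such as MinHash to find candidate pairs first.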

According to a separate analysis by NewsGuard cited in the report, these efforts specifically target Western AI systems. Elon Musk’s Grok platform demonstrated a notable tendency to link to social media posts that echo Russian narratives, raising significant concerns about content moderation practices.

Performance varied among the major platforms. Google's Gemini showed somewhat better safeguards by issuing safety warnings alongside questionable citations, though it still occasionally delivered problematic content. OpenAI's and xAI's platforms provided responses with minimal protective measures, highlighting potential gaps in their approach to moderating state-sponsored misinformation.
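The warning-label approach can be sketched as a post-processing step on a generated answer: scan the citations and prepend a notice when any resolve to state-attributed outlets. The outlet list, answer format, and warning text below are assumptions for illustration, not Gemini's actual implementation.

```python
from urllib.parse import urlparse

# Illustrative list of state-attributed outlets; not exhaustive.
STATE_ATTRIBUTED = {"rt.com", "sputniknews.com"}

def _is_state_attributed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in STATE_ATTRIBUTED)

def label_answer(answer: str, cited_urls: list[str]) -> str:
    """Prepend a notice when any citation points to a state-attributed outlet."""
    flagged = [u for u in cited_urls if _is_state_attributed(u)]
    if not flagged:
        return answer
    notice = ("Note: the following cited sources are attributed to state "
              "media and may reflect government messaging: "
              + ", ".join(flagged))
    return notice + "\n\n" + answer
```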

The implications extend far beyond immediate geopolitical concerns. The Bulletin of the Atomic Scientists has noted that Russian networks are deliberately corrupting large language models to reproduce propaganda at scale, creating a multiplier effect for disinformation campaigns. This manipulation could influence public opinion on global conflicts and potentially impact democratic processes, including elections.

“This represents a fundamental challenge to AI’s trustworthiness as an information source,” said one industry observer familiar with the research. “When AI systems uncritically reproduce sanctioned content, they’re essentially laundering propaganda through technology that many users inherently trust.”

Regulatory pressure is mounting in response to these findings. The European Union has begun examining AI systems under its Digital Services Act framework, while U.S. officials have expressed similar concerns in reports published by Axios and other outlets.

The problem is compounded by the sheer volume of disinformation. Forbes reported that the Pravda network published approximately 3.6 million articles in 2024 alone, creating an overwhelming challenge for content moderation systems.

In response to growing scrutiny, companies like OpenAI have committed to refining their models with more stringent filters for sanctioned content. However, critics argue that self-regulation may prove insufficient, particularly as new players like DeepSeek enter the market from regions with different approaches to content censorship and information control.

The situation highlights AI's paradoxical nature: technologies designed to democratize access to information can simultaneously become vectors for propaganda when their training data is compromised. For consumers and businesses increasingly relying on these tools for research and decision-making, the blurring of the line between reliable information and propaganda is a significant concern.

Experts suggest that meaningful solutions will require coordinated efforts to improve training data quality, implement robust fact-checking mechanisms, and establish global standards for AI content moderation. Without such measures, the risk remains that advanced AI systems will continue to inadvertently amplify state-sponsored disinformation campaigns, undermining their utility as trusted information sources.


