Russia’s “Poisoning” Strategy Threatens to Shape Future AI Models, Expert Warns

Russian propaganda efforts have evolved beyond creating fake soldiers or isolated disinformation campaigns, according to information and communications technology expert Yuriy Antoshchuk. In a recent social media analysis, he warns of a more insidious threat: the systematic “poisoning” of artificial intelligence training data with pro-Kremlin narratives.

“There is a more strategic and long-term AI threat, not yet as visible, but one that can be observed by analogy with China’s strategy as seen in DeepSeek,” Antoshchuk explained. “This is the so-called ‘poisoning’ of AI.”

The expert draws parallels with China’s approach to AI content moderation. DeepSeek, a Chinese generative AI model, overtly censors responses regarding sensitive topics like the Chinese Communist Party, Taiwan, Tibet, and the Dalai Lama—a direct result of government control over domestic AI companies.

Western AI models like ChatGPT, Gemini, and others currently operate with less direct government influence. However, Antoshchuk argues that Russia has developed sophisticated methods to indirectly shape these systems’ outputs by manipulating their training data sources.

According to his analysis, Russian operatives create convincing replicas of respected Western media outlets—including Fox News, The Washington Post, Die Welt, and Le Point—alongside fake versions of official international organization websites like NATO. These fraudulent sites publish anti-Ukrainian and pro-Kremlin content that is subsequently amplified through networks of fake social media accounts and paid advertising campaigns.

“Large AI language models learn from open sources, articles and news,” Antoshchuk noted. “Russia fills the Internet with millions of articles that AI parsers from companies like Google and OpenAI count as credible sources for training their models.”
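
To make that mechanism concrete, the sketch below is purely illustrative: the domains and the matching heuristic are hypothetical and are not how Google, OpenAI, or any real crawler pipeline actually vets sources. It only shows why a collector that trusts any hostname containing a familiar outlet name would accept a lookalike site, while exact domain matching would reject it.

```python
# Purely illustrative sketch; the domains and the credibility heuristic are
# hypothetical assumptions, not a description of any real training pipeline.
from urllib.parse import urlparse

TRUSTED_OUTLETS = {"foxnews.com", "washingtonpost.com", "welt.de", "lepoint.fr", "nato.int"}

def naive_looks_credible(url: str) -> bool:
    """Trusts any hostname that merely contains a familiar outlet name."""
    host = urlparse(url).hostname or ""
    return any(outlet.split(".")[0] in host for outlet in TRUSTED_OUTLETS)

def strict_looks_credible(url: str) -> bool:
    """Trusts only an exact match on the registered domain."""
    host = (urlparse(url).hostname or "").removeprefix("www.")
    return host in TRUSTED_OUTLETS

real_article = "https://www.washingtonpost.com/world/some-report"
lookalike = "https://washingtonpost-news.example/fabricated-war-story"  # hypothetical spoof

print(naive_looks_credible(real_article))   # True
print(naive_looks_credible(lookalike))      # True  -- the spoof slips into the training pool
print(strict_looks_credible(lookalike))     # False -- exact matching rejects it
```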

The long-term implications are concerning. As major AI developers scrape the internet to train new generations of language models, they inadvertently incorporate this fabricated content. “As a result, future generations of chatbots will naturally reproduce Russian narratives about the war in Ukraine, including such theses as denazification or civil war,” warned Antoshchuk.

Evidence of this strategy’s effectiveness has already emerged. Antoshchuk pointed to instances where Elon Musk’s Grok AI has reproduced Russian propaganda narratives, suggesting the contamination of training data is already yielding results for the Kremlin.

The challenge is compounded by the growing proportion of AI-generated content online. “Already today around 50% (if not more) of content on the Internet is created not by humans,” Antoshchuk estimated. This creates a feedback loop where AI systems trained on internet data increasingly learn from other AI outputs, potentially amplifying embedded propaganda narratives.
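
The dynamic Antoshchuk describes can be illustrated with a toy calculation. In the sketch below, the starting 50% figure echoes his estimate, while the per-generation growth rates are arbitrary assumptions chosen only to show the direction of the trend: when AI-generated pages are added to the web faster than human-written ones, each successive training snapshot contains a larger synthetic share.

```python
# Toy model of the feedback loop described above; the growth rates are assumed
# values for illustration, not measurements of the real web.

def synthetic_share(generations: int, start_share: float = 0.5,
                    human_growth: float = 1.0, ai_growth: float = 3.0) -> list[float]:
    human, synthetic = 1.0 - start_share, start_share  # relative content volumes
    shares = [start_share]
    for _ in range(generations):
        human += human_growth        # new human-written pages per generation
        synthetic += ai_growth       # new AI-generated pages per generation
        shares.append(synthetic / (human + synthetic))
    # The share rises each generation toward ai_growth / (human_growth + ai_growth).
    return shares

for gen, share in enumerate(synthetic_share(5)):
    print(f"generation {gen}: {share:.0%} of the training pool is synthetic")
```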

Industry analysts note this problem extends beyond Russia, as various state and non-state actors recognize the strategic value of influencing AI training data. The phenomenon highlights vulnerabilities in how major AI companies source and verify their training materials.

To counter this threat, Antoshchuk emphasizes the importance of critical evaluation of information sources and rigorous monitoring of training data. AI developers face mounting pressure to implement more sophisticated content verification systems and transparency standards.

The issue raises broader questions about the future integrity of AI-powered information systems. As generative AI becomes increasingly embedded in search engines, content creation, and decision support tools, the potential societal impact of systematically biased AI responses grows significantly.

Experts in the field advocate for developing stronger content provenance tracking, enhanced detection of coordinated influence operations, and more transparent AI training practices to preserve the quality and objectivity of future artificial intelligence services.


8 Comments

  1. Elizabeth Johnson

    It’s alarming to see how authoritarian regimes are attempting to weaponize AI for their own political agendas. This underscores the urgent need for robust global governance frameworks to regulate the development and deployment of these powerful technologies.

  2. Michael Rodriguez

    This is a wake-up call for the AI research community to prioritize the development of AI systems that are resilient to such manipulation. Increased transparency, external auditing, and collaborative efforts to identify and address vulnerabilities will be key.

  3. I appreciate the expert’s insights and the clear parallels drawn with China’s approach to AI content moderation. It’s crucial that we learn from these examples and proactively address the vulnerabilities before they are further exploited.

  4. This is a complex and multifaceted issue that requires a multidisciplinary approach. Policymakers, technologists, and civil society must work together to develop comprehensive solutions that safeguard against the malicious manipulation of AI systems.

  5. I’m curious to learn more about the specific techniques Russia is using to covertly influence AI systems. Detailed case studies and technical analysis would help the public and policymakers better understand this threat and devise appropriate countermeasures.

  6. Poisoning AI training data with propaganda narratives is a devious and insidious tactic. It’s crucial that we develop robust mechanisms to detect and mitigate such attempts at shaping the outputs of language models. The integrity of these systems is essential for maintaining an informed and democratic discourse.

  7. This is a concerning development. The ability of authoritarian regimes to manipulate the training data and outputs of AI systems poses a real threat to the integrity and objectivity of future language models. We need to be vigilant and find ways to safeguard against such malicious influence.

  8. As someone interested in the future of AI, I’m deeply concerned about the potential long-term impacts of these propaganda efforts. We must act now to protect the integrity of language models and ensure they remain reliable, objective, and beneficial to society.
