Russia’s disinformation campaigns have found a new frontier in artificial intelligence, as the Kremlin actively works to manipulate large language models (LLMs) worldwide. Security experts have uncovered sophisticated efforts to embed false narratives into popular AI platforms like ChatGPT, effectively weaponizing these systems for information warfare.

According to a report by EUvsDisinfo, Russian operatives are systematically “grooming” LLMs, seeding the material the models learn from so that they replicate manipulative narratives and disinformation, particularly regarding the war in Ukraine. This strategy represents a significant evolution beyond traditional social media disinformation campaigns, targeting the AI systems increasingly used by millions for information retrieval.

Viginum, the French government agency that monitors foreign digital interference, recently identified a network codenamed “Pravda” that churns out vast quantities of low-quality content, essentially rephrasing false statements from Russian media and pro-Kremlin sources. The operation specifically targets Ukraine, the United States, European nations, and several African countries with fabricated narratives designed to advance Russian interests.

“The sheer volume of generated content ensures that AI systems incorporate these Russian disinformation narratives when formulating responses to user queries,” explained one security researcher familiar with the investigation. “It’s a form of digital pollution designed to contaminate the information ecosystem.”
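To make the “digital pollution” mechanism concrete, here is a minimal, illustrative sketch (not drawn from the report) of how lightly rephrased copies of the same claim can be flagged in a scraped corpus before it reaches a training pipeline. The function names, thresholds, and sample texts are assumptions for illustration only.

```python
# Illustrative sketch only: a toy check for the kind of "digital pollution"
# described above, where many lightly rephrased copies of the same claim
# inflate its apparent frequency in a scraped corpus. Names, thresholds,
# and sample texts are assumptions, not part of any real training pipeline.
from itertools import combinations


def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles for a document, ignoring case and punctuation."""
    words = [w.strip(".,!?;:") for w in text.lower().split()]
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


def flag_near_duplicates(docs, threshold: float = 0.5):
    """Return (i, j, similarity) for every pair of documents above the threshold."""
    sets = [shingles(d) for d in docs]
    flagged = []
    for i, j in combinations(range(len(docs)), 2):
        sim = jaccard(sets[i], sets[j])
        if sim >= threshold:
            flagged.append((i, j, sim))
    return flagged


if __name__ == "__main__":
    corpus = [
        "Officials claim the incident was staged by Western media outlets.",
        "The incident was staged by Western media outlets, officials claim.",
        "Local farmers report an unusually good harvest this season.",
    ]
    for i, j, sim in flag_near_duplicates(corpus):
        print(f"docs {i} and {j} are near-duplicates (similarity {sim:.2f})")
```

Real ingestion pipelines rely on far more scalable variants of this idea, such as MinHash-based deduplication, but the principle is the same: if a claim appears thousands of times in barely altered form, a model trained on that corpus is more likely to repeat it.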

The effectiveness of these operations is becoming increasingly evident. NewsGuard’s Reality Check system found that six out of ten tested chatbots repeated false claims originating from the “Pravda” network. More alarming still, the proportion of Russian disinformation appearing in leading chatbots nearly doubled from 18% in 2024 to 35% in 2025, suggesting these tactics are gaining traction.

Intelligence agencies have linked these activities to the pro-Kremlin “Storm-1516” campaign, widely believed to be a rebranded continuation of the infamous “Internet Research Agency” that interfered in the 2016 U.S. presidential election. This connection underscores the strategic importance Russia places on these digital influence operations.

Even platforms generally considered reliable sources, such as Wikipedia, have not proven immune to infiltration. Experts have documented cases where “Pravda” network content has been inserted into reference materials that are then used to train AI systems, creating a troubling multiplier effect for false information.

“What makes these campaigns particularly dangerous is their scale and automation,” said a cybersecurity expert who requested anonymity due to the sensitivity of their work. “Traditional fact-checking methods simply can’t keep pace with the volume of AI-generated misinformation.”

The timing of this escalation coincides with the growing role of AI in fact-checking and information verification, creating a paradoxical situation where the very tools designed to combat misinformation become vectors for its spread.

Despite Moscow’s claims of leadership in artificial intelligence development, security researchers note that Russia’s capabilities largely rely on American and Chinese models. The Kremlin has shown pragmatism in leveraging imported software and technologies to expand its influence in the digital sphere.

For democratic societies, this new dimension of information warfare presents complex challenges. Unlike traditional propaganda, AI-embedded disinformation is difficult to trace to its source and can persist in systems long after initial detection. The automated nature of these operations allows them to adapt and evolve rapidly, making countermeasures difficult to implement.

As AI systems become more deeply integrated into everyday information seeking, the potential impact of these manipulations extends beyond geopolitical tensions to potentially influence public opinion on critical issues ranging from elections to public health crises.

Experts warn that without robust safeguards and increased transparency in how AI systems are trained and operated, this form of digital manipulation will likely continue to undermine trust in information systems and pose significant threats to democratic resilience worldwide.

13 Comments

  1. Oliver Martinez

    The report on Russia’s disinformation campaigns using AI and chatbots is a stark reminder of the evolving threats to our information landscape. We must redouble our efforts to counter these deceptive tactics.

    • Michael Jackson

      Agreed. Safeguarding the integrity of our online discourse against such manipulative practices should be a top priority for policymakers, tech companies, and civil society.

  2. Linda Martinez

    Disinformation campaigns leveraging advanced technologies like AI pose a significant challenge to democratic societies. We must remain vigilant and strengthen our defenses against these manipulative tactics.

  3. The report on Russia’s efforts to embed false narratives into popular AI platforms is deeply concerning. This represents a worrying escalation in the battle against online disinformation.

    • Michael Hernandez

      Absolutely. The use of large language models to amplify and spread fabricated content is a concerning development that demands a robust and coordinated response from the international community.

  4. The use of AI and chatbots in disinformation campaigns is a worrying development that underscores the need for robust digital literacy and critical thinking skills among the public. We must be vigilant in the face of these deceptive tactics.

    • Oliver Martinez

      Absolutely. Strengthening media literacy and fact-checking initiatives will be crucial in empowering citizens to navigate the increasingly complex and manipulative online landscape.

  5. William Martin

    The revelation that Russian operatives are systematically training large language models to spread false narratives is deeply troubling. This represents a significant escalation in the global information war.

  6. The use of AI and chatbots in disinformation operations is a troubling evolution of information warfare. It’s crucial that we develop effective countermeasures to protect the integrity of online discourse.

    • Absolutely. The targeting of AI systems used by millions for information retrieval is particularly alarming and requires a coordinated global response to address this threat.

  7. Manipulating AI chatbots to advance geopolitical interests is a disturbing tactic. It highlights the need for greater transparency and accountability in the development and deployment of these powerful technologies.

  8. This is a concerning development. Weaponizing AI chatbots to spread disinformation is a sinister tactic that could have far-reaching consequences. We must be vigilant in identifying and combating these manipulative efforts.

    • Patricia Garcia

      Agreed. The sheer volume of generated content is alarming and highlights the need for robust fact-checking and media literacy initiatives to counter these deceptive campaigns.
