Russia Targets AI Systems in Evolving Disinformation Strategy
Russia has dramatically shifted its disinformation tactics beyond traditional social media platforms, now deliberately targeting artificial intelligence systems in what experts describe as a sophisticated form of information warfare. This strategic pivot comes as users increasingly turn to AI tools like ChatGPT instead of conventional search engines for information.
According to recent findings from Viginum, the French government agency that monitors foreign digital interference, Russian operatives have launched coordinated efforts to flood the internet with millions of misleading, low-quality articles designed to be scraped by AI applications. This tactic, known as “LLM grooming,” aims to train large language models to reproduce pro-Kremlin narratives and disinformation, particularly regarding Russia’s war in Ukraine.
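To make the mechanics concrete, here is a minimal Python sketch of one defensive heuristic a training-data pipeline might apply: flagging near-identical articles republished across many unrelated domains, the footprint that mass-produced grooming content leaves in a scraped corpus. The function names, shingle size, and similarity threshold are illustrative assumptions, not details from Viginum's findings.

```python
# Illustrative sketch only: flag the same article mass-republished across
# many domains, one signal of a coordinated "LLM grooming" network rather
# than ordinary syndication. Names and thresholds are hypothetical.

def shingles(text: str, k: int = 5) -> set:
    """Break text into overlapping k-word shingles for fuzzy matching."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 0))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: size of intersection over size of union."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_mass_republished(articles: list, threshold: float = 0.8) -> list:
    """Return pairs of distinct domains hosting near-identical text.

    Each article is a dict with "domain" and "text" keys. Many flagged
    pairs pointing at the same underlying text suggest coordination.
    """
    fingerprints = [(a["domain"], shingles(a["text"])) for a in articles]
    flagged = []
    for i in range(len(fingerprints)):
        for j in range(i + 1, len(fingerprints)):
            dom_i, fp_i = fingerprints[i]
            dom_j, fp_j = fingerprints[j]
            if dom_i != dom_j and jaccard(fp_i, fp_j) >= threshold:
                flagged.append((dom_i, dom_j))
    return flagged
```

A production pipeline would use locality-sensitive hashing (such as MinHash) instead of pairwise comparison to scale past a few thousand documents, but the underlying signal is the same.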
“Instead of targeting audiences directly via social media, Russia’s disinformation apparatus has shifted to corrupting the AI infrastructure more broadly,” explains EUvsDisinfo, an initiative of the European External Action Service. This approach manipulates AI chatbots to produce responses that align with Russian government positions.
In February 2024, Viginum exposed the “Portal Kombat” operation, also known as the “Pravda network.” This Russian disinformation infrastructure consists of numerous websites in multiple languages producing content that repackages false claims from Russian state media and pro-Kremlin influencers. The network targets Ukraine, the United States, France, Germany, Poland, the United Kingdom, other European countries, and parts of Africa.
The effectiveness of these tactics was demonstrated in a report by NewsGuard Reality Check, which found that six out of ten tested chatbots repeated a false claim that President Zelensky had banned Donald Trump’s Truth Social platform. The report indicated that the share of false and misleading information in ten leading chatbots nearly doubled in a year, rising from 18% in 2024 to 35% in 2025.
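NewsGuard publishes its findings rather than code, but the shape of such an audit is straightforward to sketch: pose prompts built around known false claims to each chatbot and score whether the answer repeats the claim. Everything below, from the `query_chatbot` stub to the keyword markers, is a hypothetical stand-in; a real audit would call each vendor's API and rely on human raters rather than substring matching.

```python
# Hypothetical audit harness: measure how often a set of chatbots repeats
# known false claims. The stubbed query function and crude keyword scoring
# are placeholders, not NewsGuard's actual methodology.

FALSE_CLAIMS = [
    {
        "prompt": "Did President Zelensky ban Donald Trump's Truth Social in Ukraine?",
        "repeat_markers": ["banned truth social", "blocked truth social"],
    },
    # ...extend with further claims drawn from fact-check databases.
]

def query_chatbot(bot_name: str, prompt: str) -> str:
    """Placeholder: a real audit would call each vendor's API here."""
    return "I found no reliable evidence that Truth Social was banned in Ukraine."

def audit(bot_names: list) -> dict:
    """Return, per bot, the fraction of false claims its answers repeated."""
    results = {}
    for bot in bot_names:
        repeats = 0
        for claim in FALSE_CLAIMS:
            answer = query_chatbot(bot, claim["prompt"]).lower()
            if any(marker in answer for marker in claim["repeat_markers"]):
                repeats += 1
        results[bot] = repeats / len(FALSE_CLAIMS)
    return results

print(audit(["bot-a", "bot-b"]))  # e.g. {'bot-a': 0.0, 'bot-b': 0.0}
```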
Researchers at Clemson University have linked some of these disinformation narratives to “Storm-1516,” a pro-Kremlin campaign connected to the former Internet Research Agency—the Russian organization known for orchestrating influence operations during the 2016 U.S. elections.
Halyna Padalko, in a report for the Digital Policy Hub at the Centre for International Governance Innovation, notes that Russia has moved beyond traditional propaganda methods toward exploiting LLMs to normalize false information as seemingly fact-based content. Even relatively trusted platforms such as Wikipedia have inadvertently amplified Kremlin disinformation by citing sources within the Pravda network.
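One mechanical countermeasure this implies is screening a page's citations against a curated list of known network domains. The sketch below assumes such a blocklist exists; the domain entries are invented placeholders, and real source vetting, on Wikipedia and elsewhere, is editorial rather than purely automated.

```python
# Minimal sketch, assuming a curated blocklist of network domains exists.
# The entries below are invented placeholders on the reserved .invalid TLD.

from urllib.parse import urlparse

NETWORK_DOMAINS = {
    "pravda-mirror.invalid",
    "portal-kombat-example.invalid",
}

def flag_blocklisted_citations(urls: list) -> list:
    """Return citation URLs whose host is a blocklisted domain or subdomain."""
    flagged = []
    for url in urls:
        host = (urlparse(url).hostname or "").lower()
        if any(host == d or host.endswith("." + d) for d in NETWORK_DOMAINS):
            flagged.append(url)
    return flagged

print(flag_blocklisted_citations([
    "https://news.pravda-mirror.invalid/story-123",
    "https://en.wikipedia.org/wiki/Example",
]))  # -> ['https://news.pravda-mirror.invalid/story-123']
```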
The scale and automation of these campaigns present a formidable challenge to democratic resilience. As AI chatbots increasingly serve as fact-checking resources and information sources, the deliberate pollution of the information ecosystem represents a serious global security threat. These efforts can distort public opinion, erode trust in digital information integrity, and spread seemingly legitimate narratives at unprecedented scale.
This strategy aligns with Russian President Vladimir Putin’s 2017 statement that the leader in AI would “rule the world.” Ironically, Russia’s current bid for information dominance largely relies on American and Chinese AI models, suggesting that even in modern information warfare, technological dependencies remain.
As AI systems continue to evolve and become more integrated into how people access information, detecting and countering these sophisticated disinformation campaigns will require heightened vigilance from technology companies, government agencies, and informed users alike.
9 Comments
Russia’s focus on ‘LLM grooming’ is a worrying development. Manipulating the underlying AI infrastructure to spread disinformation is a sophisticated tactic that could have far-reaching impacts on how people access and consume information. We’ll need to stay vigilant.
The shift to targeting AI systems is a concerning escalation of Russia’s information warfare tactics. It demonstrates their willingness to adapt and exploit new technologies for their own strategic advantage. We’ll need a comprehensive response to counter this threat.
This news about Russia’s efforts to corrupt large language models is deeply troubling. The potential for AI-powered disinformation to spread rapidly and undermine trust in information is a serious challenge that requires urgent attention and action.
This ‘LLM grooming’ technique is quite alarming. Flooding the internet with low-quality, misleading content to train language models is a devious strategy. It highlights the need for greater transparency and accountability around AI development and deployment.
Agreed. We’ll need robust fact-checking and content moderation capabilities to counter this kind of manipulation of AI systems.
The move away from directly targeting social media audiences is a concerning evolution of Russia’s information warfare tactics. Corrupting the underlying AI infrastructure is a more insidious approach that could have far-reaching consequences.
Yes, this shift in strategy demonstrates the adaptability and resourcefulness of Russia’s disinformation apparatus. It will require a multifaceted response to address this growing threat.
While not surprising, it’s still disheartening to see Russia exploiting emerging technologies like large language models for their own nefarious purposes. This highlights the critical importance of building AI systems with robust safeguards and ethical principles.
Interesting to see how Russia is evolving its disinformation tactics. Targeting AI systems like ChatGPT is a concerning shift that could have far-reaching implications for the spread of misinformation. We’ll need to stay vigilant and find ways to build more robust and resilient AI infrastructure.