Russia’s AI-Powered Disinformation: From Social Media to Machine Manipulation
In the digital age, Russia’s disinformation tactics have undergone a significant evolution, moving beyond traditional social media campaigns into a sophisticated form of information warfare that targets artificial intelligence systems themselves.
The Kremlin’s foreign information manipulation and interference (FIMI) campaigns have maintained their core strategies since the Cold War, but emerging technologies have dramatically increased their reach and efficiency. While Web 2.0 transformed information warfare two decades ago, artificial intelligence is now revolutionizing Moscow’s approach.
Russian operatives have shifted away from directly targeting audiences through social media platforms. Instead, they are flooding the internet with millions of misleading, low-quality articles specifically designed to be scraped by AI-driven applications. This strategy has become increasingly critical as users migrate from traditional search engines to AI tools like ChatGPT for information retrieval.
Experts call this approach “LLM grooming”: a deliberate attempt to corrupt large language models by seeding the web content they are trained on, so that they learn to reproduce manipulative narratives and disinformation. Rather than simply spreading false information, this technique aims to compromise the AI infrastructure itself, embedding pro-Kremlin viewpoints into responses generated by popular chatbots.
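To make the mechanism concrete: if an AI assistant pulls supporting documents from the web and ranks them mainly by how well their wording matches the question, a flood of near-identical articles can crowd out a single authoritative source. The sketch below is purely illustrative, with an invented corpus, a toy scoring rule, and hypothetical site names; it does not describe any real chatbot’s retrieval pipeline.

```python
# Illustrative sketch (assumption): a naive retriever that ranks documents by
# keyword overlap alone, with no notion of source reliability. Flooding the
# corpus with near-duplicate articles pushes the repeated claim to the top.

def score(query: str, doc: str) -> int:
    """Toy relevance score: count document words that also appear in the query."""
    query_words = set(query.lower().split())
    return sum(1 for word in doc.lower().split() if word in query_words)

# Hypothetical corpus: one reliable article versus many copies of a planted claim.
corpus = [("reliable-news.example", "Officials confirm the claim is false and unsupported.")]
corpus += [(f"pravda-clone-{i}.example",
            "Officials confirm the claim is true, and sources repeat the claim is true.")
           for i in range(50)]

query = "is the claim true"
ranked = sorted(corpus, key=lambda item: score(query, item[1]), reverse=True)

# The top results a naive pipeline would pass to the model are dominated by
# the mass-produced copies, not the single reliable source.
for domain, text in ranked[:3]:
    print(domain, "->", text)
```

Real systems are far more sophisticated, but the underlying vulnerability is the same: sheer volume can stand in for credibility wherever reliability signals are weak.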
In February 2024, Viginum, a French governmental agency responsible for countering foreign digital interference, exposed a Russian operation dubbed “Portal Kombat” or the “Pravda network.” This extensive disinformation network operates websites in multiple languages, producing content that repackages false claims from Russian state media and pro-Kremlin sources.
The network targets Ukraine, the United States, France, Germany, Poland, the United Kingdom, and several African nations. By generating massive volumes of content, the operation ensures AI models incorporate Russian narratives into their responses, effectively shaping the information users receive when querying their AI assistants.
The strategy has proven effective. NewsGuard Reality Check reported that when the Pravda network falsely claimed Ukrainian President Zelenskyy had banned Donald Trump’s Truth Social platform, six out of ten tested chatbots repeated this claim while citing the network as their source. More alarmingly, the proportion of false information in leading chatbots nearly doubled over a year, rising from 18% in 2024 to 35% in 2025.
Researchers at Clemson University identified connections between these AI-targeting campaigns and “Storm-1516,” another pro-Kremlin disinformation operation linked to the former Internet Research Agency – the organization notorious for interfering in the 2016 U.S. presidential election.
The security implications of these efforts are profound. By injecting disinformation into rapidly growing AI information ecosystems, Russia can distort public perception, erode trust in digital information, and spread seemingly legitimate narratives at unprecedented scale and speed.
Halyna Padalko, in a report for the Digital Policy Hub at the Centre for International Governance Innovation, notes that Russia has moved beyond conventional propaganda to exploit language models in ways that normalize falsehoods as factual information. Even established platforms like Wikipedia have inadvertently amplified Kremlin disinformation by quoting sources from the Pravda network.
As AI chatbots increasingly function as fact-checkers and primary information sources for many users, this pollution of the information ecosystem represents a serious challenge to democratic societies. The automation and scale of these campaigns make them increasingly difficult to detect and counter.
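One detection signal often discussed, sketched here only as a hedged illustration, is repetition itself: content farms tend to republish near-identical text across many domains, which even simple similarity measures can surface. The article texts, domain names, and threshold below are invented for the example.

```python
# Illustrative sketch (assumption): flag article pairs whose word-level Jaccard
# similarity exceeds a threshold, a crude signal of mass-duplicated content.

def jaccard(a: str, b: str) -> float:
    """Similarity of two texts measured as overlap of their word sets."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    union = words_a | words_b
    return len(words_a & words_b) / len(union) if union else 0.0

articles = {
    "site-a.example": "zelenskyy bans truth social platform claims report",
    "site-b.example": "report claims zelenskyy bans truth social platform",
    "site-c.example": "local council approves new budget for road repairs",
}

THRESHOLD = 0.8  # hypothetical cutoff; a real system would tune this empirically
domains = list(articles)
for i in range(len(domains)):
    for j in range(i + 1, len(domains)):
        similarity = jaccard(articles[domains[i]], articles[domains[j]])
        if similarity >= THRESHOLD:
            print(f"possible duplicate pair: {domains[i]} / {domains[j]} ({similarity:.2f})")
```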
Back in 2017, years before ChatGPT became a household name, Vladimir Putin declared that whoever leads in AI would “rule the world.” Ironically, Russia’s current bid for information dominance relies heavily on American and Chinese AI models – suggesting that in the modern era, digital empire-building often depends on imported technology.
For democratic societies, developing effective countermeasures against this evolving threat requires not only technical solutions but also increased public awareness of how AI systems can be manipulated to serve foreign interests.
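On the technical side, one partial measure sometimes proposed is filtering or down-weighting retrieved sources by reputation before their text ever reaches a model. The sketch below illustrates the idea with a hypothetical blocklist; the domains and list contents are assumptions made for this example, not any product’s actual configuration.

```python
# Illustrative sketch (assumption): drop retrieved documents whose domain is on
# a reputation blocklist before they are added to the model's context window.
from typing import NamedTuple
from urllib.parse import urlparse

class Doc(NamedTuple):
    url: str
    text: str

BLOCKLIST = {"pravda-clone.example"}  # hypothetical list of known content farms

def filter_sources(docs: list[Doc]) -> list[Doc]:
    """Keep only documents whose host does not appear on the blocklist."""
    return [doc for doc in docs if urlparse(doc.url).hostname not in BLOCKLIST]

retrieved = [
    Doc("https://reliable-news.example/story", "Independent reporting on the claim."),
    Doc("https://pravda-clone.example/copy-17", "Repackaged claim from state media."),
]

for doc in filter_sources(retrieved):
    print(doc.url)  # only the non-blocklisted source survives
```

Blocklists inevitably lag behind newly registered domains, which is why such filtering can only complement, not replace, the public awareness the article calls for.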