AI Algorithms: The New Battleground for Propaganda
Russian propagandists are increasingly targeting artificial intelligence algorithms as a powerful new vector for shaping public opinion, security experts warn. With over 100 million people now using AI chatbots daily, and OpenAI’s ChatGPT commanding more than half that market, the battleground for information warfare is evolving rapidly.
A growing number of users are turning to these AI tools for news and analysis. According to a Pew Research Center survey conducted last August, approximately one in ten Americans now gets news from chatbots. Even more concerning, a quarter of users now prefer AI chatbots to traditional search engines, while 72 percent report using Google’s AI Overview feature that appears at the top of search results.
French authorities first sounded the alarm in February 2024 when Viginum, the government’s department for countering foreign digital interference, exposed a network of 193 websites pushing pro-Kremlin narratives across Europe. Dubbed “Portal Kombat,” the operation is run by TigerWeb, a Crimea-based firm headed by Yevgeny Shevchenko, who previously worked for Crimea Technologies—a company that maintains official regional government websites.
One year later, the American Sunlight Project published its own investigation into the same network, rebranding it the “Pravda Network.” Researchers discovered these sites produce approximately three million articles annually in multiple languages while attracting almost no genuine traffic or social media following. The content is unoriginal, consisting mainly of material aggregated and republished from pro-government Russian blogs and media outlets.
The conclusion drawn by both investigations is stark: the network’s primary purpose is not to persuade human readers but to manipulate AI systems by contaminating their training datasets with propaganda.
This technique has earned its own term: “LLM grooming.” By flooding the open web with coordinated falsehoods, these actors gradually convince continually updated models that the falsehoods are factual and widely corroborated. The manipulation can occur both during initial pre-training and after public release, as models continue to ingest new material from the internet.
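To see why coordinated flooding distorts a model’s sense of corroboration, consider a toy sketch in Python (every page, URL, and threshold below is invented for illustration, and real training pipelines use far more sophisticated deduplication): a naive pipeline that counts how many pages repeat a claim treats a network of mirrors as that many independent sources, while even crude near-duplicate detection collapses them back into one.

```python
def shingle_fingerprint(text: str, k: int = 5) -> frozenset:
    """Represent a document as the set of its k-word shingles."""
    words = text.lower().split()
    return frozenset(" ".join(words[i:i + k]) for i in range(len(words) - k + 1))

def jaccard(a: frozenset, b: frozenset) -> float:
    """Overlap between two shingle sets (0 = unrelated, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical scraped pages: three coordinated mirrors and one independent report.
pages = {
    "pravda-clone-1.example": "officials confirmed the stolen jewels were found at the aides home",
    "pravda-clone-2.example": "officials confirmed the stolen jewels were found at the aides home today",
    "pravda-clone-3.example": "the stolen jewels were found at the aides home officials confirmed",
    "independent.example":    "police say the museum theft investigation has produced no suspects yet",
}

# Greedily cluster pages whose shingle overlap exceeds a similarity threshold.
fingerprints = {domain: shingle_fingerprint(text) for domain, text in pages.items()}
clusters = []
for domain, fp in fingerprints.items():
    for cluster in clusters:
        if jaccard(fp, fingerprints[cluster[0]]) > 0.5:
            cluster.append(domain)
            break
    else:
        clusters.append([domain])

# Naive counting sees 4 corroborating pages; deduplication sees 2 real sources.
print(f"{len(pages)} pages -> {len(clusters)} independent sources: {clusters}")
```

The gap between those two counts is exactly the signal LLM grooming tries to exploit: repetition dressed up as independent corroboration.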
To ensure their content ranks highly in retrieval-augmented systems, Pravda Network sites are aggressively SEO-optimized. Investigators have also documented systematic insertion of links to these domains into Wikipedia articles—an established technique that artificially inflates perceived authority in the eyes of web crawlers and language models alike.
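On the retrieval side, the corresponding defense is conceptually simple. The sketch below follows a generic retrieval-augmented pattern (the domains, ratings, and threshold are all hypothetical, not any vendor’s actual API): retrieved documents are screened against a domain-reliability table before they ever reach the model’s prompt.

```python
from urllib.parse import urlparse

# Hypothetical reliability ratings (0 = untrustworthy, 1 = highly reliable),
# e.g. licensed from a ratings service or maintained in-house.
DOMAIN_RATINGS = {
    "reuters.example": 0.95,
    "pravda-clone-7.example": 0.05,
}
DEFAULT_RATING = 0.5   # score assigned to domains that were never rated
MIN_RATING = 0.6       # documents below this bar never reach the prompt

def filter_retrieved(docs):
    """Drop retrieved documents whose source domain fails the reliability bar.

    docs: iterable of (url, text) pairs produced by an upstream retriever.
    """
    kept = []
    for url, text in docs:
        domain = urlparse(url).netloc
        if DOMAIN_RATINGS.get(domain, DEFAULT_RATING) >= MIN_RATING:
            kept.append((url, text))
    return kept

retrieved = [
    ("https://reuters.example/story", "wire copy ..."),
    ("https://pravda-clone-7.example/story", "laundered propaganda ..."),
    ("https://unknown-blog.example/post", "unrated commentary ..."),
]
# With a strict default, only the rated reliable source survives.
print(filter_retrieved(retrieved))
```

Whether unknown domains default above or below the threshold is the key design choice: a strict default blunts freshly registered mirror sites, at the cost of discarding legitimate long-tail sources.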
Russia is not alone in this effort. Reports indicate that Chinese models like DeepSeek reliably echo Beijing’s official positions on sensitive topics while avoiding discussion of events like Tiananmen Square or Taiwan’s status. Unlike China, however, Russia has yet to release a globally competitive sovereign model, so Moscow’s primary strategy has been to poison the data ecosystem used by Western chatbots.
Testing conducted in March 2025 by U.S.-based disinformation monitor NewsGuard found that ten major chatbots frequently reproduced Pravda Network falsehoods. Across hundreds of prompts, the bots relayed the network’s disinformation in 34 percent of cases, refused to answer in 18 percent, and debunked it in only 48 percent. More concerning, 56 out of 450 responses contained direct links to Pravda sites.
A follow-up study in late 2025 by researchers from universities in Manchester and Bern painted a somewhat less alarming picture. Testing four models (ChatGPT, Gemini, Copilot, and Grok), they recorded overt propaganda in just 5 percent of answers and links to Kremlin-aligned sources in 8 percent—with most models now flagging such domains as unreliable.
The discrepancy between studies likely stems from methodological differences. The European researchers found that chatbots were more likely to draw on material from pro-Kremlin sites when answering questions that had received little coverage elsewhere. This suggests propaganda is particularly effective at filling information voids—when reliable data is scarce, algorithms have little choice but to fall back on whatever is available, even if the sources are questionable.
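The mechanism is easy to reproduce in miniature. In the toy retriever below (the corpus, reliability scores, and word-overlap ranking are all invented for the sketch), reliability never enters the ranking, so a sparsely covered topic gets answered from the only pages that exist, however dubious.

```python
CORPUS = [
    # (text, reliability) where reliability is a hypothetical 0..1 source score
    ("ukrainian refugees resettlement statistics europe", 0.9),
    ("ukrainian refugees aid programs overview", 0.8),
    ("nato peace talks secret plan exposed", 0.1),  # the topic's only coverage
]

def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap; reliability plays no role."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(text.split())), rel, text) for text, rel in corpus]
    scored.sort(reverse=True)
    return [(text, rel) for overlap, rel, text in scored[:k] if overlap > 0]

# Well-covered topic: reliable pages win simply because they exist and match.
print(retrieve("ukrainian refugees", CORPUS))
# Data void: the sole match is the low-reliability page, so it gets surfaced.
print(retrieve("nato peace talks", CORPUS))
```

This is why flooding a void pays off for the operators: on queries nobody else has covered, there is nothing else for the ranker to prefer.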
The Institute for Strategic Dialogue (ISD) confirmed this vulnerability in its own investigation, finding that 18 percent of generated responses showed traces of pro-Kremlin narratives or direct links to relevant sources. Questions about NATO or Ukraine peace talks elicited links to pro-Kremlin resources far more often than queries about Ukrainian refugees.
Independent testing shows that even today, some AI systems remain vulnerable to such manipulation. When presented with a fabricated story claiming that jewelry stolen from the Louvre was found during an anti-corruption raid at the home of a Ukrainian presidential associate, responses varied widely. Some platforms uncritically “confirmed” the fabrication by citing propaganda sources, while others clearly identified it as likely disinformation.
As AI technologies advance rapidly, models will likely become better at filtering out unreliable sources. Recent responses from some platforms demonstrate this is technically feasible. However, LLM manipulation techniques will also continue evolving, creating an ongoing cat-and-mouse game between AI developers and those seeking to manipulate public opinion through these increasingly influential information channels.
Experts emphasize that until more robust safeguards are implemented, everyone using chatbots—whether professionally or in everyday life—should treat their responses with caution and healthy skepticism, particularly on politically sensitive topics.