Listen to the article
Russia has exploited ‘data voids’ to sneak falsehoods into AI-generated content about Ukraine, according to new research that raises significant concerns about the reliability of leading artificial intelligence platforms.
A comprehensive study by the Institute of Strategic Dialogue (ISD) reveals that major AI chatbots—including OpenAI’s ChatGPT, Google’s Gemini, Chinese-developed DeepSeek, and Elon Musk’s Grok—have been serving users pro-Kremlin narratives and quoting sanctioned Russian state media in their responses about the Ukraine invasion.
The research found that Russia has strategically exploited “data voids”—periods when fresh, reliable information is scarce in search results—to inject misinformation into AI systems. This tactic allows Kremlin-backed narratives to appear alongside legitimate sources, potentially misleading millions of users.
“It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU,” said Pablo Maristany de las Casas, the ISD analyst who led the study.
The researchers conducted a methodical investigation by testing 300 questions across a spectrum from neutral to what they termed “malicious”—queries designed to elicit responses supporting conspiracy-like assertions. Topics spanned NATO involvement, Ukrainian refugees, military recruitment, peace negotiations, and alleged war crimes.
To ensure comprehensive results, the team gathered responses in multiple languages—English, French, Spanish, German, and Italian—using fresh user accounts for each query to prevent any bias from personalization algorithms.
The findings revealed that nearly one in five AI responses cited sources connected to Russia’s state communications or intelligence-linked disinformation networks. These included Sputnik Globe, Sputnik China, RT (formerly Russia Today), EADaily, and the Strategic Culture Foundation—many of which are officially sanctioned by the European Union.
Even neutral queries triggered Russian state-linked citations in just over 10% of responses, while biased queries increased that rate to 18%, demonstrating how question framing can significantly influence AI outputs.
Each platform displayed distinct vulnerabilities. ChatGPT topped the leaderboard for most frequently citing Russian sources and showed the strongest sensitivity to biased phrasing. Musk’s Grok tended to amplify Kremlin narratives circulating on social media, while DeepSeek occasionally flooded users with Kremlin-attributed content. Google’s Gemini performed somewhat better, frequently displaying safety warnings, though it wasn’t entirely immune to the problem.
The timing of this research is particularly significant given the EU’s regulatory stance. Since the February 2022 invasion, the EU has sanctioned at least 27 Russian media entities, accusing them of distorting facts and attempting to destabilize Europe. Yet these same outlets continue to appear as sources in AI platforms widely used across the continent.
The scale of potential exposure is substantial. ChatGPT alone reached an estimated 120.4 million average monthly users in the EU from April through September 2023, potentially qualifying it for “Very Large Online Platform” status under EU digital regulations—a designation that brings additional oversight responsibilities.
When contacted about the findings, platform responses varied significantly. OpenAI spokesperson Kate Waters stated the company enforces guardrails to prevent disinformation, including “content linked to state-backed actors,” while suggesting the report focuses primarily on the system’s search functionality rather than evidence of model manipulation.
xAI, Musk’s company behind Grok, provided a notably dismissive three-word statement: “Legacy Media Lies.”
Meanwhile, a spokesperson for the Russian Embassy in London rejected the criticism, claiming Moscow “opposes any attempts to censor or restrict content on political grounds” and suggesting that efforts to limit Russian outlets deprive people of “independent opinions.”
The ISD’s findings align with multiple reports of a widespread Russian operation known as “Pravda” that has allegedly placed millions of propaganda items online with the explicit goal of “poisoning” large language models from within.
Maristany de las Casas argues that solutions must go beyond simply removing problematic sources. “It’s not only an issue of removal, it’s an issue of contextualizing further to help the user understand the sources they’re consuming, especially if these sources are appearing amongst trusted, verified sources,” he said.
The research highlights the growing challenge of maintaining information integrity in the rapidly evolving AI landscape, particularly around geopolitical conflicts where information warfare plays a crucial strategic role.
Verify This Yourself
Use these professional tools to fact-check and investigate claims independently
Reverse Image Search
Check if this image has been used elsewhere or in different contexts
Ask Our AI About This Claim
Get instant answers with web-powered AI analysis
Related Fact-Checks
See what other fact-checkers have said about similar claims
Want More Verification Tools?
Access our full suite of professional disinformation monitoring and investigation tools


8 Comments
I’m curious to learn more about the specific techniques Russia has used to infiltrate AI systems with its propaganda. Understanding their tactics will be crucial for developing effective countermeasures.
This is deeply concerning. If major AI chatbots are amplifying Kremlin propaganda, it could mislead countless people. We need robust safeguards to ensure AI systems don’t become vectors for disinformation, especially on sensitive geopolitical issues.
While the scale of this issue is worrying, I’m glad researchers are shining a light on it. Bringing transparency to AI’s vulnerabilities is the first step toward more robust and trustworthy language models.
Exactly. Proactive, multifaceted efforts by tech companies, researchers, and policymakers will be essential to prevent AI from being hijacked for the dissemination of disinformation.
It’s alarming that AI chatbots are quoting sanctioned Russian media sources. This highlights the urgent need for stronger content moderation and bias mitigation in large language models. Responsible AI development should be a top priority.
Agreed. Chatbots should be designed to distinguish reliable sources from propaganda, and be proactively prevented from amplifying disinformation, especially around high-stakes geopolitical events.
This is a sobering reminder of the potential downsides of AI if not carefully deployed. Responsible AI governance must be a priority to avoid these chatbots becoming unwitting conduits for hostile state narratives.
Exploiting ‘data voids’ to inject pro-Russia narratives is a very worrying tactic. I hope researchers and tech companies can find effective ways to detect and block the spread of such sanctioned propaganda through AI-generated content.