Listen to the article
Russian Propaganda Found in One-Fifth of AI Chatbot Responses on Ukraine, Study Reveals
Russian propaganda appears in approximately one in five artificial intelligence chatbot answers about the Ukraine conflict, according to a comprehensive new study by the British think tank Institute of Strategic Dialogue (ISD).
Researchers posed more than 300 questions to leading AI platforms—OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, and Deepseek’s V3.2—in five different languages, using neutral, biased, and deliberately malicious phrasing to analyze response patterns.
The findings reveal a troubling tendency for AI systems to exhibit “confirmation bias,” with chatbots mirroring users’ language choices and subsequently drawing from questionable sources. Russian propaganda was particularly prevalent when questions contained loaded language or misleading premises.
“When users frame questions in certain ways, these AI systems appear primed to respond in kind, often by pulling from problematic information sources,” said one of the researchers involved in the study, whose name wasn’t specified in the report.
The investigation found that OpenAI’s ChatGPT provided three times more Russian sources when responding to biased or malicious prompts compared to neutral questions about the conflict. Meanwhile, Grok delivered the highest number of Russian-backed sources even when faced with neutrally worded queries.
Each platform demonstrated distinct vulnerabilities. In two specific instances, Deepseek provided four links to Russian-backed sources in a single response—the highest concentration observed in the study. These sources included VT Foreign Policy, which the report identifies as a distributor of content from known Russian propaganda operations such as Storm-1516 and the Foundation to Battle Injustice, alongside state media outlets Sputnik and Russia Today.
Grok showed a concerning tendency to quote Russia Today journalists directly, often linking to their social media posts as authoritative sources. The researchers noted this approach “blurs the lines between overt propaganda and personal opinion” and raises serious questions about AI systems’ ability to identify sanctioned state media content when it appears via third-party channels.
Google’s Gemini demonstrated the strongest safeguards, refusing to engage with some maliciously framed prompts and instead delivering warnings about “inappropriate or unsafe” content. However, researchers criticized Gemini for often failing to disclose its information sources, making verification challenging.
The study uncovered specific topic areas where Russian sources appeared most frequently. Questions about Ukraine’s military recruitment practices were particularly problematic, with 40 percent of Grok’s responses and over 28 percent of ChatGPT’s answers citing at least one Russian source. Both platforms provided Kremlin sources in 28.5 percent of their responses to various Ukraine-related queries.
Conversely, questions about war crimes and Ukrainian refugees triggered fewer Russian-backed sources across all four platforms, suggesting more robust information ecosystems around these topics.
The researchers theorize that AI systems gravitate toward Russian sources when confronting what experts call “data voids”—search terms lacking high-quality information from mainstream sources. As the American think tank Data and Society has documented, these voids typically emerge around obscure topics or breaking news situations where credible journalism hasn’t yet established a footprint.
The findings come amid growing concern about AI systems’ vulnerability to manipulation and the spread of disinformation, particularly regarding geopolitical conflicts. Technology companies face mounting pressure to implement more rigorous source verification and build stronger safeguards against the amplification of state propaganda.
The study underscores the complex challenges facing AI developers as they attempt to create systems that provide accurate information while remaining responsive to user queries—a balance that appears increasingly difficult to maintain in contentious geopolitical contexts.
Verify This Yourself
Use these professional tools to fact-check and investigate claims independently
Reverse Image Search
Check if this image has been used elsewhere or in different contexts
Ask Our AI About This Claim
Get instant answers with web-powered AI analysis
Related Fact-Checks
See what other fact-checkers have said about similar claims
Want More Verification Tools?
Access our full suite of professional disinformation monitoring and investigation tools


13 Comments
This is concerning, but not entirely surprising. AI models can be susceptible to reflect the biases present in their training data. Careful monitoring and oversight is crucial to ensure these systems don’t amplify harmful propaganda.
Agreed. Developers need to be vigilant about testing for and mitigating these issues during the training and deployment stages.
I wonder what steps the tech companies are taking to address this problem. Transparency around their models’ performance and mitigation strategies would be helpful for users to understand the risks.
That’s a good point. More public disclosure from AI providers on their efforts to counter the spread of misinformation would be welcome.
The use of AI to spread propaganda is a concerning development. Maintaining the integrity and trustworthiness of these systems should be a top priority for the tech industry.
This study highlights the importance of critical thinking when engaging with AI-generated content. Users should always verify information from authoritative sources, rather than blindly accepting chatbot responses.
Absolutely. AI should be viewed as a tool to assist research, not a sole source of truth. Discernment is key.
While the findings are disturbing, I’m curious to learn more about the specific methodologies and data sources used in the study. Robust academic scrutiny will be important to fully understand the scope of this issue.
I’m curious to see if this issue is limited to the chatbots examined in the study, or if it’s a more widespread problem across the AI landscape. Broader industry-wide assessments may be warranted.
Good point. The findings likely have broader implications that warrant further research and coordinated action by AI developers and regulators.
This study underscores the delicate balance between the benefits and risks of AI technology. While chatbots can be useful, their potential to amplify disinformation is troubling and deserves close attention.
It’s concerning that AI chatbots could be contributing to the spread of Russian propaganda. Developers need to prioritize safeguards to prevent their systems from being weaponized in this way.
Agreed. Proactive measures to detect and filter out misinformation should be a core part of the design and testing process for these AI models.