Russian Disinformation Campaign Targets Ukrainian Army and Leadership Across Multiple Countries
A comprehensive monitoring report reveals Russia has orchestrated widespread disinformation campaigns targeting Ukraine across ten countries during the final quarter of 2025. The ATAFIMI project, which tracks foreign information manipulation and interference, documented systematic efforts to undermine Ukrainian military recruitment and discredit President Volodymyr Zelensky through false corruption allegations.
The most prominent disinformation narratives amassed over 2 million views, with content specifically targeting the Ukrainian Army and foreign soldiers fighting for Ukraine. False claims against Zelensky appeared in nine of the ten monitored countries, with tweets accusing him of owning a Russian passport or of buying Bill Cosby’s New York house receiving millions of views.
Researchers identified widespread use of AI-generated content to support disinformation narratives. While Facebook and, occasionally, TikTok labeled such content as manipulated, X (formerly Twitter) rarely applied Community Notes, and Telegram provided no warnings at all. Several accounts spreading disinformation on X carried “blue check” verification badges, potentially amplifying their reach through the visibility boost the platform gives paying subscribers.
“These narratives are strategically tailored to exploit regional sensitivities and concerns,” explained one of the researchers involved in the project. “In Eastern Europe, the content often plays on historical tensions, while in Latin America, it focuses on discouraging volunteers from joining the Ukrainian forces.”
In Bosnia and Herzegovina, Russian state news agency Sputnik exploited local ethnic sensitivities with false claims that Putin planned to unite Serbia and Republika Srpska, while in Colombia, disinformation focused on Colombian mercenaries allegedly being mistreated in Ukraine.
The report documents how Russian-backed disinformation employs increasingly sophisticated techniques to undermine Ukraine’s international support. One particularly effective narrative focuses on the alleged human cost of the war, pairing emotionally manipulative AI-generated videos of supposedly distressed Ukrainian soldiers with claims of forced recruitment.
These videos, designed to provoke emotional responses, were viewed millions of times across social platforms. On TikTok, videos showing “Ukrainian soldiers crying” or “expressing regret” surpassed 3 million views with no warning labels. Similar content circulated on Facebook and Telegram, with debunking labels applied inconsistently.
In Europe, the disinformation campaigns particularly targeted countries providing financial assistance to Ukraine. A Georgian Facebook post viewed nearly 2 million times featured AI-generated videos falsely depicting Ukrainian soldiers engaging in homosexual behavior, part of a narrative attempting to characterize Ukraine as incompatible with traditional values.
The ATAFIMI project involves ten organizations across Europe and Latin America, including Fundación Maldita.es (Spain), StopFake (Ukraine), Delfi (Lithuania), Myth Detector (Georgia), and others. Their collaborative methodology allows for identification of cross-border disinformation campaigns using a centralized repository of detected content.
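The mechanics of such a shared repository can be illustrated with a minimal sketch. The Python code below is a hypothetical illustration, not the project’s actual tooling: the DetectedClaim record, the token-based fingerprint, and the sample entries are all assumptions. The idea is that each partner logs detected claims into a common store, and claims whose normalized text matches across two or more countries are surfaced as candidate cross-border campaigns.

```python
# Hypothetical sketch of cross-border narrative matching over a shared
# repository of detected claims. The record layout, the fingerprinting
# heuristic, and the sample data are illustrative assumptions only.
from collections import defaultdict
from dataclasses import dataclass
import re

@dataclass
class DetectedClaim:
    country: str      # where the partner organization found the content
    platform: str     # e.g. "X", "TikTok", "Telegram"
    text: str         # the claim as logged by the fact-checker
    views: int        # approximate reach at time of detection

def fingerprint(text: str) -> frozenset:
    # Crude normalization: lowercase, keep only words of 4+ letters.
    # A production system would use multilingual claim matching instead.
    return frozenset(re.findall(r"[a-z]{4,}", text.lower()))

def cross_border_narratives(claims, min_countries=2):
    groups = defaultdict(list)
    for claim in claims:
        groups[fingerprint(claim.text)].append(claim)
    # Flag narratives observed in at least `min_countries` countries.
    return [
        group for group in groups.values()
        if len({c.country for c in group}) >= min_countries
    ]

# Illustrative repository entries (invented for this sketch).
repo = [
    DetectedClaim("Georgia", "Facebook", "Zelensky owns a Russian passport", 1_900_000),
    DetectedClaim("Spain", "X", "Zelensky owns a Russian passport", 2_100_000),
    DetectedClaim("Colombia", "TikTok", "Mercenaries mistreated in Ukraine", 500_000),
]
for narrative in cross_border_narratives(repo):
    countries = sorted({c.country for c in narrative})
    print(f"Cross-border narrative in {countries}: {narrative[0].text!r}")
```

Even this toy version captures the design choice described in the report: pooling detections in one place is what makes the same narrative visible as a coordinated campaign rather than ten unrelated local incidents.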
Social media platforms’ inconsistent approach to labeling manipulated content represents a significant challenge in combating these influence operations. While Facebook frequently labeled false content through fact-checking partnerships, Telegram offered no moderation whatsoever, and X rarely applied Community Notes despite hosting some of the most viral disinformation.
The report highlights how Russian disinformation increasingly exploits AI technologies to create compelling false content while adapting narratives to resonate with specific regional audiences. With millions of views across multiple platforms and languages, the scale and sophistication of these operations demonstrate Russia’s continued investment in information warfare as a strategic tool in its conflict with Ukraine.
12 Comments
While the scale of this disinformation campaign is alarming, I’m encouraged to see the ATAFIMI project’s efforts to track and document these activities. Rigorous monitoring and analysis are vital to countering such threats.
Agreed. Comprehensive reporting like this can help raise awareness and inform policymakers and platform leaders to develop more effective responses.
I’m curious to learn more about the specific tactics and techniques used in this disinformation campaign. Understanding the modus operandi can help develop more effective countermeasures.
Good point. The report’s mention of AI-generated content is particularly alarming. Deeper analysis of these methods could inform platform policies and digital literacy initiatives.
It’s disheartening to see how quickly false narratives can spread, especially when amplified by coordinated efforts. Building public resilience against such manipulation should be a top priority.
Absolutely. Effective fact-checking and public education campaigns are essential to empower citizens to critically evaluate information and spot malicious disinformation.
Concerning to see the scale and reach of this disinformation campaign targeting the Ukrainian military and leadership. It highlights the need for robust fact-checking and digital literacy initiatives to counter such coordinated manipulation.
You’re right, the use of AI-generated content to amplify false narratives is particularly worrisome. Platforms need to do more to identify and limit the spread of this type of manipulated content.
This report highlights the ongoing battle against foreign interference in domestic affairs. Maintaining the integrity of information ecosystems is crucial for protecting democracy and national security.
You’re right. Strengthening digital infrastructure and collaborative efforts between governments, tech platforms, and civil society will be key to mitigating these threats in the long run.
This report underscores the ongoing challenges in combating cross-border disinformation, especially when it’s backed by significant resources. Protecting democratic institutions and public discourse should remain a top priority.
Agreed. Robust international cooperation and information sharing will be crucial to effectively counter these types of coordinated influence operations in the future.