
Four Years of Russian Disinformation: From Ukraine Invasion to AI-Powered Propaganda

Since Russia’s invasion of Ukraine on February 24, 2022, disinformation campaigns have flooded European media landscapes with false narratives. According to the collaborative database #UkraineFacts, maintained by Maldita.es and over 100 fact-checking organizations worldwide, more than 3,000 examples of disinformation related to the conflict have been documented.

The evolution of Russian propaganda has been striking. Initial narratives focused on justifying the invasion, while more recent campaigns aim to discourage international support and volunteer recruitment for Ukrainian forces. Perhaps most concerning is the sophisticated integration of artificial intelligence into these efforts, creating increasingly convincing fake content.

“In the early days after the invasion, we saw primarily decontextualized images and video game footage being presented as war footage,” explains Ali Osman Arabaci, editor-in-chief of Turkish fact-checking organization Teyit. “Disinformers quickly flooded social media with these misleading visuals.”

Eight fact-checking organizations across three continents confirmed that recycling old footage was among the most prevalent early tactics. One widely debunked video purported to show Ukrainian attacks with Molotov cocktails against Russian tanks. Another frequently fact-checked clip showed a protest in Vienna recorded 20 days before the invasion, falsely used to deny Ukrainian casualties.

The justification narratives that emerged in the first months were clear and consistent. They claimed Russia was preventing a planned Ukrainian attack, defending against NATO threats, stopping an alleged “genocide” in Donbas, or destroying supposed biological laboratories. These were frequently paired with characterizations of Ukraine as a “Nazi” and “Russophobic” state, a narrative that the European External Action Service’s EUvsDisinfo project traces back to the 2013-2014 Euromaidan protests.

Ukrainian refugees, who now number nearly 5.9 million according to UNHCR data, became targets of disinformation as well. Ukrainian organization Detector Media analyzed over 35,000 social media posts about refugees between February and September 2022, finding repeated false claims that they were “destroying Europe,” receiving special “privileges,” and “refusing to work.”

By late 2025, the focus of Russian disinformation had shifted dramatically. A cross-border investigation led by Maldita.es revealed campaigns specifically designed to discourage volunteers from joining Ukrainian forces. These efforts spread false claims about massive Ukrainian casualties, forced mobilizations, and allegations that President Zelensky’s government refused to compensate families of fallen soldiers.

The campaigns have been geographically targeted, with particular emphasis on Colombia, which has contributed a significant number of foreign volunteers to Ukraine. “Every international narrative finds a ‘Colombian angle’ to gain traction locally,” explains Santiago Amaya from Colombian fact-checking organization La Silla Vacía.

This geographical focus varies by region. Daisuke Furuta of Japan Fact-Check Centre notes that “Japanese society shows less interest in the situation in Ukraine” compared to Europe. Similarly, in Turkey and Azerbaijan, public attention has largely shifted to other conflicts, including Israel-Palestine and tensions with Iran.

Throughout the conflict, Ukrainian President Volodymyr Zelensky has been a primary target of disinformation. False narratives have labeled him as a “Nazi,” “drunk,” “cocaine addict,” or someone who has “fled” the country. One persistent campaign claims Zelensky has purchased luxury properties worldwide with international aid money intended for Ukraine’s defense. This disinformation circulated in at least six languages, according to AFP analysis.

“Several narratives attempted to discredit President Zelensky by spreading false claims, including accusations of corruption, Nazism, or fabricated videos showing him involved in violence or illegal actions,” says Filipe Pardal from Portuguese fact-checker Polígrafo.

Perhaps most concerning is how artificial intelligence has transformed disinformation tactics. By 2024, according to Kharkiv University analysis, Russian information operations had fully integrated emerging technologies to enhance the credibility and reach of false content.

“AI is significantly accelerating and professionalizing disinformation related to the war by enabling faster content production and large-scale multilingual translations,” explains Ani Grigoryan from Armenian media outlet CivilNet. Alex Zamkovoi of StopFake adds that this technology allows disinformers to “clone voices, fake videos and imitate real media brands.”

A striking example emerged in late 2025 with AI-generated videos purporting to show Ukrainian soldiers crying for help, surrendering to Russian forces, or expressing regret for enlisting. One such video, showing a soldier with a Ukrainian flag pleading to avoid deployment, spread across Twitter (now X) in 13 languages. On TikTok, StopFake identified 14 AI-generated videos of fake military personnel arguing with supposed politicians that collectively garnered over 3.7 million views.

“AI facilitates the fast production of large amounts of content and its widespread dissemination, making it much more difficult for people to distinguish between what is true and what is not,” warns Pardal.

The dissemination infrastructure has evolved as well. Initially, Russian state agencies like RT spread messages denying Ukrainian nationhood and labeling Ukrainians as “Nazis.” After EU sanctions targeted these agencies, Russia deployed what Kharkiv University researchers call an “army” of social media communicators and “micro-influencer operations” to circumvent platform moderation.

Russian embassies worldwide have also amplified disinformation through their social media channels. The Russian embassy in Spain shared misleading images purporting to show humanitarian aid to Ukrainian civilians, while the embassy in South Africa spread false claims about Zelensky purchasing a house owned by King Charles III.

More recently, Russian diplomatic missions have targeted recruitment efforts. The Russian Embassy in Colombia publicly “regretted” that Colombians were believing “false promises of Ukrainian recruiters,” while the embassy in Argentina condemned a documentary about volunteers fighting for Ukraine as “blatant propaganda for mercenaryism.”

Four years after the invasion began, Russia continues its sophisticated information warfare campaign, adapting tactics and technologies to manipulate global public opinion about the conflict in Ukraine.




© 2026 Disinformation Commission LLC. All rights reserved.