Russia Deploys AI-Generated Deepfakes to Undermine Ukrainian Resolve

Artificial intelligence has emerged as a potent weapon in Russia’s information war against Ukraine, with dozens of fabricated videos depicting Ukrainian soldiers surrendering or in distress circulating widely across social media platforms in November, according to an investigation by AFP.

The synthetic videos, showing soldiers in Ukrainian uniforms crying, begging to avoid combat, or retreating from strategic locations like Pokrovsk, have garnered millions of views across TikTok, Instagram, Telegram, Facebook, and X before platform moderators could intervene.

Security experts view these deepfakes as part of a coordinated Russian effort to erode Ukrainian morale as Moscow struggles to make significant battlefield advances. The timing coincides with Russia’s intensified military campaign around Pokrovsk, where the Kremlin has concentrated one-third of all frontline clashes and half of its glide bomb strikes.

“It’s an old tactic, but the technology is new,” explained Ian Garner, a Russian propaganda specialist at the Pilecki Institute. “They’re chipping away at Ukrainian morale, saying: ‘Look, this is somebody just like you, it could be your brother, your father.’”

Despite their widespread distribution, many videos contain telltale signs of artificial generation. In one example, a supposed soldier walks normally while wearing a leg cast, and visual glitches show a stretcher levitating with disembodied legs fading in and out of the background—characteristic imperfections of current AI video technology.

Some videos openly display the logo of OpenAI’s Sora video generation tool, while others have misappropriated the identities of real individuals. Russian YouTube content creator Alexei Gubanov, now living in exile, discovered his likeness used in propaganda videos.

“Obviously it’s not me,” Gubanov stated in a YouTube response. “Unfortunately, a lot of people believe this… and that plays into the hands of Russian propaganda.”

The AFP investigation revealed a sophisticated cross-platform distribution network pushing these videos in multiple languages, including Greek, Romanian, Bulgarian, Czech, Polish, and French. Content appeared not only on social media but also on established media outlets, including a Russian weekly publication and a Serbian tabloid.

When contacted by AFP, TikTok confirmed it had removed accounts responsible for distributing the fake content, though not before one video accumulated over 300,000 likes and several million views. Other platforms have taken similar actions, though often after significant exposure.

The European Digital Media Observatory, an EU-funded fact-checking initiative, has published more than 2,000 articles addressing Ukraine war disinformation since Russia’s full-scale invasion in 2022. The organization notes AI-generated content has become increasingly prevalent in recent months.

Pablo Maristany de las Casas from the Institute for Strategic Dialogue placed these videos within a “broader narrative that we’ve seen since the beginning of the invasion” that falsely portrays Ukrainian President Volodymyr Zelenskyy as forcibly conscripting civilians to fight.

Technology companies are struggling to stay ahead of these disinformation campaigns. An October study by the Institute for Strategic Dialogue found that nearly one-fifth of responses from popular AI chatbots cited Russian state-attributed sources, suggesting Kremlin narratives have already infiltrated AI systems themselves.

While OpenAI told AFP it had conducted an investigation into the misuse of its technology, it provided no specific details about countermeasures. Security analysts remain concerned that platform responses lag behind the evolving threat.

“The scale and impact of information warfare outpace the companies’ responses,” warned Maristany de las Casas.

Carole Grimaud, a researcher at Aix-Marseille University, emphasized that while measuring the precise impact of individual videos remains difficult, the cumulative effect of repeated exposure can gradually shift public perception.

“These videos instrumentalize uncertainty to sow doubt in public opinion,” Grimaud said. “When this message is repeated consistently, it’s possible that people’s perceptions change.”


