Russian Propaganda Campaign Uses AI to Spread Divisive Content in Ukraine
A new wave of AI-generated videos spreading Russian propaganda has emerged on social media, according to Ukraine’s Center for Countering Disinformation (CCD), which operates under the National Security and Defense Council. The center identified the videos as part of a sophisticated campaign targeting Ukrainian society.
The videos feature computer-generated characters purporting to be Ukrainian members of parliament, police officers, and military personnel. According to the CCD’s Facebook announcement, these digitally created personas are deliberately rendered as caricatured, overweight figures to mock Ukrainian officials and servicemembers.
“These AI-generated videos are designed to appear as satirical Ukrainian content, but their actual purpose is far more sinister,” a CCD representative explained. “They aim to sow division within Ukrainian society by targeting sensitive topics that could provoke hostility toward military personnel, law enforcement, and government officials.”
Security experts note that this represents an evolution in Russia’s information warfare tactics. Rather than creating obviously false news reports, these videos use humor and satire as vehicles to deliver propaganda messages, making them more likely to be shared organically through social networks.
The CCD warned that artificial intelligence technologies now enable Russia to produce such content at an unprecedented scale. This mass production capability allows propagandists to flood social media platforms with seemingly harmless entertainment that carries underlying divisive messages.
“What makes these videos particularly dangerous is how they’re packaged,” said Natalia Kostenko, a digital security analyst at Kyiv-based Digital Shield Initiative. “By disguising propaganda as humor, they can bypass people’s natural defenses against misinformation. Viewers might share what they think is just a funny video without recognizing the manipulative content embedded within it.”
This latest disinformation campaign appears designed to exploit existing tensions within Ukrainian society during wartime. By targeting relationships between civilians and military or government officials, the content aims to weaken internal cohesion at a time when national unity is crucial for Ukraine’s defense efforts.
The CCD emphasized that the ultimate goal of such content is twofold: to destabilize Ukraine internally and to demoralize Ukrainian society as it continues to resist Russian aggression.
This is not the first time Russian propagandists have deployed sophisticated technological tools in their information operations. The CCD also recently exposed a circulating fake video that falsely claimed British intelligence service MI6 had reported that “80% of terrorist attacks in France over the past year were linked to Ukrainian refugees.” This fabricated claim appears designed to damage Ukraine’s international relationships, particularly with key European allies.
Digital security experts recommend that social media users exercise heightened vigilance when encountering humorous or satirical content related to sensitive political topics. Signs of potential AI-generated propaganda include unusually exaggerated physical features of subjects, narratives that provoke strong emotional reactions against specific groups, and content that seems designed to inflame existing social tensions.
Social media platforms have struggled to moderate such content effectively, particularly material in Ukrainian and other languages that receive fewer content moderation resources than English.
As AI technology continues to advance, disinformation campaigns are expected to become increasingly sophisticated, presenting significant challenges for organizations like the CCD that work to identify and counter such threats before they can significantly influence public opinion.
10 Comments
The Russian government’s use of AI to spread divisive content in Ukraine is a disturbing development. We must be vigilant in identifying and debunking this kind of sophisticated misinformation.
Absolutely, these AI-generated videos are a worrying sign of the evolving tactics used in information warfare. Kudos to the CCD for working to expose this threat.
While the use of AI to create propaganda videos is alarming, I’m glad the Center for Countering Disinformation is actively working to expose and combat these tactics. Fact-checking and media literacy are key to combating this threat.
I agree, the CCD’s efforts to identify and call out these AI-generated propaganda videos are essential. Maintaining a well-informed public is crucial to resisting information warfare.
While the use of AI for propaganda is alarming, I’m glad to see the Ukrainian government taking steps to counter these tactics. Exposing and debunking this kind of content is crucial for preserving democracy.
I agree, the CCD’s efforts are commendable. Combating AI-generated disinformation requires a multi-pronged approach of public awareness, fact-checking, and strong defensive measures.
It’s concerning to see Russia’s continued efforts to spread disinformation through AI-generated propaganda videos. We must remain vigilant against these sophisticated tactics aimed at undermining Ukrainian society.
You’re right, these AI-generated videos are a worrying development. It’s crucial that Ukrainians are aware of this threat and can identify such manipulative content.
As someone with an interest in mining and energy, I’m concerned about how this AI-generated propaganda could impact those industries in Ukraine. It’s critical that the public has access to factual, reliable information.
You raise a good point. Disinformation could have serious consequences for Ukraine’s mining and energy sectors. Maintaining transparency and truth is essential for the health of those industries.