A wave of sophisticated AI-generated videos depicting Ukrainian soldiers as demoralized and ready to surrender has spread across major social media platforms, marking a troubling advancement in war-related disinformation campaigns.
The deceptive videos, which have appeared on YouTube, TikTok, Facebook, and X (formerly Twitter), represent what experts describe as an increasingly sophisticated effort to manipulate public perception of Russia’s ongoing invasion of Ukraine. While the creators remain unidentified, the videos demonstrate how rapidly AI video tools are advancing in their capacity to deceive viewers.
“False claims created using Sora are much harder to detect and debunk. Even the best AI detectors sometimes struggle,” explained Alice Lee, a Russian influence analyst with NewsGuard, a platform that tracks online misinformation. “The fact that many videos have no visual inconsistencies means that members of the public might watch through and scroll past such videos on platforms like TikTok, with no idea that the video they’ve just seen is falsified.”
OpenAI’s Sora 2, released in October, represents one of the most advanced video generators currently available, capable of creating near-perfect simulations that routinely fool viewers. While OpenAI declined to comment specifically on Sora’s use in creating misleading war footage, the company acknowledged broader concerns, stating: “Sora 2’s ability to generate hyper realistic video and audio raises important concerns around likeness, misuse, and deception.”
The company claims to have implemented safeguards, noting that “while cinematic action is permitted, we do not allow graphic violence, extremist material, or deception.” However, a NewsGuard study found that Sora 2 “produced realistic videos advancing provably false claims 80 percent of the time (16 out of 20) when prompted to do so.” Five of these false claims originated from Russian disinformation operations.
Ukraine’s Center for Countering Disinformation reported a “significant increase in the volume of content created or manipulated using AI” over the past year, specifically designed to undermine public trust and international support for Ukraine. “This includes fabricated statements allegedly made on behalf of Ukrainian military personnel or command, as well as fake videos featuring ‘confessions,’ ‘scandals’ or fictional events,” the center stated.
The timing of these videos is notable, coinciding with stalled U.S.-backed peace talks between Ukraine and Russia. Recent polling shows that about 75% of Ukrainians categorically reject Russian proposals to end the war, with 62% willing to endure the conflict for as long as necessary, despite continuing deadly Russian strikes on Ukrainian cities including Kyiv.
Independent verification revealed that NBC News journalists were able to use Sora to generate videos showing Ukrainian soldiers crying, claiming they had been forced into military service, or surrendering with white flags. One video bearing Sora’s watermark even depicted a Ukrainian soldier being shot in the head on the front line, despite OpenAI’s stated prohibition on content showing “graphic violence.”
While AI video generators typically label or watermark their creations, many of the videos analyzed had these identifiers obscured or covered with text overlays. Numerous apps and websites now offer tools specifically designed to remove AI watermarks, further complicating authentication efforts.
The videos primarily circulate on TikTok and YouTube Shorts, platforms officially banned in Russia but readily accessible to audiences in Europe and the United States. Many include emotional subtitles in various languages to reach viewers who speak neither Russian nor Ukrainian.
Both TikTok and YouTube have policies prohibiting deceptive AI-generated content. A YouTube spokesperson confirmed removing one channel flagged by NBC News but allowed two other videos to remain online with labels identifying them as AI-generated. TikTok reported that “more than 99% of the violative content we removed was taken down before someone reported it to us,” though the videos continue to circulate as reposts on X and Facebook.
“Anyone consuming content online needs to realize that a lot of what we see today in video, photos, and text is indeed AI generated,” warned Nina Jankowicz, co-founder and CEO of the American Sunlight Project. “Even if Sora introduces [safety] guardrails, in this space there will be other companies, other apps, and other technologies that our adversaries build to try to infect our information space.”
This proliferation of AI-generated war disinformation comes as more people rely on social media videos as their primary news source for global events, creating a perfect storm of technology and information warfare that threatens to distort public understanding of one of the world’s most significant ongoing conflicts.
9 Comments
This is a stark reminder of the need for robust media literacy efforts, so that people can better identify manipulated content, even if it appears convincingly realistic. Ongoing vigilance and collaboration will be crucial.
This is a worrying development in the information war surrounding the Russian invasion. Maintaining public trust in news and information sources will be critical to countering the spread of these AI-generated deceptions.
Sophisticated AI tools can enable the rapid spread of disinformation, which could undermine support for Ukraine. It’s crucial that fact-checkers and the public remain vigilant against such attempts to distort the reality on the ground.
Agreed. We must continue to call out these deceptive tactics and support initiatives to improve AI transparency and accountability.
The Russian invasion of Ukraine is a complex and rapidly evolving situation. While new technologies present challenges, I’m hopeful that the truth and facts will ultimately prevail, with the help of diligent reporting and public awareness.
The proliferation of AI-generated misinformation is a serious concern. I hope researchers and platforms can stay ahead of these evolving threats and equip the public with the tools to identify and reject manipulative content.
Yes, empowering the public to think critically about online content is key. Transparency around AI capabilities and limitations will be crucial for building that resilience.
This is concerning. Advances in AI-generated content pose serious risks for manipulating public opinion during conflicts. We need robust efforts to detect and debunk deceptive videos, especially on social media platforms.
While the technical capabilities of AI are advancing rapidly, I’m encouraged to see experts and fact-checkers working hard to expose these deceptive tactics. Maintaining a well-informed public is essential during times of conflict.