AI-Generated Fake Videos Flood Social Media Amid Iran Conflict
A tidal wave of artificial intelligence-generated videos and images has inundated social media platforms during the initial weeks of the conflict in Iran, creating a chaotic information environment that blurs the line between reality and fiction.
The New York Times has identified more than 110 unique AI-generated visuals circulating online in just the past two weeks. These fabrications span every aspect of the hostilities, falsely depicting catastrophic scenes that never occurred: massive explosions in Israeli cities, decimated streets that were never attacked, and protests by soldiers that never took place.
These fake visuals have garnered millions of views across major platforms like X (formerly Twitter), TikTok, and Facebook, with countless more impressions through private messaging apps popular throughout the Middle East and globally.
“Even compared to when the Ukraine war broke out, things now are very different,” explains Marc Owen Jones, associate professor of media analytics at Northwestern University in Qatar. “We’re probably seeing far more AI-related content now than we ever have before.”
The Times identified these AI fabrications by scrutinizing content for telltale signs of artificial generation, including buildings that don’t exist in reality, garbled text elements, and movements that defy physical expectations. Investigators also checked for embedded digital watermarks and verified findings using multiple AI detection tools while cross-referencing with legitimate news coverage.
The flood of fake content has become particularly valuable to Iran as an information warfare tool. Tehran appears to be leveraging these fabrications to undermine public tolerance for the conflict by showcasing exaggerated scenes of devastation across the region. According to research by Cyabra, a social media intelligence firm, most AI videos related to the conflict promote pro-Iranian narratives, often falsely demonstrating Iranian military superiority.
“The use of AI images of places in the Gulf — being burnt or damaged — becomes more important in Iran’s playbook,” Jones noted, “because it allows them to give a sense that this war is more destructive and maybe more costly for America’s allies than it might actually be.”
One of the most widely shared fabrications depicts a shaky handheld video supposedly shot from a Tel Aviv apartment balcony, showing the skyline bombarded with missiles as an Israeli flag prominently hangs in the foreground. This video accumulated millions of views and was amplified by social media influencers and fringe news websites. Experts point to the flag’s inclusion as a classic sign of AI generation, as these tools typically add national symbols when asked to create content about specific countries.
The contrast between authentic and AI-generated war footage is striking. Genuine combat footage tends to be more subdued, often showing missile strikes from a distance as little more than distant lights against the night sky, with explosions appearing primarily as smoke plumes rather than dramatic fireballs. Bystanders typically begin filming only after an impact occurs.
By comparison, the AI fabrications portray combat like Hollywood action sequences, featuring enormous mushroom cloud explosions, sonic booms rippling across cityscapes, and fictional “hypersonic” missiles leaving luminous trails. Some manipulators have even enhanced actual footage using AI to make explosions appear more devastating.
The USS Abraham Lincoln aircraft carrier became a particular focus of this disinformation campaign. After Iran's Islamic Revolutionary Guard Corps Navy suggested on March 1 that it had successfully attacked the vessel, social media was flooded with AI-generated images showing the ship or similar vessels ablaze. Iranian users celebrated these fabrications as evidence their counteroffensive was successfully challenging the U.S.-Israeli alliance, despite later U.S. confirmation that the attack failed and the ship remained unharmed.
Some AI creations make no attempt to conceal their artificial nature, instead functioning as digital propaganda that visualizes political narratives. These include both flattering depictions of leaders as powerful figures and dehumanizing portrayals of opposition figures.
A particularly disturbing collection of AI videos reimagined the attack on Shajarah Tayyebeh elementary school, which was destroyed by an apparent errant U.S. missile strike on February 28, killing at least 175 people, mostly children, according to Iranian officials. These fabricated videos portrayed school girls playing outside before being attacked by American fighter jets.
Despite the growing problem, social media companies have taken limited action against this flood of fake content, which began overwhelming platforms last year after OpenAI released Sora, a video-generation application that made realistic fabrications easy to produce.
While many AI tools embed both visible and invisible watermarks labeling content as artificial, these markers are easily removed or obscured. The Times found that very few of the examined videos contained such identifiers.
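To illustrate why these markers are so fragile, the sketch below uses a deliberately crude byte-scan heuristic (a hypothetical stand-in, not a real verifier — genuine C2PA "Content Credentials" checks require parsing the embedded manifest and validating its cryptographic signatures) to show how simply dropping a file's metadata, as re-encoding or screenshotting does, erases the label while leaving the visual content intact:

```python
def has_c2pa_marker(data: bytes) -> bool:
    """Crude heuristic: look for the C2PA label in raw file bytes.
    Real verification parses the manifest and checks signatures;
    this only detects whether an intact label is still present."""
    return b"c2pa" in data.lower()


def strip_metadata(data: bytes) -> bytes:
    """Simulate what re-encoding or screenshotting does: keep the
    pixel payload but drop ancillary metadata (modeled here as
    everything after a 'METADATA' separator in our mock file)."""
    return data.split(b"METADATA")[0]


# A mock "image": pixel bytes plus an appended C2PA metadata segment.
image = b"\xff\xd8...pixels..." + b"METADATA" + b"urn:c2pa:manifest"

print(has_c2pa_marker(image))                  # True: label present
print(has_c2pa_marker(strip_metadata(image)))  # False: gone after stripping
```

The asymmetry this models is the practical problem: embedding the provenance label is easy for the generator, but any lossy hop through the sharing pipeline silently removes it, which is consistent with The Times finding identifiers in very few of the examined videos.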
X recently announced it would suspend accounts from receiving platform revenue for 90 days if they posted unlabeled AI-generated content depicting armed conflict, aiming to eliminate financial incentives for spreading falsehoods. However, research by Cyabra indicates that many Iranian-linked accounts appear more focused on strategic messaging than monetary gain.
“This is a natural front for Iran to try and exploit and it feels like this is one of the reasons it is so voluminous,” said Valerie Wirtschafter, a Brookings Institution fellow studying foreign policy and AI. “It’s actually a tool of war.”