AI Fakes Flood Social Media During Iran War, Experts Warn of Growing Threat
Social media platforms are awash with sophisticated fake videos and images purporting to show scenes from the conflict between Israel and Iran, marking a dangerous evolution in wartime disinformation that experts say is increasingly difficult to detect.
Unlike the crude fakes that circulated during Russia's 2022 invasion of Ukraine, which were often simply mislabeled footage from video games or movies, today's deceptive content leverages advanced artificial intelligence tools that weren't widely available even a few years ago.
“Ten years ago, there’d be like one or two fake things out there; they’d get debunked pretty fast,” said Hany Farid, a University of California, Berkeley professor specializing in digital forensics. “Now you see hundreds of them, and they’re really realistic. It’s not just realistic, it’s landing—it’s landing hard. People believe it and they’re amplifying it.”
The rapid proliferation of these convincing fakes has generated tens of millions of views across social media platforms in the weeks since hostilities began. Shayan Sardarizadeh, a senior journalist with BBC Verify who specializes in debunking war-related misinformation, notes the significant technological shift that has occurred.
“What has changed in the last year or so is that generative AI has become much more widely accessible,” Sardarizadeh explained. “It’s now possible to create very believable videos and images appearing to show a significant war incident that is hard to detect to the untrained or naked eye.”
Among the most widely circulated fakes identified by experts are videos showing fictional Iranian missile barrages striking Tel Aviv, panicked civilians fleeing a nonexistent airport attack, and captured U.S. special forces being held at gunpoint by Iranian troops—none of which actually occurred.
Other AI-generated content includes purported security camera footage of Iranian military facilities being destroyed, fabricated images of burning U.S. diplomatic and military facilities in Saudi Arabia, Iraq, and Bahrain, and even staged scenes of Iranian Supreme Leader Ali Khamenei lying dead under rubble.
Several of these sophisticated fakes have been traced to pro-Iranian social media accounts engaged in propaganda efforts. The motivations behind many others remain unclear: some may exist simply because the technology makes them easy to produce, while others appear aimed at chasing views and influence.
The problem is compounded by what experts describe as an increasingly difficult information environment. Media fragmentation, partisan polarization, and algorithmic echo chambers mean many users primarily see content shared by like-minded individuals. Meanwhile, major social media companies have scaled back content moderation efforts.
“The content is more realistic, the volume is higher, the penetration is deeper—this is our new reality. And it’s really messy,” Farid said.
Some platforms have announced limited countermeasures. X (formerly Twitter) stated last week that paid content creators who share AI-generated videos of armed conflicts without proper disclosure will face suspension from the platform’s payment program. However, this policy affects only a tiny fraction of users, and the platform’s “community notes” fact-checking system has proven inconsistent.
Compounding the problem, X's own AI chatbot, Grok, has reportedly given erroneous answers when users asked it to fact-check fake content. Neither TikTok nor Meta (parent company of Facebook and Instagram) responded to media requests about their approach to combating war-related AI fakes.
For consumers, distinguishing real from fake has become increasingly challenging. Traditional tell-tale signs of AI generation, such as extra fingers or misplaced limbs on human figures, have largely disappeared as the technology has improved.
“The best way to remain accurately informed is to make a choice to get your news from credible journalistic outlets instead of scrolling through posts from random accounts on social media,” Farid advised. “In moments of global conflict, this is not a place to get information.”
For those who do consume news through social media, experts recommend several verification strategies: examine content carefully for visual inconsistencies; search for assessments from reputable fact-checkers; consider skeptical user comments; and utilize AI-detection tools, though these are not foolproof.
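To illustrate the last of those strategies, here is a minimal, illustrative Python sketch of one weak automated signal: checking whether an image file carries camera EXIF metadata. The filename and the use of the Pillow imaging library are assumptions for the example, and the check is explicitly not a verdict; many AI-generated images lack camera metadata, but metadata can just as easily be stripped from real photos or forged onto fake ones.

# Minimal, illustrative sketch (not a definitive detector): flags images
# that lack camera EXIF metadata. Absence of metadata is a weak signal at
# best, since it is trivially stripped or forged. Requires the Pillow
# library (pip install Pillow); the filename below is a placeholder.
from PIL import Image
from PIL.ExifTags import TAGS

def camera_metadata_present(path):
    """Return True if the image declares a camera make or model in EXIF."""
    exif = Image.open(path).getexif()
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    return bool(named.get("Make") or named.get("Model"))

if __name__ == "__main__":
    path = "suspect_image.jpg"  # hypothetical filename for illustration
    if camera_metadata_present(path):
        print("Camera metadata found; this is not proof of authenticity.")
    else:
        print("No camera metadata; common in AI output, but also in screenshots.")

Dedicated detection services go much further, analyzing pixel-level statistics and compression artifacts, but as the experts quoted above caution, even those tools remain fallible.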
Despite these precautions, Sardarizadeh offers a sobering assessment of where things are headed: “It is becoming extremely difficult to detect AI-generated content, and the trajectory appears to be heading in the direction of it becoming even more difficult soon.”
As AI technology continues to evolve, the challenge of distinguishing truth from fiction in wartime reporting is likely to grow more complex, demanding increased vigilance from platforms and users alike.
13 Comments
The explosion of AI-generated misinformation during this conflict is a disturbing development. Staying informed from reliable sources and being cautious about unverified claims online will be essential.
Agreed. The ability of these fakes to spread so quickly and gain traction is really worrying. We all need to be much more vigilant about the information we consume and share online.
Wartime is always a breeding ground for misinformation, and the rise of AI-powered fakes is taking that to a new level. Verifying sources and being skeptical of sensational claims will be crucial in this environment.
This is a really disturbing development. The proliferation of convincing AI-powered misinformation during conflicts like this is a serious threat to accurate information and public understanding.
The flood of AI-powered misinformation during this conflict is deeply concerning. Fact-checking, media literacy, and a critical eye towards online content will be essential in combating this threat.
The increasing sophistication of these AI-generated fakes is a worrying trend. Fact-checking, media literacy, and a healthy skepticism towards unverified claims will be essential going forward.
The rapid spread of AI-generated fakes during this conflict is a stark reminder of the challenges we face in the digital age. Staying vigilant, verifying sources, and relying on trusted media will be crucial.
The rapid evolution of AI tools is making it harder than ever to detect fake content online. This highlights the urgent need for better detection methods and public education around digital media verification.
Absolutely, this is a major challenge for social media platforms and the public. Staying vigilant and relying on trusted, fact-based sources will be key to navigating this landscape.
This is really concerning. The spread of AI-generated misinformation during conflicts is a growing threat we need to take seriously. Fact-checking and media literacy will be crucial to combating these sophisticated fakes.
This surge in AI-powered misinformation is a major challenge that will require a multi-pronged approach to address. Collaboration between platforms, fact-checkers, and the public will be crucial.
Absolutely. No single entity can solve this problem alone. It’s going to take a concerted, coordinated effort to develop effective detection tools and education campaigns to combat the spread of these sophisticated fakes.
It’s alarming to see how quickly these AI-generated fakes can spread and gain traction. Fact-checking and media literacy efforts will be critical to curbing the impact of this growing threat to accurate information.