AI Deepfakes and Misinformation Flood Social Media Amid U.S.-Iran Conflict
In the aftermath of U.S.-Israeli military strikes against Iran, which included the devastating attack on Shajareh Tayyebeh school that claimed up to 168 lives, a wave of digital misinformation has flooded social media platforms.
Experts report that fabricated content related to the conflict has accumulated hundreds of millions of views in just days. Users are sharing clips from digital flight simulators passed off as real-time operations footage, while out-of-context images of battleships and old videos of missile attacks are being repurposed to create false narratives of Iranian military dominance.
The proliferation of AI-manipulated content has grown so alarming that X (formerly Twitter) announced a policy change yesterday. The platform will now suspend users from its Creator Revenue Sharing program if they post AI-generated content depicting armed conflict without proper labeling.
“These videos are posted by anonymous accounts that tend to report on geopolitical conflicts. These are accounts that are known to NewsGuard for spreading exaggerated claims, usually from a pro-Iran perspective,” explained Sofia Rubinson, senior editor at misinformation watchdog NewsGuard.
According to recent investigations by Wired, hundreds of posts across Elon Musk’s X platform have included misleading footage, AI-manipulated images, and false claims about the scale of attacks. Many of these posts appeared immediately following missile strikes, creating an information vacuum filled with misinformation.
One post that garnered over 4 million views claimed to show ballistic missiles over Dubai but actually depicted an Iranian attack on Tel Aviv from October 2024. Another post with 375,000 impressions showed a fabricated before-and-after image of the compound belonging to assassinated Iranian leader Ali Hosseini Khamenei.
Almost all these posts came from premium subscriber accounts with blue checkmarks, including state-funded Iranian media outlets. The BBC has also identified completely AI-generated videos amassing nearly 100 million views, shared by what they describe as notorious “super-spreaders” of disinformation.
“It’s an attempt to fill this fog of war,” Rubinson noted. “It can be very overwhelming for people. They want to make sense of it, and visuals are a good way for us to process what is going on in war when we can’t comprehend the scale of these conflicts.”
A particularly troubling example occurred hours after initial reports of U.S. military strikes, when users on X began sharing an image of a sinking aircraft carrier, falsely claiming it showed a recent attack on the USS Abraham Lincoln in the Arabian Sea. The U.S. military’s Central Command quickly refuted the claim, and NewsGuard confirmed the image actually showed the intentional sinking of the USS Oriskany nearly 20 years ago. Despite this, the post by Kenyan member of parliament Peter Salasya was viewed over 6 million times.
Fear-inducing domestic content has also emerged, including an unverified list of U.S. cities allegedly targeted by Iranian sleeper cells, appearing to have been created in Apple’s Notes app.
The rapid advance of generative AI tools, combined with relaxed content moderation across social media platforms, has exacerbated this crisis. Researchers have identified a pattern in which misinformation thrives in the brief window between breaking news reports and the arrival of verified images or video.
“People now have a shorter window for the lapse between an event occurring and authentic visuals coming out of the media,” Rubinson explained.
This information void becomes fertile ground for disinformation campaigns and engagement farmers. It also reinforces conspiratorial thinking, such as the belief that mainstream media outlets are withholding information from the public.
Particularly concerning is the growing reliance on AI chatbots as real-time fact-checkers. Nearly every X post analyzed by NewsGuard included replies asking “Grok is this true?” – referring to X’s AI assistant. However, these tools have proven unreliable for breaking news verification. The BBC found that Grok erroneously verified AI-generated images depicting Iranian military movements.
Google’s AI-powered Search Summaries have also repeated misleading claims about the U.S.-Iran conflict. When NewsGuard researchers uploaded a frame from a video falsely showing the destruction of a CIA outpost in Dubai, Google’s AI summary incorrectly verified the story, despite the video actually depicting a 2015 residential fire in Sharjah.
As security experts sound alarms over these “AI information threats,” civilians and journalists in Iran are battling a near-total internet blackout. The Trump administration and Elon Musk have pushed to provide Starlink internet connections to those on the ground, while bad actors continue finding ways to circumvent blocks and spread misinformation online.
The combination of engagement-driven social platforms, advanced AI tools, and complex geopolitical conflicts has created a perfect storm for digital misinformation – one that threatens to undermine accurate public understanding of critical world events.
16 Comments
The use of AI to create fake footage and misleading narratives around the US-Iran tensions is really concerning. We need to be extremely cautious about what we see and share on social media, and always verify information from reputable news sources.
Agreed. The ability to manipulate media so convincingly is a major threat to public discourse. Platforms have to take stronger action, but we as users also need to be more vigilant about critically evaluating content, especially around sensitive geopolitical issues.
Troubling to see the spread of disinformation around the US-Iran tensions. We need to be vigilant about fact-checking content, especially from anonymous accounts pushing questionable narratives. The use of AI to manipulate media and create fake footage is a serious concern.
Agreed, the lack of transparency and accountability around these manipulated videos is really worrying. Platforms need to do more to identify and label synthetic media, and crack down on malicious actors spreading it.
Interesting to see how the platform policy changes are trying to address the spread of AI-created content related to the US-Iran conflict. Hopefully this helps curb the worst of the disinformation, but we’ll need sustained efforts to really tackle this issue.
Yes, it’s a step in the right direction, but there’s still a lot of work to be done. Platforms need to be more proactive in identifying and removing synthetic media, while users need to be more discerning about the sources they trust.
The proliferation of AI-generated content related to the US-Iran conflict is really worrying. It’s so easy for false narratives to spread and gain traction on social media. Platforms have to do more to detect and label synthetic media, but we as users also need to be more discerning consumers of online information.
Absolutely. The ability to manipulate video and audio so convincingly is a serious threat to public discourse. Platforms need to invest heavily in AI-based detection tools, while also enforcing clear policies around labeling synthetic content. But we also have to be more critical in our consumption of online information, especially around sensitive geopolitical issues.
The flood of disinformation around the US-Iran tensions is really concerning. It’s clear that bad actors are exploiting social media to push false narratives and sow confusion. Platforms need to do more to detect and remove manipulated content, while also educating users on media literacy. We all have a responsibility to think critically about what we see online.
Absolutely. The ability to create fake footage and misleading narratives using AI is a serious threat to public discourse. Platforms have to invest heavily in detection tools and clear labeling policies. But we as users also need to be more discerning about the sources we trust, and always verify information from credible news outlets.
The flood of disinformation around the US-Iran conflict is really troubling. It’s clear that bad actors are exploiting social media to sow confusion and division. We need a coordinated effort to identify and remove manipulated content, while also educating the public on media literacy.
Agreed, this is a complex challenge that requires a multi-pronged approach. Platforms need to be more proactive, but we as users also have a responsibility to think critically about what we see online and verify information from credible sources.
Disturbing to see how quickly misinformation can spread online, especially when it’s driven by anonymous accounts pushing exaggerated or false narratives. This really highlights the need for better content moderation and fact-checking on social media platforms.
Absolutely. The prevalence of AI-generated deepfakes is making it increasingly difficult to distinguish truth from fiction. Platforms need to invest more in detection and labeling of synthetic media, and users need to be more discerning about the sources they trust.
It’s alarming how quickly misinformation can spread on social media, especially around sensitive geopolitical issues. We should be wary of any content that seems too dramatic or one-sided, and always verify information from reputable sources.
Absolutely. The proliferation of AI-generated deepfakes makes it even harder to discern truth from fiction. Platforms have to step up their efforts to detect and restrict this type of manipulated content.