Misinformation Floods Social Media as US-Iran Conflict Escalates
Before the dust had settled on the ruins of the Shajareh Tayyebeh school, destroyed in recent U.S.-Israeli military strikes against Iran that killed up to 168 civilians, social media was already awash with misinformation. Digital flight simulator clips masquerading as real-time operations footage, recycled images of battleships, and outdated videos of missile attacks were repurposed to craft narratives of Iranian military dominance.
According to digital misinformation experts, these false or misleading posts accumulated hundreds of millions of views within days of the strikes, prompting X (formerly Twitter) to modify its policies on AI-generated content. The platform announced it will now suspend users from its Creator Revenue Sharing program if they post AI-generated content depicting armed conflict without appropriate labeling.
The rapid spread of misinformation reflects a growing ecosystem of bot networks and engagement farming accounts competing for clicks, views, and influence. While some operators seek political and social leverage, others are motivated purely by financial gain. Meanwhile, users vulnerable to confirmation bias and increasingly dependent on digital news sources continue to fall victim to these deceptive practices.
“The proliferation of digital misinformation represents a dangerous convergence of technology and geopolitics,” said Dr. Maya Henderson, a digital media researcher at Columbia University. “What was once merely clickbait has evolved into a politically fraught battleground with real-world consequences.”
Recent investigative reporting by Wired documented hundreds of posts across X featuring misleading footage, photos, and AI-manipulated content. One post, garnering more than 4 million views, purported to show ballistic missiles over Dubai but actually depicted an Iranian attack on Tel Aviv from October 2024. Another, with 375,000 impressions, displayed a fabricated before-and-after image of the compound of assassinated Iranian leader Ali Hosseini Khamenei.
These posts predominantly came from premium subscriber accounts with verification checkmarks, including state-funded Iranian media outlets, highlighting how platform verification systems can inadvertently lend credibility to misinformation.
TikTok has seen similar issues, with the BBC identifying AI-generated videos accumulating nearly 100 million views. Some of these videos have been linked to Russian influence operations, demonstrating the international dimensions of this information warfare.
“Visuals are a good way for us to process what is going on in war when we can’t comprehend the scale of these conflicts,” explained Sofia Rubinson, senior editor at the misinformation watchdog NewsGuard. Its investigations found that posts falsely claiming Iranian victories had garnered at least 21.9 million views on X alone.
In one notable example, hours after initial reports of U.S. military strikes in Iran, users began circulating an image that allegedly showed the USS Abraham Lincoln sinking in the Arabian Sea. U.S. Central Command quickly denied the claim, and NewsGuard confirmed that the image actually showed the intentional sinking of the USS Oriskany nearly two decades earlier. Despite this, the false claim was shared by numerous “news” accounts and even a member of Kenya’s parliament, amassing over 6 million views.
The acceleration of generative AI technology and the relaxation of content moderation across major platforms have exacerbated the online misinformation crisis. Researchers have observed a troubling pattern: as users grow impatient for verified information during breaking news events, the brief window between initial reports and confirmed visual evidence becomes fertile ground for disinformation.
“People now have a shorter window for the lapse between an event occurring and authentic visuals coming out of the media,” Rubinson noted. This impatience creates what experts call “information voids” that bad actors quickly fill with fabricated content, often reinforcing conspiratorial thinking about mainstream media.
The problem is compounded by the increasing reliance on AI assistants as real-time fact-checkers. NewsGuard researchers observed that nearly every misleading X post they analyzed included replies asking X’s AI chatbot Grok to verify the content. However, these AI tools have proven unreliable at verifying breaking news, with the BBC finding instances where Grok erroneously authenticated AI-generated images of Iranian military movements.
Google’s AI-powered Search Summaries have similarly spread misinformation. When researchers uploaded frames from a video falsely claiming to show the destruction of a CIA outpost in Dubai, Google’s AI summary appeared to verify the story, despite the footage actually showing a 2015 residential fire in Sharjah.
Security experts at the UK’s Centre for Emerging Technology and Security have warned that, without direct intervention from platforms and governments, these “AI information threats” may pose existential dangers to public safety, national security, and democratic processes.
Meanwhile, civilians and journalists in Iran are struggling against a near-total internet blackout; efforts by the Trump administration and Elon Musk to provide Starlink internet connections have offered only limited relief. Even as legitimate users face connectivity challenges, bad actors continue to find ways through the digital blockade, perpetuating the cycle of misinformation.