Fake Videos Flood Social Media as Iran Conflict Intensifies
As tensions escalate following U.S. and Israeli military operations against Iran, a sophisticated disinformation campaign has emerged on social media platforms worldwide. One widely shared video shows crowds gazing up at a burning high-rise building allegedly in Bahrain, with smoke and debris visible from the structure’s upper floors.
The footage, claiming to document an Iranian missile strike, spread rapidly across multiple platforms. However, closer inspection reveals telltale signs of artificial intelligence manipulation—including physically impossible elements like two vehicles appearing fused together and a pedestrian’s elbow passing through a backpack.
This AI-generated forgery represents just one example in a wave of fabricated content circulating since hostilities began last weekend. Intelligence experts have identified much of this material as originating from accounts linked to the Iranian government, designed to exaggerate military successes and inflate casualty figures among their adversaries.
“Content from state actors tends to be better targeted,” explains Melanie Smith, senior director of policy and research on information operations at the Institute for Strategic Dialogue. “They follow a clear narrative structure, using videos to support specific messaging about the conflict and broader geopolitical situation.”
Iranian state media has reinforced these narratives, contributing to the proliferation of AI-generated videos depicting supposed air strikes and infrastructure damage. Simultaneously, a Russia-aligned influence campaign dubbed “Operation Overload” (also known as Matryoshka or Storm-1679) has been creating content impersonating intelligence agencies and news organizations.
One particularly concerning example involved a fabricated warning falsely attributed to Israeli intelligence, advising Israeli citizens in Germany and the United States to avoid public spaces—a psychological tactic previously deployed during election periods to undermine public confidence and influence behavior.
While misinformation has been prevalent in other recent conflicts, including Russia-Ukraine and Israel-Hamas, experts note a critical difference in the current situation: stringent Iranian censorship and internet restrictions have severely limited authentic civilian perspectives from reaching global audiences.
“In Ukraine, citizen journalism created a powerful narrative that changed the entire dynamic of the conflict, as the world aligned with Ukrainians showing resilience in the face of attacks. We’re missing that story from Iran,” says Todd Helmus, senior behavioral scientist at RAND specializing in irregular warfare and information operations.
Beyond state-sponsored campaigns, opportunistic content creators seeking engagement and advertising revenue have contributed significantly to the problem. These accounts repurpose footage from unrelated conflicts, share video game clips presented as combat footage, and deploy their own AI-generated content.
The rapid advancement of artificial intelligence technology has dramatically transformed the misinformation landscape compared to conflicts from even a few years ago. When combined with state-directed disinformation and censorship, these developments create an environment where accurate information becomes increasingly difficult to identify.
“The volume of AI content is polluting the information environment in crisis settings to a terrifying degree,” Smith warns. “The ability to access verified, credible information during these events is becoming harder and harder.”
Social media platforms have begun implementing countermeasures. Nikita Bier, X’s head of product, announced the platform will suspend users from revenue-sharing programs if they share undisclosed AI-generated content related to armed conflicts—imposing 90-day suspensions for first offenses and permanent bans for repeat violations.
Emerson Brooking, director of strategy at the Atlantic Council’s Digital Forensic Research Lab, cautions that social media has effectively become a battlefield extension, with users thousands of miles from physical conflict zones still vulnerable to manipulation.
“If you’re in these spaces, understand that this is an extension of the physical battlespace,” Brooking advises. “Actors on all sides are actively spreading propaganda and disinformation to convince you of falsehoods. Your attention is an asset they’re fighting to control.”