As attacks intensified following U.S. and Israeli bombings of Iran, a video showing crowds watching smoke and fire billowing from a high-rise building in Bahrain spread rapidly across social media platforms. Users claimed the footage depicted an Iranian missile strike, but experts quickly identified it as an AI-generated fake, deliberately circulated by accounts linked to the Iranian government to exaggerate their military successes.

The fabricated video contained several telltale signs of AI manipulation, including two vehicles that appeared unnaturally fused together and a pedestrian whose elbow impossibly passed through a backpack. The clip is just one example of the flood of misleading and manufactured content that has proliferated online since the conflict began last weekend.

“State actors produce more targeted content,” explained Melanie Smith, senior director at the Institute for Strategic Dialogue. “They follow a clear narrative structure, using videos to support specific claims about the conflict and broader geopolitical situations.”

Pro-Iranian social media accounts have consistently pushed narratives that overstate the damage and casualties inflicted by Iranian military operations, mirroring messaging from Iranian state media. This campaign has generated numerous AI-created videos purporting to show devastating airstrikes, including the falsified Bahrain high-rise incident.

Simultaneously, a Russian-aligned influence operation known as Operation Overload (also called Matryoshka or Storm-1679) has been posting videos impersonating intelligence agencies and news organizations. One example included a fake warning supposedly from Israeli intelligence, advising Israelis in Germany and the United States to avoid public spaces—a fear-inducing tactic previously deployed during election periods.

While misinformation and fabricated videos have characterized other recent conflicts, including the Russia-Ukraine and Israel-Hamas wars, experts point to a crucial difference in this situation: Iran’s internet shutdowns and strict censorship have eliminated vital perspectives from Iranian citizens that might otherwise provide context or counterpoints.

“In Ukraine, citizen accounts dramatically changed the conflict’s dynamic as the world aligned with Ukrainians displaying resilience under attack. We’re missing that narrative from Iran,” noted Todd Helmus, a senior behavioral scientist at RAND who specializes in information operations.

The problem extends beyond state-sponsored disinformation. Opportunistic social media users chasing engagement have contributed significantly to the spread of false information, repurposing old footage from previous conflicts, sharing video game clips as authentic battle footage, and creating their own AI-generated content.

The rapid advancement of AI technology has accelerated misinformation in ways that were inconceivable even a few years ago. When combined with state-backed disinformation campaigns and widespread censorship, this creates a dangerous information vacuum.

“The sheer volume of AI content is polluting the information environment during crises to a frightening degree,” Smith warned. “Accessing verified, credible information during these periods is becoming increasingly difficult.”

In response, social media platforms are beginning to implement measures against AI-generated war content. Nikita Bier, X’s head of product, announced that users posting undisclosed AI-generated content from armed conflicts will face suspension from the platform’s revenue-sharing program—90 days for first offenses and permanent removal for subsequent violations.

Emerson Brooking from the Atlantic Council’s Digital Forensic Research Lab cautions that social media platforms have become extensions of the battlefield, with users worldwide potentially becoming unwitting participants in information warfare.

“If you’re in these spaces, understand that this is an extension of the physical battle space,” Brooking advised. “Actors on all sides are actively spreading propaganda and disinformation to manipulate perceptions. Your attention and engagement are assets they’re fighting for.”

As the conflict continues, experts urge users to approach all unverified war footage with heightened skepticism, particularly content showing dramatic or explosive scenes that seem designed to provoke strong emotional responses.



© 2026 Disinformation Commission LLC. All rights reserved.