As tensions escalate in the Middle East, Iran has deployed a sophisticated digital influence campaign utilizing artificial intelligence to project military strength and sow confusion among global audiences. The campaign represents a significant evolution in modern information warfare, blending real military actions with fabricated content designed to appear authentic.

Iranian state entities have been circulating AI-generated videos and images that purport to show successful military operations against Western targets. These digital forgeries have gone viral across social media platforms before fact-checkers or government officials could issue corrections, reaching millions of viewers worldwide.

One prominent example involved a synthetic video depicting a strike on Tel Aviv that gained widespread attention before being identified as fake. Even advanced AI systems like Elon Musk’s Grok failed to flag the content as fabricated, highlighting the growing sophistication of these deceptive tactics.

The campaign operates through a coordinated network of state-aligned media outlets, including the Tehran Times, working alongside foreign propaganda channels such as Russia Support. This cross-border collaboration creates a resilient ecosystem that amplifies false narratives across multiple platforms and jurisdictions.

“The goal is not merely to mislead the public about one event or another. It is to sow confusion, erode trust in legitimate reporting, and project an image of military capability that does not exist,” explained one security analyst familiar with the operations.

While Iran has conducted genuine retaliatory strikes against U.S. bases in the region, social media platforms have simultaneously been flooded with unverified footage showing bombings and battlefield scenes that never occurred. Many videos have subsequently been identified as AI-generated or repurposed from previous conflicts.
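Recycled battlefield footage is often caught by matching frames against archives of imagery from earlier conflicts using perceptual hashes, which stay stable under re-encoding and brightness changes. A minimal "average hash" sketch of that idea (illustrative only: `average_hash` and `hamming` are hypothetical names, and real fact-checking pipelines hash video keyframes with dedicated libraries rather than raw pixel lists):

```python
def average_hash(pixels, size=8):
    """Downsample a grayscale matrix (at least size x size) into
    size x size blocks, then emit one bit per block: 1 if the block
    is brighter than the mean of all blocks."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            # Average the rectangle of source pixels mapped to this cell.
            r0, r1 = r * h // size, (r + 1) * h // size
            c0, c1 = c * w // size, (c + 1) * w // size
            block = [pixels[y][x] for y in range(r0, r1) for x in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    # Pack the per-cell bits into a single integer fingerprint.
    return sum(1 << i for i, v in enumerate(cells) if v > mean)

def hamming(a, b):
    """Count differing bits; a small distance suggests the same image."""
    return bin(a ^ b).count("1")
```

Because each bit is thresholded against the image's own mean, a uniformly brightened copy hashes identically, while an unrelated image lands many bits away; an archive of such fingerprints can be queried in milliseconds.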

On March 2, a particularly striking example emerged when Iranian state media circulated AI-generated footage of a burning skyscraper in Bahrain. The content was shared by @TehranTimes79, a verified account with links to the Iranian government. Despite being debunked, the video had already garnered millions of impressions.

In another incident, a Russian-operated account shared fabricated images purporting to show a downed U.S. B-2 bomber and captured Delta Force personnel. These images reached over one million views before being removed, with Iranian outlets including Tehran Times helping to amplify the false content across their networks.

Security experts suggest Iran’s increasing reliance on digital disinformation reveals both strategic adaptation and underlying weakness. Facing superior conventional military capabilities from adversaries like Israel and the United States, Tehran has shifted toward psychological warfare tactics that can be deployed inexpensively through widely available AI tools.

“By weaponizing AI to exploit the fog of war, Tehran is not just targeting military assets, but the very concept of objective truth,” said a cybersecurity researcher who tracks state-sponsored disinformation campaigns. “The line between fact and fiction becomes increasingly blurred with each technological advancement.”

The campaign also serves domestic purposes, allowing the Iranian regime to portray itself as militarily powerful to its own citizens while creating an atmosphere of fear and uncertainty in neighboring countries.

This approach mirrors similar tactics employed by Russia and China, which have increasingly incorporated AI-generated content into their geopolitical influence operations. As AI technology continues to advance, detecting and countering such fabrications becomes more challenging for governments, media organizations, and technology platforms alike.

Experts warn that the ultimate danger extends beyond individual false claims to a broader erosion of trust in information systems. As synthetic media becomes increasingly indistinguishable from authentic content, audiences may grow cynical enough to dismiss even accurate reporting as fabricated.

“In this new era of conflict, the ability to verify information has become just as critical as the ability to defend airspace,” noted one defense analyst. “The battlefield now extends to our information ecosystem and our capacity to discern reality.”


9 Comments

  1. This is a concerning development. The use of AI to create disinformation and propaganda is a worrying trend that undermines trust and public discourse. Fact-checking and media literacy will be crucial to combat these manipulative tactics.

  2. This is a stark reminder of the evolving nature of information warfare in the digital age. The ability of state actors to leverage AI to create convincing yet fabricated content is a significant challenge that requires a coordinated global response.

  3. The article raises important questions about the geopolitical implications of this type of AI-driven disinformation campaign. It will be critical for international organizations and governments to work together to address these emerging threats to global security and stability.

    • Absolutely. Combating AI-powered disinformation requires a multifaceted approach, including strengthening media literacy, improving content moderation, and advancing AI safety and ethics research. The stakes are high, and the international community must respond accordingly.

  4. William White

    The Iran case demonstrates the need for greater transparency and accountability around the use of AI in media and communications. Policymakers and technology companies must work together to develop effective frameworks for regulating and mitigating the risks of AI-driven disinformation.

  5. This article underscores the importance of robust fact-checking and media verification processes, especially when it comes to content depicting military or geopolitical events. The public needs to be vigilant and rely on authoritative and trustworthy sources of information.

  6. Michael Martin

    While the use of AI in disinformation campaigns is concerning, it also highlights the need for continued innovation and development in the field of AI. Advancing AI capabilities, including in the area of content detection and verification, will be crucial to staying ahead of these threats.

  7. Mary T. Lopez

    Iran’s deployment of AI-generated content to bolster its military image and sow confusion is a troubling escalation of information warfare. It highlights the need for greater transparency and accountability around the use of AI in media and communications.

    • Isabella Williams

      Agreed. The sophisticated blending of real and fabricated content makes it increasingly difficult for the public to discern truth from fiction. This raises serious questions about the responsible development and use of AI technology.


A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.