Iran Escalates Digital Deception Tactics in Ongoing Conflict with U.S.

The Iranian regime has intensified its disinformation efforts during the current conflict with the United States, employing sophisticated methods including artificial intelligence to manipulate public opinion both domestically and internationally, according to U.S. officials and cybersecurity experts.

“They are a country that for years — I didn’t know this until recently — they’re a country based on disinformation. And now they’re using disinformation plus AI. That’s a terrible situation,” President Donald Trump said during a March 16 event, highlighting growing concerns about Iran’s information warfare capabilities.

Defense Secretary Pete Hegseth specifically referenced AI-generated images circulating online that falsely depicted the USS Abraham Lincoln aircraft carrier in flames. “These AI-generated images are meant to make it look like something’s happening when the exact opposite is,” Hegseth explained during a March 19 press briefing. “They make up fake reports and fake images to lie to their own people.”

While Iranian state actors have indeed deployed fabricated content, experts emphasize that the regime’s information manipulation strategy extends far beyond simple disinformation. Emerson Brooking, strategy director of the Atlantic Council’s Digital Forensic Research Lab, notes that Iran distributes state propaganda through covert channels including fake news websites, inauthentic social media accounts, and proxy media networks.

“The content is biased and covertly placed, but it is rarely wholly invented,” Brooking said. “Iran is a country that has made clandestine propaganda a core instrument of national security policy.”

Iran’s sophisticated digital influence operations trace back to 2010, when the regime began building its capabilities following the 2009 pro-democracy Green Movement. By 2011, the country had recruited thousands of operatives trained in content production and digital media, establishing networks of bots and social media accounts to spread regime messaging without revealing state connections.

Recent research from Clemson University’s Media Forensics Hub identified approximately 60 accounts across X (formerly Twitter), Instagram, and Bluesky linked to the Islamic Revolutionary Guard Corps. These accounts adopted false identities, posing as individuals from various regions including Texas, Venezuela, Chile, and the British Isles to build credibility before pivoting to pro-regime messaging after hostilities escalated in February.

In the current conflict, Iranian state actors have deployed AI-generated imagery to support their narratives. The Iranian embassy in Austria shared a fabricated image of a bloodstained children’s backpack, linking it to a strike on a girls’ school in Minab that reportedly killed over 170 people, mostly children. Although preliminary evidence supports U.S. responsibility for the strike itself, the backpack image was determined to be AI-generated.

Similarly, the state-controlled Tehran Times distributed an AI-created image claiming to show an American radar installation in Qatar destroyed by Iranian forces—a tactic the regime previously employed during the 12-day war between Iran and Israel in June 2025, when Iranian state media shared a fabricated image of a downed F-35 jet.

“Projecting an ‘oppressed yet militarily victorious’ nation is central to the regime’s war narrative,” explained Mahsa Alimardani, associate director at the human rights organization Witness. “The irony is devastating: the regime illustrated real deaths with fabricated imagery, and the identification of those fakes now provides ammunition for people denying the actual bombing occurred.”

Meta recently dismantled an Iranian influence operation comprising 294 Instagram accounts, eight Facebook accounts, and two Facebook pages, though the company noted it hasn’t observed new campaigns specifically linked to the current U.S.-Israel conflict with Iran.

The proliferation of AI-generated content has created an environment where authentic evidence is increasingly dismissed as fake. After the Minab school tragedy, legitimate photos of burial sites were wrongly labeled as AI fabrications by some social media users.

“The claim that something ‘looks AI-generated’ has become a low-effort, high-impact way to discredit real documentation, requiring no actual forensic analysis to deploy,” Alimardani observed.

Iranian influence operations target multiple audiences using tailored messaging. Content emphasizing anti-imperialism and resistance to Western dominance resonates with audiences in the Global South and far-left Western circles. During U.S. election cycles, Iranian groups have operated websites posing as American news outlets and conducted hacking operations targeting political campaigns.

However, research indicates the primary target of Iran’s information operations isn’t the United States but rather the Arab world and its own population. During periods of domestic unrest, including recent protests, the regime has employed campaigns to discredit protesters while imposing internet shutdowns to control the narrative.

As these digital threats evolve, the Trump administration has reduced U.S. capacity to monitor foreign influence operations by shuttering the FBI’s Foreign Malign Influence Task Force and the State Department’s Global Engagement Center—moves that Brooking warns have “effectively made ourselves blind to this threat, even as the White House seems increasingly set on linking any setback in the war effort to Iranian disinformation.”



© 2026 Disinformation Commission LLC. All rights reserved.