Iran Wields Influence Operations and AI in Information Warfare, Experts Say
Amid escalating tensions in the war with Iran, the Trump administration has accused Tehran of spreading false information through sophisticated means, including generative artificial intelligence.
“They are a country that for years — I didn’t know this until recently — they’re a country based on disinformation. And now they’re using disinformation plus AI. And that’s a terrible situation,” President Donald Trump said during a March 16 event.
Both Trump and Defense Secretary Pete Hegseth have specifically referenced fake footage purporting to show the USS Abraham Lincoln aircraft carrier engulfed in flames. “These AI-generated images are meant to make it look like something’s happening when the exact opposite is,” Hegseth explained during a March 19 press briefing.
The administration’s focus on Iranian disinformation comes with some irony. During Trump’s first term, the State Department launched the Iran Disinformation Project in 2018, only to shut it down a year later after the program’s official accounts began inappropriately targeting journalists and academics. In his second term, Trump has dismantled other offices designed to combat foreign influence operations, including the FBI’s Foreign Malign Influence Task Force and the State Department’s Global Engagement Center.
Experts emphasize that Iran’s information manipulation extends beyond simple disinformation. Emerson Brooking, strategy director at the Atlantic Council’s Digital Forensic Research Lab, describes a more nuanced approach: “Iran distributes state propaganda using covert tactics: through fake news websites, inauthentic social media accounts and proxy media networks that perpetuate talking points from the regime under the guise of independent reporting.”
“The content is biased and covertly placed, but it is rarely wholly invented,” Brooking added, suggesting the regime’s approach is more sophisticated than outright fabrication.
Iran’s digital influence capabilities date back to 2010, according to the Atlantic Council. Following the 2009 pro-democracy “Twitter Revolution,” the Iranian regime began building its digital influence infrastructure, recruiting thousands of operators skilled in content production and creating inauthentic social media accounts across platforms.
These operations have evolved significantly over time. In March, researchers at Clemson University’s Media Forensics Hub identified approximately 60 accounts across X, Instagram, and Bluesky linked to Iran’s Islamic Revolutionary Guard Corps. These accounts created false personas, including fictitious Latina women from Texas and California, to build credibility before pivoting to pro-regime messaging after February’s military strikes.
There is evidence of AI-generated content in the current conflict. The Iranian embassy in Austria posted an AI-created image of a bloody children’s backpack, linking it to a strike on a girls’ school in Minab that reportedly killed over 170 people. Similarly, the state-controlled Tehran Times shared an AI-generated image supposedly showing a destroyed American radar installation in Qatar.
Mahsa Alimardani, associate director at the human rights organization Witness, notes this creates a troubling dynamic: “The irony is devastating: the regime illustrated real deaths with fabricated imagery, and the identification of those fakes now provides ammunition for people denying the actual bombing occurred.”
Meta reported removing an Iran-linked influence operation in March comprising nearly 300 Instagram accounts, eight Facebook accounts, and two Facebook pages. These fake personas posed as various credible figures, including an American political scientist and a women’s rights activist.
Iran’s information control strategies extend beyond social media manipulation. Alimardani identifies three main types of misleading information: real events presented with government-approved interpretations, state-generated AI disinformation, and accounts amplifying regime narratives using AI technologies.
The rise of AI has complicated efforts to distinguish authentic from fabricated content. “The claim that something ‘looks AI-generated’ has become a low-effort, high-impact way to discredit real documentation, requiring no actual forensic analysis to deploy,” Alimardani explained.
While U.S. elections have been targets of Iranian influence operations, research suggests the regime’s primary audience is often the Arab world. A 2021 study analyzing over 9.3 million tweets linked to Iranian influence operations found that more than 86% of Arabic-language content received minimal engagement.
Perhaps most significantly, Iran focuses substantial resources on controlling information reaching its own citizens. During periods of domestic unrest, the regime has conducted campaigns to discredit protesters while implementing internet shutdowns that limit access to independent news sources.
“The most consistent target of Iran’s information operations is its own population during moments of domestic unrest,” Alimardani noted, highlighting how the regime provides selective internet access to those supporting government messaging.
As this information battle continues, experts warn that the U.S. government’s dismantling of monitoring capabilities has created dangerous blind spots. “We have effectively made ourselves blind to this threat, even as the White House seems increasingly set on linking any setback in the war effort to Iranian disinformation,” Brooking concluded.