Foreign Powers Amplify Iran War Misinformation Through AI-Generated Content
Foreign governments are actively disseminating sophisticated AI-generated videos and false information about the ongoing Iran war across social media platforms, security experts warn. The fabricated content includes computer-generated footage of alleged missile strikes and exaggerated casualty claims designed to manipulate public perception of the conflict.
In the wake of U.S. and Israeli military actions against Iran, a particularly convincing fake video recently gained widespread attention online. The footage appeared to show crowds watching flames and debris falling from a burning skyscraper supposedly located in Bahrain, with users claiming it depicted damage from an Iranian missile strike.
Analysis by digital forensics experts revealed the video was artificially created using AI technology and distributed through accounts linked to the Iranian government, apparently to exaggerate Iran’s military capabilities. Several technical flaws exposed the video’s artificial nature, including two vehicles that appeared unnaturally fused together and a person whose arm passed through a backpack in a physically impossible manner.
“The content that’s coming from state actors tends to be a little better targeted,” explained Melanie Smith, senior director of policy and research on information operations at the Institute for Strategic Dialogue. “They have a very clear kind of narrative structure and the videos are just used to support some kind of statement they want to make about the conflict and about the geopolitical situation writ large.”
Since hostilities escalated last weekend, Iranian-aligned social media accounts have consistently promoted narratives that overstate damage and death tolls from Iran’s military operations, mirroring themes prevalent in Iran’s official state broadcasting. This coordinated messaging campaign has produced numerous computer-generated videos depicting fictional airstrikes, similar to the fabricated Bahrain tower footage.
Intelligence agencies have identified an ongoing Russian-connected disinformation initiative known as Operation Overload (alternatively called Matryoshka or Storm-1679) that’s actively distributing videos falsely attributed to intelligence services and media organizations. The operation’s apparent goal is to generate fear and uncertainty that could influence public opinion and behavior. One example included a fabricated warning purportedly from Israeli intelligence advising Israeli citizens in Germany and America to avoid public spaces or remain indoors.
While false and manipulated videos have featured prominently during other recent conflicts, including the Russia-Ukraine and Israel-Hamas wars, researchers point to a critical difference in the current situation: the severe restriction of information flowing directly from Iranian citizens. Internet blackouts and widespread censorship within Iran have effectively eliminated firsthand accounts that might corroborate or contradict the government’s official narrative.
“In Ukraine, that message was so full-throated it really changed the entire dynamic of the conflict because the world really aligned with the perspective of Ukrainians facing the attacks and showing resilience in light of the attacks, but we’re sort of missing that story from Iran,” said Todd Helmus, a senior behavioral scientist at RAND who studies irregular warfare, terrorism and information operations.
Beyond state-sponsored efforts, opportunistic social media users seeking viral content have significantly contributed to the misinformation ecosystem. Common tactics include sharing outdated footage from previous conflicts as current events, posting video game sequences as authentic combat footage, and creating original AI-generated material.
The rapid advancement of artificial intelligence has enabled misinformation campaigns that would have been technically impossible in conflicts just a few years ago. Combined with state-sponsored disinformation and media restrictions, the technology creates an environment in which accurate information is increasingly difficult to identify.
“The volume of AI content is starting to just pollute the information environment in these kinds of crisis settings to a really terrifying degree,” Smith noted. “The inability to get access to verified and credible information in times like this — it’s getting harder and harder to do that.”
Social media platforms have begun implementing countermeasures. Nikita Bier, head of product at X (formerly Twitter), announced Tuesday that the platform would remove users from its revenue-sharing program for posting AI-generated conflict content without proper labeling. First-time violations result in 90-day suspensions, while repeat offenders face permanent bans.
Emerson Brooking, director of strategy at the Atlantic Council’s Digital Forensic Research Lab, emphasized that social media platforms now function as extensions of the modern battlefield. He urged users to recognize their potential exploitation by government actors, regardless of their physical distance from actual combat zones.
“If you’re in these spaces, just understand that this is an extension of the physical battle space,” Brooking warned. “There are actors on all sides of the conflict that are actively trying to spread propaganda and disinformation to convince you that certain things are true that aren’t. Your eyeballs and your attention are an asset.”