AI-Generated Misinformation Clouds Understanding of Iran Conflict
A photograph showing a massive explosion at an Iraqi airport. Satellite images depicting damage to a U.S. naval base in Qatar. Video footage of Iranian ballistic missiles striking central Tel Aviv. All of these images have circulated widely since the Trump administration’s military action against Iran. The problem? None of them are real.
These fabricated or manipulated images—along with countless others—represent a growing challenge for those attempting to distinguish reality from fiction in the ongoing conflict. While misinformation has always been a component of warfare, the rise of generative AI has dramatically simplified the process of creating convincing fake imagery.
“We have reached a level of realism in video, audio, and image deepfakes that for most people, it is not discernible from fact,” explains Rumman Chowdhury, a prominent AI researcher and former ethics head at X (previously Twitter). “While AI companies have agreed to watermarking and other methods of verification, they are not built with the consideration of how users interact with social media.”
This technological shift poses particular dangers in the context of the Iran conflict. “Most Americans are likely entering with low information and probably biased and prejudiced information,” Chowdhury adds. “Fake media will only confuse and compound these biases.”
The Rise of “Shallowfakes”
On February 28, Iran’s state-aligned newspaper Tehran Times shared what appeared to be satellite images showing destruction at a U.S. naval base in Qatar following an alleged Iranian drone strike on American radar equipment. BBC Verify, a journalistic team dedicated to fact-checking, determined that the images were fakes: authentic satellite imagery that had been digitally altered using Google AI to falsely depict damage.
Political scientist Steven Feldstein notes that as public awareness of deepfakes has increased, disinformation creators have adapted with more subtle techniques known as “shallowfakes.”
“Rather than present something that would look completely false, [they] present shades of the truth, manipulate what’s there,” Feldstein explains. This approach involves making just enough alterations to real content to change its meaning while maintaining enough authenticity to bypass skepticism.
Examples include a genuine photo of an Iraqi airport modified with AI to show an enormous fireball erupting over a U.S. military base, and the practice of presenting authentic imagery out of context, such as claiming an old photo depicts a recent event.
“It’s become very sophisticated and also a critical part of geopolitics,” says Feldstein, author of “The Rise of Digital Repression: How Technology is Reshaping Power, Politics, and Resistance.”
The 12-Day War of June 2025, when the U.S. and Israel attacked Iran, marked a significant turning point. BBC Verify’s Shayan Sardarizadeh described it as the “first example of a major global conflict where we were seeing more misinformation being produced using AI than in traditional ways,” with numerous AI-generated videos and images achieving “millions and millions of views.”
Beyond AI: Video Game Footage and Official Propaganda
Misinformation tactics extend beyond sophisticated AI tools. In some cases, screenshots from video games circulate as purported documentation of actual destruction. Even more concerning is the deliberate spread of propaganda by official government sources.
On March 4, the White House released a video on its official X account that combined authentic clips of Iranian missile strikes with footage from the “Call of Duty” video game, featuring a voiceover declaring, “We’re winning this fight.” The following day, another White House video celebrating “justice the American way” incorporated clips from entertainment media including “Braveheart,” “Breaking Bad,” and “Gladiator.”
This casual approach to conflict messaging comes as the war has already claimed over 1,000 lives, including more than 100 Iranian schoolgirls according to Iranian state media, and at least six American service members.
“War isn’t a video game,” tweeted military veteran and podcaster Connor Crehan in response. “The consequences of war are final. I wish we didn’t treat it with such a cavalier approach.”
High-Stakes Information Environment
Feldstein observes a troubling trend where the blurring of reality and fiction leads people to dismiss authentic footage as fake. “It’s now to a point where nothing that comes in beyond your own pre-existing narrative is accepted as something that is truthful,” he says, “and that’s just as harmful, as well.”
The strategic use of information to mobilize action has intensified, with social media enabling rapid dissemination before verification can occur. Both the U.S. president and Israel’s prime minister have used various media to encourage Iranian citizens to oppose their government.
“The U.S. is not [currently putting] troops on the ground, but it is relying on information transmission as a means to mobilize change on the ground in terms of Iran’s government,” explains Feldstein. “You can see how high the stakes are when it comes to how quickly that information is digested and it [spurring] action.”
Beyond political manipulation lies an urgent humanitarian concern: people in conflict zones rely on accurate information for survival—knowing where to seek shelter, which areas to avoid, and when to evacuate. In this environment, the ability to discern truth from fiction isn’t merely an academic exercise—it can be a matter of life and death.