In a conflict defined by misinformation, Israeli-Iranian war becomes testing ground for AI deception
The recent outbreak of conflict between Israel, the United States and Iran has become ground zero for unprecedented levels of misinformation, particularly AI-generated content that has flooded social media channels, complicating efforts to understand the unfolding crisis.
The hostilities began on February 28 with Israeli and U.S. strikes on Iran that resulted in the assassination of Iran’s Supreme Leader, Ayatollah Ali Khamenei, along with other key regime figures. The conflict rapidly expanded beyond Iran, with Israel extending operations into Lebanon while Iran launched retaliatory strikes against Gulf states, dramatically widening the regional impact.
Just five days into the conflict, BBC Verify’s Shayan Sardarizadeh noted that “this war might have already broken the record for the highest number of AI-generated videos and images that have gone viral during a conflict.” His daily documentation of war misinformation highlights how rapidly false information has spread across social platforms, often amplified by high-profile accounts.
The proliferation of sophisticated AI models in recent months has created ideal conditions for the dissemination of false content about the conflict, making it increasingly difficult to distinguish authentic footage from fabricated material.
One notable example involved a viral video purportedly showing the aftermath of a drone attack on the U.S. embassy in Riyadh, Saudi Arabia. Fact-checkers revealed it actually showed an unrelated car accident in Riyadh from an earlier period, but not before the clip had been widely shared.
Even official government accounts have contributed to the problem. Israeli Prime Minister Benjamin Netanyahu’s office posted a video showing him apparently speaking Farsi directly to Iranian citizens, urging them to “take to the streets” and overthrow the regime. While the image was real, fact-checkers determined the audio was AI-generated, as Netanyahu does not speak Farsi and lip-syncing errors were evident in the footage.
Among the most contentious incidents was an attack on an elementary school in Minab, Iran, which reportedly killed at least 168 people, many of them children. The school was struck three times according to local officials, sparking outrage and anti-war protests in the United States. The attack became a focal point for both legitimate reporting and rampant misinformation.
Through collaborative verification efforts, journalists geolocated the school using satellite imagery, which showed its proximity to Revolutionary Guard buildings. Yet the tragedy also generated significant misinformation that was sometimes amplified by AI tools themselves: X’s AI assistant Grok incorrectly claimed that real images of the devastation were fake, dismissing media reports as a “hoax” without any evidence.
In a separate incident, the Iranian embassy in Austria published an AI-generated image of a blood-spattered schoolbag, which Google’s SynthID detector confirmed was created using Google AI.
The conflict’s expansion to Dubai, a major hub for European expatriates, triggered another wave of AI-generated content showing supposed rocket strikes on landmarks like the Burj Khalifa. While Dubai’s airport was indeed struck, many dramatic videos of explosions at city landmarks proved to be AI fabrications. Media analysis identified these as “clearly from a slightly older AI model” where “the debris and smoke plume look like something out of a cartoon.”
Adding to the confusion were coordinated messages from social media influencers in Dubai, many posting nearly identical statements that the city remained safe. Deutsche Welle reported questions about whether these influencers were being paid to spread a particular narrative, noting the UAE’s strict regulations on social media content that could “harm public order or the reputation of the state.”
As the conflict continues to evolve, open-source intelligence (OSINT) analysts have turned to satellite imagery to verify actual damage at various locations, including Iran’s Natanz nuclear facility, drone facilities in western Iran, Zahedan airbase, and the Ras Tanura oil refinery in Saudi Arabia.
With traditional reporting hampered by access restrictions and the flood of misinformation, these OSINT techniques have become critical tools for understanding the true scale and trajectory of a conflict that continues to reshape the Middle East.
8 Comments
The viral spread of fake content is alarming, especially when it involves sensitive global conflicts. Rigorous fact-checking and media literacy are essential to combat the rise of AI-powered misinformation.
Agreed. Fact-checking and critical thinking are vital skills in the digital age to separate truth from fiction, especially around high-stakes world events.
This highlights the urgent need for robust safeguards and transparency around the development and deployment of AI systems. Unregulated AI-generated content poses serious risks to informed public discourse.
Absolutely. Strong governance frameworks and accountability measures for AI are crucial to mitigate the spread of disinformation and protect democratic institutions.
The viral spread of AI-generated fake content during the Israel-Iran conflict is a worrying sign of the evolving threat landscape. Balanced, fact-based reporting is essential to cut through the noise.
This is a concerning trend. AI-generated disinformation can spread rapidly and undermine public understanding of complex geopolitical events. Verifying information sources is crucial during conflicts.
The ease with which AI-generated disinformation can spread is deeply concerning. Strengthening information verification processes and empowering citizens to critically assess online content are vital steps forward.
This is a sobering example of how AI can be weaponized to manipulate public perceptions, especially around high-stakes geopolitical events. Investing in media literacy and digital forensics is key.