AI Weaponized to Discredit Authentic War Footage in Iran-Israel Conflict
The ongoing Iran-Israel-US conflict may be remembered as the first war in which artificial intelligence overwhelmed the information environment. Across social media platforms, AI-generated images, fabricated videos, and repurposed video game footage are spreading with extraordinary speed, making it increasingly difficult to distinguish fact from fiction.
Major news organizations including WIRED, BBC, and CNN have documented the surge of fake AI war imagery circulating online. But a more troubling tactic is now emerging from this chaos: technical-looking analyses are being weaponized to falsely discredit authentic evidence of real events.
This development marks the realization of warnings issued for years by researchers and civil society organizations about the dangers of releasing powerful generative AI tools without meaningful safeguards. The current conflict appears to be the culmination of that trajectory, building on patterns observed during the Israel-Iran 12-day war of June 2025, when an unprecedented surge of AI-generated content flooded the information environment.
Today’s situation is significantly worse. In the eight months since that conflict, generative AI tools have become more sophisticated, producing increasingly realistic outputs while becoming accessible to a wider range of actors. The January massacres of Iranian protesters, in which thousands were killed by state security forces, have deepened fractures within Iranian society and intensified the information battle between the regime, opposition movements, diaspora communities, and foreign states.
This collision is unfolding in what was already one of the most challenging environments for verification anywhere in the world. Decades of state media manipulation and internet censorship have eroded baseline trust in institutional sources in Iran. The Iranian state deploys its media infrastructure to document civilian casualties caused by foreign strikes, yet no comparable infrastructure exists to document the thousands of protesters killed by its own security forces in January 2026.
The near-total internet shutdown imposed by Iranian authorities since February 28, with connectivity at just 1% of ordinary levels, has further severed Iranians from real-time information. The communities most affected have been cut off from participating in the evidentiary record of their own situation.
BBC Verify journalist Shayan Sardarizadeh has noted that this conflict may have already broken records for the amount of AI-generated content going viral during a war. OSINT researcher Tal Hagin observed that the problem is no longer limited to ordinary social media users being deceived; the volume has outpaced the verification capacity of even professional newsrooms.
The consequences are far from abstract. When audiences cannot distinguish authentic from manipulated evidence, atrocities become easier to deny. Genuine documentation of civilian harm can be dismissed as “AI-generated,” and crucial questions about the human cost of war become obscured by an avalanche of spectacle.
A particularly concerning case involved photographs taken by photojournalist Erfan Kouchari for Iran’s Student News Network (SNN), a state-affiliated outlet. The images documented a March 1 strike in Niloofar Square in eastern Tehran and were distributed through the Parspix/ABACA wire service before being published by international outlets including The Telegraph and The Guardian.
Shortly after the images began circulating, a user claiming to be a visual effects artist posted what they described as “heatmap overlays” alongside purported outputs from Gemini and ChatGPT, claiming the photos were “very likely all AI-generated images.” These visualizations spread quickly and were cited as evidence of fabrication.
However, experts who reviewed the images concluded that the alleged “heatmaps” were themselves likely fabricated. Nikos Sarris, a senior researcher at MeVer, noted their unusual appearance: they did not localize semantically meaningful artifacts, as genuine forensic tools would. Dr. İlke Demir, CEO of Cauth AI, raised similar concerns, pointing out nonsensical elements such as a legend reading “Low / High / Map.”
In another case, a photograph published by The New York Times on March 9 documenting crowds in Tehran following Mojtaba Khamenei’s announcement as Iran’s new Supreme Leader became the target of a coordinated discrediting effort. The “Empirical Research and Forecasting Institute” (ERFI) claimed the image “shows signs of digital manipulation” and shared screenshots of purported forensic analyses.
The problem? ERFI had run its analysis not on the original image but on a screenshot of an Instagram post, including the surrounding platform interface. As experts explained, screenshots produce compression artifacts that have no bearing on whether an original image is authentic or synthetic.
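The experts' point can be illustrated with a toy simulation. Error-level analysis (ELA), a common forensic technique, recompresses an image and looks at where the residual error concentrates; but if the analyzed file is a screenshot that the platform has already re-encoded, residual error shows up almost everywhere, revealing nothing about the original. The sketch below is purely illustrative, not a real forensic tool: it uses simple rounding-based quantization as a stand-in for JPEG compression, and all names and values are hypothetical.

```python
def quantize(values, step):
    """Toy lossy 'compression': round each value to the nearest multiple of step."""
    return [round(v / step) * step for v in values]

def ela_residual(values, step):
    """Toy error-level analysis: recompress and measure the per-sample residual."""
    recompressed = quantize(values, step)
    return [abs(v - r) for v, r in zip(values, recompressed)]

# A pristine "image": raw sensor-like sample values (hypothetical data).
original = [17, 42, 88, 129, 200, 231, 55, 98]

# Case 1: analyze the file as originally saved, with quantization step 8.
saved_once = quantize(original, 8)
residual_once = ela_residual(saved_once, 8)
# Recompressing at the same step changes nothing, so the residual is all
# zeros: nothing for the analysis to flag.
print(residual_once)

# Case 2: analyze a "screenshot" of the same image, i.e. the file after the
# platform re-encoded it with a different quantization step.
screenshot = quantize(saved_once, 5)
residual_screenshot = ela_residual(screenshot, 8)
# The residual is now nonzero almost everywhere. These are artifacts of the
# screenshot pipeline itself and say nothing about whether the underlying
# photograph is authentic or synthetic.
print(residual_screenshot)
```

In other words, running a compression-based analysis on a re-encoded screenshot guarantees "suspicious" artifacts regardless of the source image, which is why the ERFI screenshots carried no evidentiary weight.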
This evolution of manipulation tactics represents a sophisticated development: rather than simply labeling content “fake,” bad actors now fabricate technical-looking forensic evidence to support those claims. The approach is particularly effective because most people are unfamiliar with how genuine forensic tools actually work.
The result is a dangerous feedback loop where synthetic media undermines trust in real evidence, and fabricated forensic analysis further erodes confidence in verification itself. In this environment, the tools designed to detect manipulation become instruments of manipulation.
This crisis stems not only from malicious actors operating in a fragile information ecosystem but also from the deployment of powerful generative technologies without adequate safeguards. When trust in evidence collapses, the greatest casualty is not only truth online—it’s accountability for real-world harm.