AI-Generated Disinformation Campaign Falsely Claims Destruction of Tel Aviv
In a troubling evolution of information warfare, Iran’s regime and its supporters have launched a widespread campaign of AI-generated videos falsely depicting Tel Aviv in ruins, despite the Israeli city remaining largely unscathed from recent conflicts.
The sophisticated disinformation effort shows fictitious scenes of massive ballistic missile strikes on the city, often complemented by misrepresented footage from other conflicts or outdated material presented as current events. The campaign appears designed to demoralize Western audiences and undermine support for military actions against Iran.
These fabricated videos have gained significant traction online, with thousands of posts collectively amassing hundreds of millions of views across social media platforms. While similar disinformation circulated during last summer’s 12-Day War, recent advances in AI technology have made the current wave of fake content considerably more convincing.
“The technology has evolved to simulate ‘shaky camera’ effects and virtual ‘citizen reporters’ that lend an air of authenticity to completely fabricated scenarios,” explains a media analyst tracking the phenomenon. “The barrier to creating this content has also dropped dramatically in terms of both cost and technical expertise required.”
The financial incentives built into social media platforms have exacerbated the problem. Many services pay content creators based on engagement metrics, inadvertently rewarding those who publish sensational—even if entirely false—material. While X (formerly Twitter) has temporarily demonetized accounts posting unlabeled AI-generated war content, other major platforms including TikTok and Meta’s services have not implemented similar safeguards.
Major news organizations including the BBC, CNN, The New York Times, and The Guardian have attempted to counter this wave of disinformation through fact-checking efforts, but the sheer volume of fake content has overwhelmed traditional media’s ability to effectively respond.
Many social media users have turned to AI chatbots for verification, unaware that these tools often provide inconsistent or incorrect authentication of visual content. This technological confusion further muddles an already chaotic information landscape.
Israelis in Tel Aviv have responded to the disinformation campaign with a mix of serious rebuttals and satirical content. Some residents post authentic videos showing normal city life, while others have created deliberately absurd AI videos featuring dinosaurs or spaceships attacking the city to mock the false narratives.
Journalists on the ground report that Tel Aviv continues to function normally, with only limited damage from actual attacks. However, believers in the disinformation often dismiss authentic footage as fabricated, claiming without evidence that destruction must exist just outside the camera frame or that Israel has implemented a “media blackout” to conceal widespread damage.
These conspiracy theories mischaracterize Israel’s standard wartime reporting restrictions, which temporarily limit sharing precise missile impact locations (to prevent calibration of enemy attacks) and prohibit coverage of sensitive military installations. Similar protocols exist in most conflict zones, including Ukraine.
“When Israelis say Tel Aviv is doing just fine, they mean it,” said one correspondent currently in the city. “The disconnect between reality on the ground and what’s being portrayed online is staggering.”
Media experts warn that this case represents a troubling glimpse into future information environments, where AI-generated content can create parallel realities that significant portions of the public accept despite contradictory evidence. As AI tools become more sophisticated and accessible, distinguishing truth from fabrication will likely become increasingly challenging.
This evolution in disinformation tactics presents urgent questions for social media platforms, governments, and media organizations about how to preserve information integrity in an era where seeing is no longer believing.