AI-Fueled Disinformation Floods Social Media Amid Israel-Iran Tensions
A surge of artificial intelligence-generated disinformation has overwhelmed social media platforms since Israel launched strikes against Iran last week, with BBC Verify documenting dozens of fabricated posts exaggerating Iran’s military response. The three most widely viewed fake videos have amassed over 100 million views across multiple platforms, signaling an unprecedented scale of AI-generated content during an active conflict.
“This marks the first time we’ve seen generative AI be used at scale during a conflict,” said Emmanuelle Saliba, Chief Investigative Officer with analyst group Get Real. The verification organization Geoconfirmed described the volume of misleading content as “astonishing,” pointing to engagement-seeking accounts profiting from the conflict by sharing sensational but false content.
The disinformation landscape has become fertile ground for “super-spreaders” who have seen dramatic follower growth. One pro-Iranian account with no obvious ties to Tehran authorities – Daily Iran Military – doubled its follower count on X from 700,000 to 1.4 million in less than a week following Israel’s strikes on June 13.
Many posts feature AI-generated imagery purporting to show successful Iranian counterstrikes, including widely shared images depicting missiles raining down on Tel Aviv. Another common theme targets Israel’s advanced F-35 fighter jets, with numerous fabricated videos claiming to show these aircraft being shot down.
“If the barrage of clips were real, Iran would have destroyed 15% of Israel’s fleet of fighters,” noted Lisa Kaplan, CEO of the Alethea analyst group. One widely circulated post claimed to show a downed F-35 in the Iranian desert, but analysis revealed obvious AI artifacts – civilians in the image were the same size as nearby vehicles, and the sand showed no signs of impact.
Pro-Israeli accounts have also contributed to the misinformation environment, primarily by recirculating old footage of protests in Iran while falsely claiming they represent current public opposition to the Iranian government or support for Israel’s military actions. One widely shared AI-generated video falsely portrayed Iranians chanting “we love Israel” on Tehran streets.
As tensions escalate and speculation grows about potential U.S. strikes on Iranian nuclear facilities, new AI-generated images have begun appearing that show B-2 stealth bombers over Tehran – an aircraft that has drawn heightened attention because of its unique ability to strike Iran’s underground nuclear sites.
Even official sources have participated in spreading dubious content. Iranian state media has shared fake strike footage and AI-generated images of downed F-35s, while the Israel Defense Forces received a community note correction on X for using unrelated historical footage of missile barrages.
Social media platforms have struggled to contain the flood of false information. On X, users frequently consult the platform’s AI chatbot Grok to verify content, but in multiple instances, the chatbot erroneously authenticated obvious fakes. When presented with an AI-generated video showing an impossible scene of endless trucks carrying ballistic missiles from a mountain complex, Grok repeatedly insisted it was authentic, even citing major news outlets as sources.
TikTok, where one fake video of a downed F-35 accumulated 21.1 million views before removal, told BBC Verify it “proactively enforces community guidelines which prohibit inaccurate, misleading, or false content” and works with independent fact-checkers. Instagram parent company Meta did not respond to requests for comment.
Matthew Facciani, a researcher at the University of Notre Dame, explained why such content spreads so effectively: “People want to re-share things if it aligns with their political identity, and more sensationalist emotional content will spread more quickly online.” This psychological tendency becomes particularly problematic when people are faced with binary choices during conflicts, making them more susceptible to believing content that confirms their existing viewpoints.
The current wave of disinformation represents a troubling evolution in information warfare, with increasingly sophisticated AI tools being deployed to shape public perception of military effectiveness and political support during active conflicts.
14 Comments
This article underscores the growing threat of AI-driven disinformation, especially during times of heightened geopolitical tensions. Robust fact-checking, content moderation, and public awareness campaigns will be critical to mitigate the spread of these fabricated narratives.
Well said. The proliferation of AI-generated content that can deceive and manipulate is a major challenge that requires a multi-pronged approach. Sustained efforts to improve digital literacy and empower the public to spot and resist disinformation will be key.
This is a troubling development, but not entirely surprising given the capabilities of modern generative AI. The key will be identifying proactive ways to limit the impact and reach of this type of content during conflicts.
Absolutely. Policymakers, tech companies, and the public need to work together to find solutions. Fact-checking, content moderation, and public awareness campaigns will all be crucial in the months and years ahead.
The explosion of AI-generated disinformation during the Israel-Iran conflict highlights how quickly these technologies can be weaponized. Urgent action is needed to address this threat to information integrity and public discourse.
Couldn’t agree more. This is a complex challenge that requires a multifaceted response. We need to stay vigilant and continue exploring technical, regulatory, and educational approaches to counter the spread of AI-fueled falsehoods.
The article highlights the challenges we face in an era of advanced AI tools that can generate convincing but false content. Combating this will require a multi-pronged approach of technological, policy, and public education solutions.
Well said. Improving digital literacy and critical thinking skills among the public is just as important as developing robust AI detection systems. We all have a role to play in fighting the spread of disinformation.
The scale of AI-generated disinformation during the Israel-Iran conflict is truly alarming. This highlights the urgency of developing effective counter-measures and promoting media literacy to help the public navigate this challenging information landscape.
Agreed. Tackling this problem will require a concerted effort from tech companies, policymakers, educators, and the public. Collaborative solutions that address the technical, regulatory, and social dimensions of the issue are needed.
Disturbing to see how AI is being used to amplify disinformation during an active conflict. This underscores the need for greater transparency and accountability around the development and deployment of these powerful technologies.
Absolutely right. Responsible AI governance has never been more important. We must find ways to harness the benefits of AI while putting robust safeguards in place to prevent misuse and manipulation.
Concerning to see AI-generated disinformation used to inflame tensions during the Israel-Iran conflict. We need better safeguards and oversight to prevent the spread of fabricated content that could fuel real-world harm.
Agreed. The scale and speed of this AI-driven disinformation is alarming. Platforms and authorities must act quickly to identify and remove these misleading posts before they cause further damage.