Iran’s ongoing military action against Israel has triggered a surge in online misinformation, with artificial intelligence emerging as a key tool for spreading false narratives across social media platforms, security experts warn.
The conflict, which began with Iran launching approximately 300 missiles and drones toward Israel on Tuesday, has been accompanied by a flood of AI-generated images and videos circulating online. These fabricated visuals have complicated efforts to verify actual events on the ground, creating confusion among global audiences trying to understand the rapidly evolving situation.
“What we’re seeing is unprecedented in terms of scale,” said Emerson Brooking, senior fellow at the Atlantic Council’s Digital Forensic Research Lab. “AI tools have democratized the ability to create convincing fake imagery, allowing virtually anyone with internet access to manufacture content that appears authentic at first glance.”
Among the most widely shared fabrications was a computer-generated image purporting to show an explosion near the Israeli parliament in Jerusalem. The image gained significant traction across platforms like X (formerly Twitter) and Telegram before being identified as synthetic. Similar AI-generated content depicted fictitious scenes of Iranian missiles striking Tel Aviv and Jerusalem, contributing to public anxiety.
Security researchers have identified both state-affiliated actors and independent users participating in the spread of misinformation. Some AI-generated content appears designed to exaggerate the scale of the attacks or create the impression of greater damage than what actually occurred.
“The barrier to entry for creating this type of misleading content is essentially gone,” said Darren Linvill, a professor of communication at Clemson University who specializes in disinformation research. “We’re observing multiple agendas at play, from deliberate information operations to individuals simply seeking attention or engagement.”
The situation highlights the growing challenge faced by major social media platforms, which have struggled to effectively label or remove synthetic content during fast-moving events. Despite policies against misleading AI-generated media, enforcement remains inconsistent across different platforms.
Meta, the parent company of Facebook and Instagram, acknowledged the challenge in a statement, noting that its content moderation teams had been expanded to address the surge in misleading content related to the Iran-Israel conflict. Similarly, X indicated it had deployed additional resources to identify and label synthetic media, though researchers found numerous examples of unlabeled AI content continuing to circulate.
The proliferation of false imagery has real-world consequences, according to national security experts. Such content can influence public perception of the conflict, potentially affecting diplomatic responses and public support for various policy positions.
“When citizens and even government officials cannot easily distinguish between real and fake imagery, it creates an information environment where truth itself becomes contested,” said Elisabeth Braw, senior fellow at the American Enterprise Institute. “This benefits actors who wish to create confusion or control narratives around military actions.”
The current wave of AI-generated content represents a significant escalation from previous conflicts, where manipulation typically involved more easily detectable photoshopped images or contextually misrepresented authentic media.
As tensions between Iran and Israel continue, fact-checking organizations have published guides to help users identify potentially AI-generated content. Key indicators include unnatural lighting, inconsistent shadows, distorted backgrounds, and peculiar rendering of human features, particularly hands and eyes.
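Beyond visual inspection, one weak programmatic signal sometimes cited by verification practitioners is the absence of camera metadata: photos from real devices usually carry an Exif block, while many AI generators and re-encoding pipelines strip it. The sketch below (illustrative only, not a tool named in this article, and not proof either way, since metadata can be removed or forged) scans a JPEG byte stream for an APP1 Exif segment using only the Python standard library:

```python
import struct

def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG contains an APP1 segment carrying Exif data.

    This is a heuristic hint only: a missing Exif block does NOT prove an
    image is synthetic, and a present one does not prove it is authentic.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":      # SOI marker: not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        # Each segment: 2-byte marker, 2-byte length (length includes itself)
        marker, length = struct.unpack(">HH", jpeg_bytes[i:i + 4])
        if marker == 0xFFE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True                    # APP1 segment with Exif header
        if marker == 0xFFDA:               # start-of-scan: entropy data follows
            break
        i += 2 + length                    # skip marker plus segment payload
    return False
```

In practice, fact-checkers combine signals like this with reverse image search and the visual cues described above, rather than relying on any single test.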
Technology companies are racing to develop more sophisticated detection tools, but experts warn that the technology to create deceptive content is advancing more rapidly than the safeguards against it.
“We’re in uncharted territory,” said Brooking. “Every major geopolitical event now features this layer of synthetic content that makes understanding reality more difficult for ordinary citizens and policymakers alike.”
The situation underscores the growing need for digital literacy education and greater cooperation between technology companies, government agencies, and civil society to counter the threat of AI-enabled misinformation during international crises.
8 Comments
This is a concerning development. AI-generated misinformation can be extremely damaging, especially in conflict situations where people are already on edge. We need stricter controls and regulations around the use of these technologies.
This is a sobering reminder of the potential dangers of AI. While the technology has many beneficial applications, the ability to generate convincing misinformation is extremely worrying, especially in sensitive geopolitical contexts.
Disturbing but not entirely surprising. The anonymity and scale of social media make it an ideal vector for AI-generated propaganda. Regulating these technologies and improving public awareness is going to be an ongoing challenge.
I’m curious to learn more about the specific AI tools and techniques being used to create these fakes. Understanding the technical details could help inform countermeasures and mitigation strategies.
That’s a good point. Developing a deeper understanding of the AI capabilities fueling this problem will be key to finding effective solutions.
This just highlights how AI can be weaponized to spread disinformation and sow chaos. As the technology advances, we’ll likely see even more sophisticated fakes in the future. Robust fact-checking and media literacy efforts will be crucial.
Wow, the scale and sophistication of these AI-powered fakes is really alarming. It’s crucial that people remain vigilant and fact-check any controversial or sensitive content they see online. Fact-checking is more important than ever.
I agree, it’s crucial that we find ways to combat this. Improving digital literacy and empowering people to spot manipulated media could be an important step.