As Iran conflict intensifies, AI-generated misinformation floods social media
A troubling surge in artificial intelligence-generated misinformation about the Iran conflict is spreading rapidly across the social media platform X, creating significant challenges for users trying to discern fact from fiction in real time.
Researchers tracking the phenomenon have documented numerous instances in which AI tools, rather than clarifying events, are actively contributing to confusion. In one revealing case, disinformation researcher Tal Hagin tested X’s AI chatbot Grok by asking it to verify a video purportedly showing Iranian missiles striking Tel Aviv. Instead of providing accurate analysis, the chatbot repeatedly misidentified both the location and timing of the footage, which had originated from Iranian state media. More troubling still, Grok then generated and shared its own AI-created image to support its flawed claims, effectively amplifying the misinformation.
This incident exemplifies a widespread problem that has escalated since February 28, when the United States and Israel began military operations against Iran. The platform has been flooded with misleading content ranging from recycled old footage to sophisticated fabrications presented as battlefield evidence.
“What we’re seeing is unprecedented in terms of volume and sophistication,” said a researcher who requested anonymity due to safety concerns. “The accessibility of AI image and video generation tools has dramatically lowered the barrier to creating convincing fake content.”
Iranian officials and state media accounts have been particularly active in this space, circulating AI-generated videos showing fictitious scenes like a high-rise building in Bahrain engulfed in flames. Other viral fabrications included footage of a supposedly downed American B-2 Spirit bomber and staged captures of U.S. special forces. Many such posts garnered millions of views before moderation actions were taken.
Even obviously artificial content has achieved significant reach. One widely shared video claiming to depict Iranian troops manufacturing missiles inside a hidden cave complex displayed clear hallmarks of AI generation yet still accumulated more than a million views as it spread across multiple accounts.
The Institute for Strategic Dialogue has identified coordinated propaganda campaigns utilizing AI-generated imagery. According to their analysis, networks of pro-Iranian regime accounts have distributed fabricated images portraying Orthodox Jewish figures directing American soldiers into battle or celebrating U.S. casualties, blending political propaganda with antisemitic narratives.
The problem extends beyond military content. A fabricated video appearing to show young girls in only their underwear walking past former president Donald Trump reportedly attracted 6.8 million views before removal, though copies continue to circulate through other channels.
X has attempted to address the issue by implementing temporary demonetization for premium accounts that share AI-generated war footage without proper labeling. This policy specifically targets blue-checkmark accounts benefiting from the platform’s monetization and engagement systems. However, the company has not disclosed enforcement statistics, and critics argue implementation remains inconsistent.
Traditional misinformation continues alongside these AI-generated materials. A particularly divisive narrative involves a missile strike on a primary school in the Iranian city of Minab that reportedly killed over 168 people, including 110 children. While some pro-Trump accounts on X have circulated unrelated videos claiming Iranian government responsibility, footage verified by independent journalists shows a Tomahawk cruise missile striking a nearby naval facility. Despite assertions by former president Trump that Iran possesses such weapons, the United States is currently the only military in the conflict known to operate Tomahawk missiles.
The challenge of AI misinformation extends beyond X. Meta’s Oversight Board recently criticized the company’s labeling systems for AI-generated media as inadequate for the speed and scale of synthetic content during crises.
“We’re entering uncharted territory where the volume of synthetic media threatens to overwhelm traditional fact-checking methods,” said a digital policy expert at a Washington think tank. “The technology is advancing faster than our systems to identify and contextualize this content, creating dangerous information vacuums during critical global events.”
19 Comments
The proliferation of AI-fueled propaganda around the Iran-Israel conflict is deeply troubling. Platforms must urgently improve their ability to identify and suppress this kind of manipulative, destabilizing content.
I agree. Misinformation can have severe real-world consequences, especially during periods of heightened geopolitical tensions. Platforms have a responsibility to combat the spread of this kind of disinformation.
The AI chatbot’s inability to properly contextualize that video is extremely problematic. Platforms relying on AI to handle sensitive geopolitical information need to dramatically improve their content verification capabilities to avoid amplifying misinformation.
I agree. Failures like this show the urgent need for much more rigorous testing and validation of AI systems before deploying them for high-stakes information curation. Platforms must get this right to maintain public trust.
This is deeply concerning. AI-generated disinformation about real world conflicts is a major threat to informed discourse. Platforms must do more to detect and mitigate the spread of this kind of manipulative content.
I agree. Rapid spread of AI-fueled propaganda during crises like this Iran conflict undermines our ability to understand events accurately. Robust fact-checking and content moderation are urgently needed.
The AI chatbot’s inability to properly analyze the video content is highly concerning. Platforms relying on AI to curate information around sensitive geopolitical events must dramatically improve their verification processes to avoid amplifying misinformation.
Absolutely. Failures like this demonstrate the critical need for much more rigorous testing and validation of AI systems before deploying them for high-stakes information curation. Platforms can’t afford these kinds of lapses.
This is a worrying development. AI-generated disinformation around the Iran-Israel conflict is a serious threat to informed public understanding. Platforms need to significantly enhance their ability to detect and remove this manipulative content.
This is a deeply concerning trend. The proliferation of AI-generated disinformation around the Iran-Israel conflict is a serious threat to informed public discourse. Platforms need to take far stronger action to identify and suppress this manipulative content.
The AI chatbot’s inability to properly contextualize that video is extremely problematic. Platforms relying on AI to handle sensitive geopolitical information need to dramatically improve their content verification capabilities to avoid amplifying misinformation.
I agree. Failures like this show the urgent need for much more rigorous testing and validation of AI systems before deploying them for high-stakes information curation. Platforms must get this right.
This is a worrying trend. AI-generated disinformation is a growing menace that threatens to distort public understanding of complex, unfolding situations like the Iran-Israel conflict. Platforms need to get much better at detecting and removing this.
The AI chatbot’s inability to properly verify the origin and context of that video is quite alarming. Platforms relying on AI to curate information around major events need to significantly improve their capabilities.
Absolutely. AI systems must be rigorously tested and validated before being deployed to handle sensitive geopolitical content. Failing to do so enables the propagation of dangerous misinformation.
This is a very concerning development. AI-generated misinformation is a serious threat to informed discourse, particularly around volatile conflicts like the one between Iran and Israel. Platforms need to take much stronger action to address this.
The AI chatbot’s inability to properly verify the video content is a major red flag. Platforms relying on AI to curate information around sensitive geopolitical events must significantly improve their capabilities to avoid amplifying dangerous misinformation.
Absolutely. Failures like this demonstrate the urgent need for much more rigorous testing and validation of AI systems before deploying them to handle complex, high-stakes information. Platforms cannot afford these kinds of lapses.
This is a very worrying trend. The proliferation of AI-generated disinformation around the Iran-Israel conflict is a serious threat to informed public discourse. Platforms must take far stronger action to identify and suppress this manipulative content.