“Do you want to count my fingers?”
This was the question Israeli Prime Minister Benjamin Netanyahu asked in a video of himself ordering coffee in Jerusalem—a pointed response to viral AI-generated fakes claiming he had been assassinated.
The incident highlights the growing battleground of disinformation in the Middle East conflict, where Iranian state media outlets recently discussed rumors that Netanyahu was dead or injured. Some social media accounts supporting the Iranian regime pointed to alleged evidence that Netanyahu had six fingers in official videos—a telltale sign of AI-generated content.
Meanwhile, as Netanyahu was publicly disproving rumors of his demise, a video circulated on social media showing an Iranian man hugging a cardboard cutout of Mojtaba Khamenei, Iran’s new supreme leader. Mojtaba, who succeeded his father Ali Khamenei, has not made any public appearances since assuming power.
“Battles are now fought not only on the ground, but on social media and media as well,” said Shahriar Kaisar, a senior lecturer at RMIT University. According to Kaisar, opposing sides in the Middle East conflict are deploying AI-generated fakes as psychological warfare to undermine trust in information.
“The distinction between truth and lie is very blurred. It’s very difficult to understand what to trust anymore,” he explained. “War crimes can be real or fake. But based on the video or audio or image, we cannot really distinguish what is true and what is false.”
This uncertainty works both ways—genuine footage of atrocities can be dismissed as fabricated, protecting perpetrators from accountability.
Since hostilities escalated in the region, social media has been flooded with competing narratives. The US misinformation watchdog NewsGuard reported last week that the Iranian regime has actively engaged in disinformation campaigns designed to “exaggerate or entirely fabricate tales of Iran’s military prowess.”
These include deepfake footage purporting to show Iranian attacks on US bases in the Middle East, residential buildings in Tel Aviv, and commercial buildings in Dubai. Other fabricated videos depict US and Israeli soldiers crying and expressing homesickness.
The Iranian government’s nationwide internet blackout has created an environment where “disinformation has been really powerful inside the country,” according to Dara Conduit, a senior lecturer in political science at the University of Melbourne.
“The Iranian regime basically, for once, actually has control of the narrative inside Iran,” Conduit told SBS Examines. The regime is working to convince citizens that “we’re the victim of the Israeli and US conspiracy that we’ve been telling you about for decades. And here it is, it’s come to fruition, it’s killed our supreme leader, and we’re fighting back because we’re strong.”
However, not all disinformation targets domestic audiences. “There are various campaigns running… targeting a wide range of people and serving a wide range of goals,” Conduit noted. “When targeting the West, they are looking to sow confusion and looking to sow dissent.”
While AI-generated content represents a technological advancement in spreading misinformation, traditional methods remain effective. “Authoritarian regimes have been using disinformation their entire lives through state media,” Conduit said. In 2019, Twitter (now X) removed 4,800 accounts it identified as spreading Iranian regime-related misinformation.
Social media influencers, including some Australians, have shared misleading footage purportedly showing attacks on CIA headquarters in Dubai; the clip is in fact footage of a 2015 fire at a UAE residential complex.
Staged videos represent another traditional tactic. US-based Iranian journalist Masih Alinejad shared footage from Iranian state television featuring interviews with women crying about alleged attacks, alleging that the women were actors who appeared in multiple similar interviews.
“Disinformation has long been central to warfare,” Conduit emphasized. “We kind of think of disinformation as trying to spread a certain narrative, but actually, just one of the most powerful ways that disinformation can have an impact is by creating distrust.”
Combating this problem requires a multi-faceted approach. Kaisar advocates for “a collective effort” by media, legislators and the public to fight disinformation in all its forms. While social media platforms are beginning to incorporate fact-checking and AI detection tools, government regulation lags behind—”in Australia we have deepfake laws for pornographic images, but not necessarily for other kinds of deepfakes,” Kaisar noted.
For individuals, Kaisar recommends the “ABC rule” when evaluating potentially fake content: examine the Actor’s movements and posture; check the Background for inconsistencies; and verify the Context and source of the material.
“I think we should be able to address this,” Kaisar concluded. “But then again, it’s an ongoing war between the good and the evil.”