AI-Generated Misinformation Complicates Middle East Conflict Coverage

As tensions escalate between the United States, Israel, and Iran, a new battlefield has emerged alongside the physical one: the information space, where artificial intelligence is fueling unprecedented levels of misinformation.

Over the past five days, U.S. and Israeli forces have launched attacks against Iran, bringing the region to the brink of a wider war. As the conflict intensifies, social media platforms have been flooded with imagery purportedly showing damage across Iran, Lebanon, Israel, the UAE, Qatar, Kuwait, Bahrain, and other Middle Eastern nations.

A recent incident highlighted the growing problem when the Iranian newspaper Tehran Times posted an image on X (formerly Twitter) that supposedly showed damage to an American radar system in Qatar from an Iranian drone strike. A Financial Times analysis revealed the image was AI-altered and actually depicted an area in Bahrain, not Qatar. Despite being debunked, the post garnered nearly one million views.

“With satellite imagery, you’re looking at buildings, roads, terrain — things that don’t have inherent cues that signal manipulation,” explained Henk van Ess, an expert in online research methods and author of the Digital Digging newsletter. “Most people have no idea what a genuine satellite image is supposed to look like from a specific sensor at a specific resolution.”

This pattern of AI-driven misinformation isn’t new. During the 12-day conflict between Israel and Iran in June 2025, the BBC reported that numerous AI-generated videos falsely depicting Iran’s military capabilities and damage to Israeli sites were circulating widely. Similarly, pro-Israel accounts shared outdated footage of Iranian protests, falsely claiming it showed current demonstrations against the Khamenei regime.

Online verification group GeoConfirmed has been working overtime to identify fake and unrelated videos being shared in the context of the conflict. Their most recent efforts debunked a viral tweet claiming that a strike on the Minab girls’ school was a failed Iranian Revolutionary Guard Corps launch rather than a U.S.-Israeli attack.

“This claim, with almost 11,000 likes, 5,000 retweets and 750,000+ views is WRONG based on GeoConfirmed geolocations,” the group stated on X.

The problem extends beyond social media into traditional news outlets. In the rush to be first with breaking news, television channels have aired AI-generated videos, including one that purportedly showed an Iranian ballistic missile hitting Tel Aviv. Indian journalist and Alt News co-founder Mohammed Zubair later debunked the video as an AI creation.

Another viral video claimed to show Tel Aviv after being struck by Iranian missiles, depicting collapsed buildings and broken roads. Fact-checkers quickly identified the footage as actually showing aftermath from the 2024 Turkey earthquakes.

Brady Africk, an independent open-source intelligence researcher and director of media relations at the American Enterprise Institute, noted that manipulated satellite images present a particular challenge. “There is a large trust factor with satellite images due to the complex nature of the content and technology used to capture it,” Africk told the Financial Times.

In response to the surge in fake content, X’s head of product Nikita Bier announced stronger measures to combat AI-generated material. “Starting now, users who post AI-generated videos of an armed conflict—without adding a disclosure that it was made with AI—will be suspended from Creator Revenue Sharing for 90 days. Subsequent violations will result in a permanent suspension from the program,” Bier wrote.

The platform has also enhanced its Community Notes feature, which helps fact-check viral content and alerts users who interacted with posts containing false information.

Government authorities are also taking action. In the United Arab Emirates, Dubai police warned against spreading rumors and disinformation, stating that violators face fines of at least 200,000 dirhams (approximately $54,000).

This Middle East conflict demonstrates how artificial intelligence has become a powerful tool for spreading misinformation during geopolitical crises. With easily accessible AI tools such as ChatGPT, Gemini, and Grok, virtually anyone can alter images or generate convincing fake videos from simple prompts. That creates significant challenges for journalists, fact-checkers, and members of the public seeking accurate information during critical events.


10 Comments

  1. Lucas X. Lee

    This incident highlights the need for robust fact-checking and verification processes, especially when it comes to sensitive geopolitical issues. The proliferation of AI-generated misinformation is a concerning development.

  2. Isabella H. Davis

    The use of AI to generate deceptive imagery is a worrying trend that could have serious implications, especially in the context of sensitive geopolitical conflicts. Rigorous verification processes are essential to combat the spread of misinformation.

  3. It’s alarming to see how AI-altered satellite imagery can be used to mislead the public and potentially escalate tensions. Maintaining transparency and truth in media coverage is crucial during such delicate situations.

    • Olivia Thomas

      Absolutely. The ability to manipulate visual evidence so convincingly is a major challenge that news organizations and social media platforms will have to grapple with. Vigilance and fact-checking will be key.

  4. Olivia Garcia

    Interesting development with the AI-modified satellite images being used to spread misinformation. This really highlights the challenges of verifying information in the digital age, especially around sensitive geopolitical issues.

    • Robert Smith

      Absolutely, the ease with which images can be manipulated is very concerning. Fact-checking and source verification are critical to combat the spread of disinformation.

  5. Patricia Thomas

    This is a concerning development that highlights the need for improved digital literacy and verification processes, particularly when it comes to sensitive issues like geopolitical conflicts. The ability to manipulate visual evidence using AI is a serious threat to informed discourse.

    • Olivia Miller

      Agreed. The proliferation of AI-generated misinformation is a major challenge that will require a concerted effort from media outlets, tech companies, and the public to address effectively.

  6. Olivia Martinez

    The use of AI to generate misleading imagery is a worrying trend. It’s crucial that media outlets and the public remain vigilant in scrutinizing visual content, especially during times of heightened tensions.

    • Agreed. The ability to convincingly alter images using AI is a serious threat to informed discourse. Improving digital literacy and verification processes will be key to addressing this challenge.


A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.