AI-Enhanced Images Fueling Misinformation in West Asia Conflict
As fighting intensifies across West Asia, a new front has emerged in the information war: AI-enhanced photographs that subtly alter reality while appearing authentic. These manipulated images are increasingly shaping public perception of the conflict, blurring the line between documentation and distortion.
Unlike completely fabricated images, this new breed of misinformation starts with genuine photographs that are then digitally “enhanced” using artificial intelligence tools. The result is often more dramatic, detailed, or emotionally charged than the original.
“AI-enhancement may subtly alter textures, faces, lighting, or background details, creating an image that looks more ‘real’ than the original,” explains Evangelos Kanoulas, a professor of artificial intelligence at the University of Amsterdam. These modifications can “strengthen a particular narrative about an event – for example, making a protest appear more violent, making a crowd appear larger, making facial expressions more intense.”
A widely circulated image demonstrates this problem. It shows a kneeling U.S. pilot being confronted by a Kuwaiti local after parachuting from his aircraft. The high-resolution image spread rapidly across social media platforms and was even published by established news outlets.
Forensic analysis revealed telltale signs of AI manipulation, including a SynthID watermark – an invisible marker embedded in content created with Google AI tools. Most notably, the pilot appeared to have only four fingers on each hand, a common error in AI-generated imagery.
Investigators located an earlier version of the same photograph on Telegram that appeared blurry rather than sharply detailed. AI verification tools confirmed this earlier image was authentic, suggesting it served as the source material before being processed through AI enhancement tools.
The underlying incident is consistent with verified reports that Kuwait had mistakenly shot down three U.S. warplanes, with video evidence and satellite imagery confirming the location and event. While the scene itself was real, the widely shared image had been digitally altered to appear more dramatic.
Similar alterations appeared after Iranian strikes near Erbil airport in Iraq on March 1. Social media users shared an image showing massive flames rising skyward, but AI detection tools again identified Google AI’s SynthID watermark embedded in the photograph.
The original, unaltered version of the same image showed a much smaller fire, thinner smoke column, and less intense colors. The AI-enhanced version significantly amplified the visual impact and apparent scale of the destruction.
“Even little changes can end up telling a very different story,” warns James O’Brien, professor of computer science at the University of California, Berkeley. These alterations “could change the perception of events” in significant ways, he notes.
The technology can also introduce completely fabricated elements. Kanoulas points out that AI systems sometimes “hallucinate” details that never existed in the original image, further complicating efforts to separate fact from fiction.
This pattern emerged earlier this year following the shooting of Alex Pretti by federal immigration agents in Minneapolis. An AI-enhanced version of a video frame showing the incident circulated widely online. In the original low-quality footage, Pretti held a mobile phone, but the enhanced version was ambiguous enough that some social media users misinterpreted the object as a weapon.
The proliferation of these subtly altered images is eroding public trust in visual evidence from conflict zones. Without clear labeling or disclosure of AI enhancement, audiences face increasing difficulty distinguishing between authentic documentation and manipulated imagery.
“This type of content is already having a huge impact on people and their ability to trust the truth,” O’Brien says. The problem extends beyond obvious fakes to undermine confidence in legitimate photojournalism.
Kanoulas agrees, noting, “People start doubting authentic images as well,” creating a crisis of credibility that complicates public understanding of conflicts worldwide.
As the war triggered by U.S.-Israeli attacks on Iran continues to unfold, the battle between authentic and manipulated imagery threatens to create parallel perceptions of reality, with audiences increasingly uncertain about what to believe.


6 Comments
Disturbing to see how AI-enhanced images can be weaponized to mislead the public during conflicts. We need robust methods to detect and counter this kind of digital misinformation.
This highlights the need for greater digital literacy and media analysis skills, so the public can critically evaluate the authenticity of images they encounter, especially during times of crisis and unrest.
AI-enhanced images are a double-edged sword – they can provide more detailed visuals, but also enable the spread of misinformation. Robust verification processes are crucial to maintain trust in visual reporting.
The use of AI to create more ‘realistic’ but distorted images is a troubling trend. We must be vigilant in distinguishing truth from fiction, and ensure journalistic integrity in reporting on complex conflicts.
This is a concerning development. AI tools that subtly alter photographs risk undermining public trust and spreading false narratives. Fact-checking and digital forensics will be crucial to combat these manipulated images.
Agreed. Maintaining transparency and authenticating visual evidence is essential, especially in conflict zones where information is hotly contested.