AI Manipulation Reaches News Photo Agencies in Middle East Conflict Coverage
Manipulated or recycled photos and videos frequently circulate during wars and crises, but the ongoing conflict between the US, Israel and Iran has brought the problem to unprecedented levels. Several major European news organizations have discovered that photo agencies themselves were supplied with manipulated or fake images, which then made their way into mainstream newsrooms across the continent.
Some images appear to have been generated with artificial intelligence, while others were digitally altered by humans, creating a troubling new front in the battle against misinformation during armed conflicts.
The SalamPix Scandal Unfolds
In early March, Dutch media reported that ANP (Algemeen Nederlands Persbureau), the Netherlands’ largest news agency, removed approximately 1,000 Iran-related photos from its database after suspicions arose that some had been manipulated with AI.
Just days later, the Dutch branch of media network RTL revealed that its news service, RTL Nieuws, had unknowingly used three of these images on its website and mobile app. After being alerted by ANP that the photos were AI-generated, RTL promptly removed them and published a detailed explanation identifying which images had been taken down and why.
The problem quickly proved more widespread. German weekly news magazine Der Spiegel acknowledged that it had also used an AI-manipulated image in its coverage before discovering it was fake.
In both cases, the images came through respected news agencies. RTL received them through ANP, while Der Spiegel obtained them via dpa Picture Alliance, ddp and Imago Images, all of which had sourced the material from French agency Abaca Press.
The trail ultimately led back to SalamPix, an Iranian photo agency. According to Der Spiegel, SalamPix provided the photos to Abaca Press, which then distributed them to the databases of multiple international agencies and, eventually, to newsrooms across Europe.
In response to these revelations, many photo agencies have blocked SalamPix entirely or issued “kill notices,” instructing media clients to remove all SalamPix images from publication.
Breaking the Agency Trust Model
In many countries, including Germany, the “agency privilege” is a legal concept that generally allows media outlets to rely on the authenticity of verified text, image and video material delivered by news agencies with whom they have established relationships. Even global broadcasters like Deutsche Welle (DW) routinely rely on external agencies to cover international events.
But as AI-generated and AI-manipulated content becomes increasingly sophisticated, distinguishing real images from fabricated ones grows more challenging. The problem is both technological and logistical. During fast-moving breaking news situations, journalists and agencies must sift through enormous volumes of visuals at high speed. DW alone receives an average of 140,000 images per day from agencies.
“Transparency is one of our highest priorities. Whenever we show AI-generated content, it must be clearly and unmistakably identifiable as such,” explains DW Editor-in-Chief Mathias Stamm. “And if we make a mistake — as in the case of using images from the agency SalamPix — we acknowledge it and remain transparent.”
Examples of AI Manipulation
Following initial reports about SalamPix, DW reviewed its own coverage and found that it, too, had used the problematic images. The broadcaster subsequently removed all SalamPix images from its publications and added a correction note beneath each affected article.
One example was a seemingly realistic street scene depicting an alleged missile strike in Tehran, showing yellow taxis in the foreground, buildings in the background, and smoke rising behind them. It was distributed through news agencies in 2026.
On closer inspection, several unmistakable AI glitches became apparent. Most notably, the writing on walls and vehicles looks like text at first glance but, when examined closely, matches neither Farsi, Arabic, nor any actual language. Instead, it consists of nonsensical pseudo-text, a common flaw in AI-generated imagery.
Another SalamPix image purported to show security forces suppressing protesters in Tehran on January 8, 2026. This image also contained revealing AI artifacts: mismatched shoes, anatomically incorrect hands, and shadows that didn’t correspond to the actual body parts.
DW’s Fact Check team also examined older SalamPix content and, like Germany’s Süddeutsche Zeitung newspaper, found irregularities in previously published images. One example from November 2022 allegedly showed Iranian protesters clashing with security forces in Mahabad city. The image displayed typical AI errors from that period: deformed hands and fingers, misaligned building windows, and distorted facial features.
The Growing Challenge of Detection
As AI tools improve, fabricated visuals are becoming increasingly difficult to distinguish from authentic ones. This means both everyday users and professional journalists risk being misled, as this case clearly demonstrates.
Media organizations, including DW, are investing heavily in training staff to detect and debunk AI manipulation. Many are also producing media literacy content to help audiences recognize misleading images and videos.
The SalamPix case represents a concerning evolution in misinformation tactics during international conflicts, where manipulated content is being laundered through trusted news agencies rather than just circulating on social media platforms. As detection methods improve, so too will the sophistication of these deceptions, creating an ongoing challenge for news organizations committed to accuracy and transparency.