Viral Image of Iranian Mourners Grieving Schoolgirl Deaths Revealed as AI-Generated
A widely circulated image claiming to show Iranian civilians mourning schoolgirls killed in U.S. and Israeli airstrikes has been confirmed as artificially generated, according to a fact-check investigation.
The image, which began spreading across social media platforms on March 3, 2026, purportedly depicted Iranians grieving for victims from a girls’ primary school in Minab following reported airstrikes on February 28. The emotional scene quickly gained traction online, accompanied by captions condemning the United States for civilian casualties during military operations in Iran.
Analysis of the image revealed multiple telltale signs of artificial intelligence generation. Several of the schoolchildren’s photos held by mourners appeared distorted and unnaturally rendered. Additionally, mourners in the background displayed jumbled hand configurations – a common artifact of AI-generated imagery, in which software struggles to accurately render complex human features such as fingers.
Technical verification through specialized detection tools provided further evidence. Both Undetectable and WasItAI, independent AI detection platforms, identified a high probability that the image was synthetically created. While such tools aren’t definitive on their own, they offered supporting evidence alongside the visual anomalies apparent in the image.
Investigators also submitted the image to Google Gemini to check for the presence of SynthID – a hidden watermark Google embeds in all AI-generated images created with its tools. No such watermark was found, indicating the image was likely created using different AI generation software.
The origin of the viral post was traced to Harmeet Singh, who describes himself as Pakistan’s first Sikh journalist and anchor in his social media biography. In a follow-up post, Singh acknowledged the image was AI-generated, stating he shared it “symbolically to reflect the scale of the tragedy.” At the time of publication, Singh had not responded to requests for comment on whether he personally created the image.
This incident occurs amid heightened tensions in the Middle East following reported airstrikes in Iran. Separate fact-checking efforts have examined unverified claims that the strike on the girls’ school resulted from a misfired Iranian missile. Meanwhile, an aerial image showing graves for the victims was verified as authentic.
The spread of this synthetic image highlights growing challenges in distinguishing between authentic and artificially generated content during international crises. As AI image generation tools become increasingly sophisticated and accessible, media literacy experts warn consumers to scrutinize emotional imagery carefully, especially during developing news events where verification may lag behind viral spread.
For journalists and researchers, the incident underscores the importance of multiple verification methods when assessing potentially synthetic content, including close examination of image details, use of detection tools, source tracking, and confirmation from multiple reliable sources on the ground.
The rapid proliferation of the fake mourning image also demonstrates how emotionally provocative content can quickly gain traction during geopolitical tensions, potentially influencing public perception of events before verification processes can catch up.