In the wake of the recent US-Israeli military strike on Iran, social media platforms have become flooded with images and videos purporting to document the conflict. However, many of these visuals are misleading—either recycled from previous conflicts, manipulated with AI, or even taken directly from video games like War Thunder.
As misinformation proliferates online during times of conflict, digital investigators and reputable news organizations have developed sophisticated verification methods to separate fact from fiction. Publications like The New York Times, Indicator, and Bellingcat employ rigorous authentication procedures before publishing any visual content related to breaking news events.
“Audiences can turn to trusted, independent news organizations that take the time and effort to authenticate visuals and clearly explain sourcing,” says Charlie Stadtlander, executive director for media relations and communications at The Times.
While no verification system is entirely foolproof, especially with the rapid advancement of AI technology, media organizations maintain high standards backed by years of experience in detecting misleading content. Their verification methods offer valuable lessons for the general public to better evaluate information during major news events.
Visual investigation begins with close examination of images for inconsistencies that might indicate manipulation. When unverified images of Venezuelan leader Nicolás Maduro appeared after his reported abduction by the US in January, The Times’ Visual Investigations team meticulously scrutinized them, looking for telltale signs of tampering. In one case, they spotted unusual-looking aircraft windows that raised red flags about authenticity.
“Minor tweaks like cropping or contrast are fine and always have been, but once you add, remove, or fabricate elements (especially with AI), it’s no longer a photo, it’s digital art or propaganda,” explains Eliot Higgins, creative director at Bellingcat.
Source credibility forms another critical pillar of verification. Even official government accounts aren’t automatically trustworthy. When The Times published an image of Maduro in custody from President Donald Trump’s Truth Social account, they presented it as a screenshot of the full post rather than an isolated image, acknowledging they couldn’t independently verify its authenticity.
“In this case, the president’s Truth Social post itself was newsworthy, even if we had no surefire way to confirm that the image was authentic,” noted Meaghan Looram, The Times’ photography director.
Verification experts also recommend examining account history when evaluating social media posts. Jeremy Carrasco, creator of verification tools ShowtoolsAI and Riddance, calls this the “Account Age Paradox”—accounts spreading sophisticated deepfakes often have recent creation dates coinciding with the release of advanced AI models, while older fakes typically contain more obvious flaws.
Digital footprint analysis provides another verification method. Using reverse image search tools like those offered by Google and Yandex can quickly reveal if an image has appeared elsewhere or in different contexts. For instance, a viral post claiming to show missiles striking an Israeli nuclear facility was exposed as footage from Ukraine in 2017.
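Under the hood, reverse image search engines typically rely on perceptual hashing: a compact fingerprint that stays nearly identical when an image is resized, recompressed, or slightly brightened, so recycled footage can be matched against older copies. The sketch below is a minimal illustration of the idea using a simple average hash on a grayscale pixel grid; it is not Google's or Yandex's actual matching algorithm, which uses far more robust features.

```python
def average_hash(pixels, grid=8):
    """Compute a simple 64-bit average hash ('aHash') of a grayscale image,
    given as a list of rows of 0-255 values. Near-duplicate images produce
    near-identical hashes, which is the core idea behind reverse image search.
    Assumes image dimensions are divisible by `grid`."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // grid, w // grid
    # Downsample to a grid x grid image by block averaging.
    cells = []
    for gy in range(grid):
        for gx in range(grid):
            total = sum(
                pixels[y][x]
                for y in range(gy * bh, (gy + 1) * bh)
                for x in range(gx * bw, (gx + 1) * bw)
            )
            cells.append(total / (bh * bw))
    # Each bit records whether a cell is brighter than the overall mean.
    mean = sum(cells) / len(cells)
    bits = 0
    for c in cells:
        bits = (bits << 1) | (1 if c > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes; small = likely the same image."""
    return bin(a ^ b).count("1")
```

In practice, two copies of the same frame pulled from different uploads will differ by only a few bits, while unrelated images differ by dozens, which is why the technique survives recompression and resizing.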
OSINT platform Bellingcat combines visual checks, cross-referencing, and specialized software to verify content. However, Higgins acknowledges the growing challenge posed by AI: “The flood of convincing fakes has sped things up and given bad actors a handy ‘it could be AI’ excuse to dismiss real footage. Our methods still hold because we focus on provenance and context, not just pixels, but the noise level is way higher now.”
Location verification forms another key component of authentication. Investigators use satellite imagery, Google Maps, and identifiable landmarks to confirm if footage matches its claimed location. The New York Times’ teams can even estimate the time of day based on shadow analysis using tools like SunCalc, while corroborating evidence from nearby security cameras might provide additional verification.
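The geometry behind shadow analysis is straightforward: the sun's elevation at a given place, date, and time determines how long a shadow is relative to the object casting it. The sketch below shows that relationship using a standard textbook approximation for solar position; it is a simplified illustration of what tools like SunCalc compute, not their actual implementation, and is only accurate to about a degree.

```python
import math

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle in degrees for a latitude,
    day of year, and local solar time (12 = solar noon).
    Uses the common cosine approximation for solar declination."""
    decl = -23.44 * math.cos(math.radians(360 / 365 * (day_of_year + 10)))
    hour_angle = 15 * (solar_hour - 12)  # the sun moves 15 degrees per hour
    lat, d, h = (math.radians(v) for v in (lat_deg, decl, hour_angle))
    sin_elev = math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
    return math.degrees(math.asin(sin_elev))

def shadow_ratio(elevation_deg):
    """Shadow length divided by object height: 1 / tan(sun elevation).
    Investigators invert this: measure the ratio in a photo, then find
    the times of day at which the sun produces that ratio."""
    return 1 / math.tan(math.radians(elevation_deg))
```

For example, a shadow exactly as long as the object casting it implies the sun sits at 45 degrees, which at a known location and date narrows the shot to two candidate times of day.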
Craig Silverman, an expert on misinformation and cofounder of the OSINT platform Indicator, emphasizes the importance of vigilance for everyday social media users. “The average person needs to understand that the current information environment is tilted towards manipulation and deception. This requires you to scroll with an awareness of how easily images, video, and text can be manipulated,” he told The Verge.
The failure of major social platforms to consistently label AI-generated content compounds the problem, creating what Silverman describes as “a chaotic, deception-filled, digital landscape that overwhelms and misinforms.”
For the average person navigating this environment, experts suggest practicing restraint before sharing emotional or viral content. Many verification tools used by professional newsrooms are freely available, and cross-checking suspicious posts with multiple independent sources can help prevent the spread of misinformation.
“Remember that it takes time for information to develop, especially when it comes to fast-moving conflicts and other news stories,” Silverman advises. “Awareness and patience are critical, and they don’t require tools or expertise. But you do have to practice.”