In the wake of Monday’s deadly blast near Delhi’s Red Fort, investigators and journalists alike are battling a new challenge: artificially generated content claiming to depict the explosion that killed eight people and injured dozens more.
An extensive review of videos and images circulating on social media has revealed multiple instances of both AI-generated footage and repurposed older content falsely presented as documentation of the explosion.
One particularly concerning example is a viral Instagram reel purporting to show the actual moment of the blast. The video, which has gained significant traction online, features Hindi text describing a “terrorist blast in a car parked outside Delhi’s Red Fort” along with the accurate casualty count of eight deaths.
The first portion of the video shows pedestrians walking toward an apparent explosion, followed by footage of a burning vehicle. However, analysis by BBC Verify has confirmed this video is entirely artificial.
When processed through Google’s SynthID, an AI-detection tool designed to identify synthetic media, the results indicated artificial generation in both the video and audio components. Further examination revealed a watermark in the bottom right corner of the footage reading “Veo”, the name of Google’s own AI video generation tool.
What makes this particular piece of misinformation especially deceptive is its incorporation of elements from authentic footage. The creators appear to have used verified images from the actual blast as a foundation, then manipulated them using AI to add dramatic explosions, flames, people, and sound effects.
Several Indian fact-checking organizations have also independently verified that the video is not authentic documentation of Monday’s events. The proliferation of such convincing falsified content presents significant challenges for authorities attempting to conduct a proper investigation while managing public fears.
Delhi police officials have issued public statements urging citizens not to share unverified materials related to the explosion. “Circulating fabricated content not only impedes our investigation but can create unnecessary panic among the public,” a senior police official said. “We ask everyone to rely on official channels for information as our work continues.”
The incident highlights the growing sophistication of AI-generated content and its potential to complicate crisis situations. As artificial intelligence tools become more accessible to the general public, the line between authentic documentation and convincing fakes continues to blur.
Monday’s explosion occurred in a busy area near one of Delhi’s most prominent historical landmarks. The blast’s proximity to the Red Fort, a UNESCO World Heritage site and symbol of national importance, has heightened concerns about security in the capital.
Authorities have not yet released official conclusions regarding the cause of the explosion or potential perpetrators, though investigations are ongoing. Meanwhile, hospital sources confirm that several of the injured remain in critical condition.
Media literacy experts warn that during crises, the rapid spread of misinformation – particularly visually convincing AI-generated content – can hamper emergency response efforts and complicate the work of authorities. They recommend verifying information through multiple reliable sources before sharing content related to breaking news events.
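One technique behind the verification work described above is reverse image search, which matches suspect frames against archives of known footage to catch repurposed older content. At its core sits perceptual hashing: visually similar images produce nearly identical fingerprints. The sketch below is illustrative only, a minimal “average hash” in pure Python operating on an 8×8 grayscale grid (real tools first downscale the frame with an image library and compare against large indexes; all function names here are our own):

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid.

    pixels: an 8x8 list of lists of integers in 0..255. In practice a
    video frame would first be downscaled to 8x8 and converted to
    grayscale (e.g. with Pillow); that preprocessing is omitted here.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for value in flat:
        # Each pixel contributes one bit: 1 if brighter than the mean.
        bits = (bits << 1) | (1 if value >= mean else 0)
    return bits


def hamming_distance(h1, h2):
    """Count differing bits between two hashes.

    A small distance suggests the frames are near-duplicates, i.e. the
    "new" footage may be recycled from older, already-archived material.
    """
    return bin(h1 ^ h2).count("1")
```

Because the hash depends only on each pixel’s brightness relative to the frame’s mean, re-encoding, mild compression, or small overlays barely change it, which is what makes the fingerprint useful for spotting recycled footage.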
As the investigation into the Delhi blast continues, officials emphasize the importance of responsible information sharing and urge the public to remain vigilant about the sources of content they consume and distribute online.
6 Comments
Wow, it’s alarming how advanced AI-generated content has become. The ability to create such realistic-looking fake videos is a real challenge for media verification and fact-checking. It’s crucial that we remain vigilant and rely on authoritative sources like the BBC to separate truth from fiction.
Kudos to the BBC for their diligent verification work. In an age of rampant misinformation, it’s vital that trusted news sources like them continue to fact-check and expose these kinds of deceptive videos. Maintaining journalistic integrity is more important than ever.
This is a concerning development, as the spread of disinformation can have real-world impacts. I’m glad the BBC was able to debunk this false video through careful analysis. It’s a good reminder to always be skeptical of online content, especially when it involves sensitive events.
You’re right, it’s a complex issue. AI-generated content will only become more sophisticated, so improving detection methods and media literacy is crucial.
This is a disturbing trend, but not entirely unexpected given the rapid advancements in AI and synthetic media. While the technology can have many positive applications, it’s clear that bad actors will try to exploit it for nefarious purposes. Strengthening our ability to detect and counter such threats is crucial.
The proliferation of AI-generated content is a serious challenge for media and the public. I’m glad the BBC took the time to thoroughly investigate this video and expose it as fabricated. It’s a sobering reminder that we must approach online information with a critical eye and rely on authoritative, fact-based reporting.