In the wake of Monday’s deadly blast near Delhi’s Red Fort that claimed at least eight lives and injured dozens, misleading content has begun circulating widely on social media platforms, complicating public understanding of the tragedy.

An investigation has revealed that artificial intelligence was used to create fabricated footage of the explosion. One particularly viral Instagram reel, purporting to show the precise moment of the blast, garnered significant attention despite being entirely synthetic.

The deceptive video, which includes Hindi text describing a “terrorist blast in a car parked outside Delhi’s Red Fort” alongside the correct casualty count, depicts people walking toward an explosion, followed by footage of a burning vehicle. Analysis using Google’s SynthID, a specialized AI-detection tool, confirmed the video was artificially generated, with both visual and audio elements showing clear markers of synthetic creation.

Closer examination reveals a telltale watermark reading “Veo” (Google’s own AI video generation platform) in the bottom right corner of the footage. The fabricated content appears to incorporate elements of authentic aftermath footage, repurposing them within an entirely AI-generated sequence that adds dramatic explosions, flames, crowds, and sound effects not present in any verified documentation of the incident.

This sophisticated manipulation represents a concerning evolution in misinformation tactics. By blending elements of authentic imagery with computer-generated content, such videos can appear convincing to casual viewers, particularly during breaking news events when reliable information may be limited.

Several Indian fact-checking organizations have already flagged the video as inauthentic, warning their audiences about the misleading nature of the content. Delhi Police have meanwhile issued public appeals urging citizens to exercise caution and refrain from sharing unverified materials related to the incident, noting that official investigations remain ongoing.

The proliferation of AI-generated content isn’t limited to this single video. Additional analysis has identified multiple images circulating online that falsely claim to depict the Delhi explosion. Some utilize entirely AI-generated imagery, while others repurpose older, unrelated photographs and present them as current.

This incident highlights the growing challenge facing both authorities and the public in distinguishing between authentic and fabricated crisis documentation. As artificial intelligence tools become more sophisticated and accessible, the potential for rapid spread of convincing misinformation during emergencies increases substantially.

Media literacy experts recommend that consumers verify information through multiple trusted sources before sharing content related to breaking news events. Key indicators of potentially fabricated content include unusual visual artifacts, inconsistent lighting or physics, and watermarks from known AI generation platforms.

The Delhi blast investigation continues as authorities work to determine the cause and identify those responsible. Officials have called for public cooperation in maintaining calm and avoiding speculation that could hamper investigative efforts or increase community tensions.

For those seeking accurate information about the incident, authorities recommend following updates from official government channels, established news organizations with verification protocols, and recognized fact-checking services that specialize in analyzing visual evidence during crisis events.

The emergence of AI-generated content around this tragedy serves as a stark reminder of how rapidly evolving technology continues to reshape information ecosystems during critical public safety incidents, creating new challenges for responsible citizenship in the digital age.
