
Claims of a drone attack on President Vladimir Putin’s residence have been debunked after purported CCTV footage circulating on social media was confirmed to be AI-generated, according to an investigation by Vishvas News.

The footage, which purportedly showed drones dropping bombs on a building identified as Putin’s palace, began circulating on January 1, 2026. Social media users, including Facebook user Ahmed Nisar, shared the video with captions suggesting that drones had penetrated deep inside Russian territory to target the Russian leader’s home.

Close examination of the video revealed several inconsistencies that raised immediate red flags. Most notably, the timestamp on the CCTV footage jumps backward from 2:14:23 to 2:14:10 during the sequence. Additionally, a vehicle in the footage makes an unnatural turn midway through the video, further suggesting manipulation.
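The backward-jumping timestamp is the kind of inconsistency that can be checked mechanically once the on-screen clock has been read from each frame. A minimal sketch of such a check, assuming the timestamps have already been extracted (the helper name and sample values are illustrative; this is not a tool used in the Vishvas News investigation):

```python
from datetime import datetime

def timestamps_monotonic(stamps):
    """Return True if each CCTV timestamp is >= the one before it.

    Genuine CCTV overlays should never run backward; a decreasing
    timestamp is a strong sign of editing or synthetic generation.
    """
    parsed = [datetime.strptime(s, "%H:%M:%S") for s in stamps]
    return all(a <= b for a, b in zip(parsed, parsed[1:]))

# The sequence reported in the investigation jumps from 2:14:23 back to 2:14:10.
suspect = ["2:14:20", "2:14:23", "2:14:10"]
print(timestamps_monotonic(suspect))  # False: the clock runs backward
```

A check like this only flags one class of artifact; the investigators combined it with visual analysis and multiple AI-detection models before reaching a conclusion.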

To verify the authenticity of the footage, investigators employed multiple AI detection tools. Deepfake-O-Meter’s AVSRDD (2025) model assessed the footage as having a 100% probability of being AI-generated. This conclusion was supported by TruthScan, which indicated a 99% probability of artificial creation, while undetectable.ai registered a 59% probability of AI manipulation.

AI expert Azhar Machwe, consulted during the investigation, pointed to specific elements in the video that betrayed its artificial nature, including the unnatural movement of a fluttering flag and the suspicious behavior of the vehicle in motion.

The timing of this fabricated footage coincides with heightened tensions between Russia and Ukraine. According to Reuters reporting from January 1, Russia’s Defense Ministry had released separate video footage allegedly showing a downed Ukrainian drone and claimed that Ukraine had attempted to attack Putin’s residence in the Novgorod region. Russian officials, including Major-General Alexander Romanenkov, alleged that 91 drones were launched from Ukraine’s Sumy and Chernihiv regions in what they described as a “perfectly planned” attack that was ultimately thwarted by Russian air defenses.

Ukraine has categorically denied these allegations, suggesting that Moscow fabricated the alleged attack to impede progress in peace negotiations aimed at ending the conflict. Ukrainian officials maintain that Russia has presented no credible evidence to support its claims.

The circulation of AI-generated footage purporting to show attacks on high-profile leaders highlights the growing challenge of misinformation in conflict zones. As AI technology becomes more sophisticated and accessible, distinguishing between authentic and fabricated content has become increasingly difficult for the general public.

This incident underscores the critical importance of verification and fact-checking in an era where visual evidence can be manipulated to support misleading narratives. It also demonstrates how sophisticated AI tools can now create convincing fake footage that might be used to influence public opinion or escalate international tensions.

As the Russia-Ukraine conflict continues, the battle against misinformation remains a crucial component in understanding the true nature of events on the ground.



A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.