Government Warns of AI-Generated Fake Videos Targeting Army Chief
India’s Press Information Bureau (PIB) has issued a stern warning about AI-generated fake videos circulating on social media platforms that falsely attribute inflammatory statements to Army Chief General Upendra Dwivedi. The fabricated content, reportedly disseminated by Pakistani propaganda accounts, represents an escalation in cross-border information warfare.
The most recent falsified clip, which gained significant traction on X (formerly Twitter), portrayed General Dwivedi commenting on the alleged custodial death of climate activist Sonam Wangchuk. PIB’s fact-checking unit promptly labeled the video as “entirely fabricated” and confirmed that the Army Chief made no such statement.
“This fake video has been created using AI technology. The Chief of the Army Staff has made no such statement,” the PIB fact-checking team declared in their official statement. To counter the disinformation, authorities also released the authentic, unedited video of General Dwivedi’s actual remarks from the Chanakya Defence Dialogue 2025.
Security officials familiar with the matter say these forgeries are designed to undermine public trust in India’s armed forces and sow discord among different segments of Indian society. The incidents reflect the growing sophistication of disinformation campaigns built on artificial intelligence.
This isn’t an isolated incident. Earlier this week, similar Pakistani-linked accounts distributed another falsified video claiming General Dwivedi had proposed surrendering Arunachal Pradesh to China as a strategic concession to prevent Beijing from supporting Pakistan. The clip even contained false admissions about Chinese technology supposedly damaging India’s Rafale fighter jets during past tensions with Pakistan.
The PIB categorically dismissed these claims as malicious disinformation. “The Army Chief has made no such remarks,” the agency emphasized, highlighting the dangerous nature of such content.
The targeted campaign extends beyond military leadership. Government authorities have also identified a separate manipulated video featuring President Droupadi Murmu. In this instance, Pakistani propaganda accounts circulated altered footage falsely suggesting the President had warned about rising extremism, diminishing freedoms, and threats to minorities within India.
To counter this narrative, PIB released side-by-side comparisons of the authentic and doctored videos, along with a link to President Murmu’s complete Constitution Day address from the historic Central Hall of the old Parliament building.
Intelligence experts point to a concerning trend of state-backed disinformation campaigns employing increasingly convincing deepfake technologies. The sophistication of these videos marks a significant evolution from earlier, more easily identifiable manipulated content.
“What makes these new AI-generated videos particularly dangerous is their improved quality and the speed at which they can be produced and distributed,” noted a cybersecurity analyst who requested anonymity due to the sensitive nature of the topic. “Even brief viral circulation can cause substantial damage before fact-checkers intervene.”
The government has urged citizens to exercise heightened vigilance when consuming social media content, particularly involving senior officials or sensitive geopolitical matters. Authorities recommend verifying information through official government channels and established news organizations before sharing or reacting to potentially controversial content.
As deepfake technology becomes more accessible, intelligence agencies across the region are bolstering their capabilities to quickly identify and counter such disinformation. The incidents underscore the emerging challenges of a digital information landscape in which the line between authentic and fabricated content continues to blur.
The PIB has reiterated its commitment to promptly addressing misinformation and has encouraged citizens to report suspicious content to its fact-checking unit for verification.
16 Comments
This is concerning, as the use of AI to create fake videos can have serious implications for national security and public trust. It’s crucial that authorities remain vigilant and take proactive measures to counter such disinformation campaigns.
Absolutely. Vigilance and fact-checking are essential to combat these sophisticated forgeries. The public needs to be made aware of the risks posed by AI-generated deepfakes and the importance of verifying information from trusted sources.
This incident underscores the importance of developing robust AI-based tools to detect and counter deepfakes. The collaboration between government agencies, tech companies, and the public will be essential in mitigating the risks posed by such sophisticated disinformation campaigns.
Well said. The battle against AI-driven disinformation requires a comprehensive, coordinated response from all stakeholders. Investing in advanced detection methods, strengthening information-sharing, and educating the public will be crucial in staying ahead of these evolving threats.
The escalation of cross-border information warfare through the use of AI-generated deepfakes is a worrying trend that requires immediate attention. International cooperation and the development of effective countermeasures should be a priority.
I agree. This incident highlights the need for a comprehensive, global strategy to address the challenges posed by emerging technologies like AI-powered disinformation. Strengthening cybersecurity and information-sharing mechanisms will be crucial in this regard.
The Pakistani government’s alleged involvement in this disinformation campaign is troubling. It highlights the need for stronger international cooperation and information-sharing to address cross-border propaganda and protect national interests.
You’re right. This is a complex issue that requires a coordinated global response. Robust cybersecurity measures and diplomatic pressure may be necessary to deter such malicious activities and hold perpetrators accountable.
The Indian government’s swift action in debunking the fake video and providing the authentic footage is a positive step. However, the persistent threat of AI-driven disinformation campaigns remains a serious concern that requires ongoing vigilance and proactive measures.
Absolutely. Combating the spread of deepfakes and other forms of manipulated media content will require a multi-faceted approach, including technological solutions, policy frameworks, and public awareness campaigns. Maintaining transparency and trust in institutions is paramount in these challenging times.
This incident highlights the need for greater investment in AI-based tools to detect and combat deepfakes. Proactive measures to educate the public on the risks of manipulated media content are also essential.
Absolutely. As AI technology continues to advance, the potential for malicious use will only grow. Developing robust detection methods and raising awareness are critical to staying ahead of these threats.
The use of AI to create fake videos is a worrying trend that can undermine public discourse and erode trust in institutions. I’m glad the Indian authorities are taking this threat seriously and working to debunk these fabrications.
Agreed. The release of the authentic, unedited video of the Army Chief’s remarks is an important step in countering the disinformation. Transparency and fact-based communication are crucial in these situations.
While the use of AI in this disinformation campaign is concerning, I’m encouraged to see the Indian government taking a strong stance and working to expose the truth. Maintaining transparency and public trust is vital in these situations.
Well said. The government’s prompt response in labeling the video as fabricated and providing the authentic footage is a commendable effort to combat the spread of false information. Vigilance and fact-checking are the keys to countering such threats.