India’s Press Information Bureau (PIB) Fact Check unit has exposed a digitally manipulated video circulating on social media that falsely attributes controversial statements to Lieutenant General Manjinder Singh, Army Commander of the Indian Army’s South Western Command.
The fabricated video, spread through Pakistani propaganda accounts, shows Lt Gen Singh appearing to dismiss Indian Army exercises as mere “political optics to boost the image for the Bihar elections.” The altered footage also falsely depicts the senior officer making politically charged remarks about serving the nation rather than any particular political party.
“The AI-generated fake video is being circulated to mislead people and create distrust against the Indian Armed Forces,” the PIB stated in its official fact-check report. The bureau emphatically confirmed that Lt Gen Manjinder Singh never made such remarks.
The PIB compared the doctored video with the original footage recorded in Bikaner, Rajasthan, revealing stark differences in content and intent. In the authentic recording, Lt Gen Singh discusses the Indian Army’s preparedness and training methodologies, with no mention of political motivations or elections.
“The Indian Army is following the political direction of the ‘New Normal’, under which any terror act on the country will be considered an ‘act of war’. The military has to prepare for any such activities,” Lt Gen Singh actually stated in the original video. He further elaborated on technological advancements and the army’s focus on night training, noting that “70% of the training [is conducted] at night and 30% during the day.”
By contrast, the fabricated version included inflammatory statements suggesting political interference in military operations: “We all know these so-called threshold exercises are being pushed for optics to boost the image for Bihar elections, but remember we serve India, not any party… They may try to saffronize our image, but the Indian Army belongs only to the Republic, not to Modi.”
This incident marks another chapter in the growing challenge of deepfake videos and AI-manipulated content being deployed for geopolitical purposes. The India-Pakistan relationship, long marked by tension, has increasingly become an arena for information warfare that uses advanced technologies to spread misinformation.
Security analysts point out that such sophisticated disinformation campaigns are designed to create internal divisions and undermine institutional credibility. The targeting of high-ranking military officials is particularly concerning as it attempts to suggest political partisanship within an organization that prides itself on political neutrality.
The timing of this manipulated content also appears strategic, coming amid heightened political activity surrounding the Bihar state elections. By falsely linking military exercises to electoral politics, the fabricated video attempts to sow doubt about the independence of India’s armed forces.
In response to the rising tide of deepfakes and AI-manipulated content, the PIB has intensified its fact-checking efforts. Authorities have urged social media users to verify information through official channels before sharing potentially misleading content.
This incident underscores the evolving nature of information warfare in South Asia, where traditional geopolitical rivalries are increasingly playing out in the digital domain through sophisticated technological deception rather than conventional means.
Cybersecurity experts warn that as AI-generated content becomes more convincing and easier to produce, distinguishing truth from fabrication will require greater vigilance from both institutions and individual citizens in the information ecosystem.
12 Comments
The use of AI to create false narratives and undermine public discourse is a growing threat to democratic societies. This incident highlights the need for greater transparency and oversight in the development and deployment of these powerful technologies. Robust fact-checking and public awareness campaigns will be essential.
Well said. Addressing the challenges posed by AI-generated propaganda requires a multifaceted approach involving policymakers, technology companies, media organizations, and the public. Collaboration and a shared commitment to truth and accountability will be key to mitigating these risks.
Propagandists are becoming increasingly sophisticated in their use of AI and other technologies to create false narratives. This highlights the need for robust media literacy programs to help the public spot manipulated content. Governments must also invest in tools to detect and counter such disinformation campaigns.
Absolutely. Combating AI-generated propaganda requires a multi-pronged approach, including technological solutions, public awareness, and international cooperation. Fact-checking and debunking efforts are crucial first steps.
This is a concerning report on the use of AI-generated propaganda to mislead the public. It’s worrying to see the spread of disinformation targeting military leadership and sowing distrust. Fact-checking is crucial to expose these fabricated videos and maintain public trust.
Agreed. Doctoring footage in this way is a serious issue that undermines democratic discourse. I’m glad the Indian authorities were able to quickly identify and refute the false claims.
This case of AI-generated propaganda targeting the Indian military is deeply concerning. It’s a stark reminder of the potential for malicious actors to weaponize emerging technologies to sow discord and erode public trust. Strengthening digital forensics and media literacy should be top priorities to counter these threats.
Absolutely. The proliferation of AI-powered disinformation poses a serious threat to democratic institutions and public discourse. Enhancing international cooperation and developing comprehensive policy frameworks to govern the responsible use of these technologies will be critical going forward.
It’s worrying to see the level of sophistication in these AI-manipulated videos. While the technology has many beneficial applications, it’s clear that bad actors are exploiting it for nefarious purposes. Vigilance and proactive measures are needed to stay ahead of the propaganda curve.
You make a good point. The rapid advancement of AI presents both opportunities and challenges. Responsible development and deployment of these technologies, coupled with robust safeguards, will be essential to mitigate the risks of malicious use.
This is a stark reminder of the dangers of AI-powered disinformation. While the technology has immense potential, it’s crucial that we develop robust frameworks to ensure it’s not misused for propaganda and undermining public trust. Fact-checking and digital literacy initiatives will be key going forward.
I agree. The proliferation of AI-generated content raises serious concerns about the integrity of information. Strengthening media literacy, improving detection capabilities, and implementing accountability measures will be critical to combating the spread of manipulated media.