A viral video circulating on social media purportedly showing AIMIM MP Asaduddin Owaisi threatening to support Pakistan has been exposed as a sophisticated deepfake, according to an investigation by Vishvas News.
The 35-second clip, which began circulating in early January, appeared to show Owaisi addressing Prime Minister Narendra Modi and suggesting that Indian Muslims might side with Pakistan if they continued to face persecution. The video was shared by numerous social media accounts, including the Facebook page “Kashmiri Association of Australia,” which has approximately 36,000 followers.
Digital forensics experts at Vishvas News immediately identified several red flags in the video, including poor lip-syncing and unnatural voice modulation that suggested artificial manipulation. The news organization employed multiple AI detection tools to verify their suspicions.
Analysis using Aurigin.ai, a Swiss deep-tech company specializing in audio deepfake detection, revealed a 98% probability that the audio had been artificially generated. Further examination through the University at Buffalo’s Deepfake-o-meter using the FTCN model showed a 99% likelihood of AI manipulation in the video.
The Deepfakes Analysis Unit (DAU), an initiative of the Trusted Information Alliance, also examined the video using the Hive AI Deepfake Detector tool. Their analysis confirmed signs of AI manipulation affecting Owaisi’s facial features throughout multiple segments of the recording.
AI expert Azhar Machwe, consulted by Vishvas News, confirmed that the video was a deepfake and not authentic footage.
Investigators traced the original source material through reverse image searches of key frames, leading them to a legitimate video livestreamed on ANI’s YouTube channel on January 4, 2026. The section used to create the deepfake appears approximately 20 minutes into this authentic recording.
In the original footage, Owaisi’s actual statement bore no resemblance to the manipulated content. Rather than threatening to support Pakistan, he was discussing India’s financial support to Bangladesh under Prime Minister Modi’s administration. His authentic remarks, translated from Hindi, were: “Ask Narendra Modi, this year, Modi gave 120 crore rupees to Bangladesh. Tell me. A grant of 120 crores and 1160 MW of energy goes to Bangladesh. And listen, Hindu brothers and sisters, in 10 years, Modi gave 8 billion dollars of credit to Bangladesh. And you abuse Owaisi? I was never against any religion, nor will I ever be.”
This incident highlights the growing concern over deepfake technology in political discourse, particularly as India approaches election season. Such sophisticated AI manipulations can potentially sway public opinion and inflame communal tensions by putting inflammatory words into the mouths of public figures.
The case represents one of the most high-profile examples of deepfake technology being deployed against a prominent Indian politician in recent months. As detection tools improve, the race between deepfake creators and verification experts continues to escalate.
Social media platforms have faced increasing pressure to develop more robust systems for identifying and removing such manipulated content before it can reach wide audiences. However, the viral spread of this particular deepfake demonstrates the continuing challenges in containing disinformation once it begins circulating.
Vishvas News has urged social media users to verify controversial political content through official sources before sharing, particularly when videos contain inflammatory statements that could heighten social tensions.