Fact-Check: Viral Video of Hindu Man in Bangladesh Revealed as AI-Generated
A video circulating widely on social media that allegedly shows a Hindu man in Bangladesh pleading for his life amid violence has been confirmed as artificially generated, according to an investigation by India Today’s Fact Check team.
The nighttime selfie video, which gained significant traction online, depicts a man walking along a street with shops ablaze in the background. In the clip, the individual claims, “I’m in Bangladesh. It’s nighttime. Like ‘Dipu Chadar’, they’ll kill us as well. You can see what is happening here. Share this video as much as possible so that somebody can save us.”
The video surfaced amid heightened tensions following reports of violence against minority Hindu communities in Bangladesh. Some social media users claimed the footage was recorded on December 23, with one user posting it with the caption “Bangladesh is getting out of hand.” The clip was further amplified when Sudarshan News journalist Sagar Kumar shared it with an appeal to Indian Prime Minister Narendra Modi to intervene and “save Hindus in Bangladesh.”
However, multiple telltale signs within the footage reveal its artificial nature. The most obvious indicator is the mispronunciation of Dipu Chandra Das's name as "Dipu Chadar" – "chadar" being the Hindi word for "sheet." Such a linguistic error would be highly unusual for a Bangladeshi Hindu speaking about his own community in distress.
Visual inconsistencies throughout the video further confirm its artificial creation. The fact-checking team identified several anomalies, including an electric pole emitting light without any visible source and a car whose door coloring does not match the rest of its body. Most revealing is a scene in which a Bangladeshi flag on a pole partially turns white as the subject walks past it – a glitch characteristic of AI-generated video.
The investigation traced the video to its original source: an Instagram account belonging to Kuldeep Meena, who posted it on December 24. This account has a history of publishing numerous AI-generated selfie videos featuring the same person, often depicting scenes of alleged violence against Hindus in Bangladesh. A comparison between these videos and actual photographs of Meena on his account confirms that he used his own likeness as the template for creating these artificial clips.
This fabricated video emerges against a backdrop of real concerns about minority safety in Bangladesh. In recent months, reports of violence against Hindu communities have raised alarm among human rights organizations and neighboring countries. The situation has been particularly tense following political transitions in the country.
The circulation of such AI-generated content represents an alarming trend in misinformation that can potentially inflame already sensitive situations. The increasingly sophisticated nature of AI tools makes detection challenging for average social media users, though visual inconsistencies still provide clues for careful observers.
Digital rights experts have warned about the growing prevalence of synthetic media being used to misrepresent events, particularly in regions experiencing political or sectarian tensions. This incident highlights the importance of media literacy and critical evaluation of emotionally charged content shared through social platforms.
While concerns about minority treatment in Bangladesh remain valid topics for discussion, this particular video does not represent actual events but rather stands as an example of how artificial intelligence can be misused to manufacture false narratives around legitimate issues.