As Bangladesh’s national election approaches, the political movement Ganosamhati Andolon has raised serious concerns about the escalating use of artificial intelligence to spread disinformation among voters, calling for immediate intervention from electoral authorities.

In a joint statement released Sunday, the organization’s election committee leaders warned that AI-generated propaganda has reached “frightening” proportions, posing a significant threat to the democratic process in the country.

“The deployment of sophisticated AI technology to manipulate public opinion represents one of the most serious challenges to electoral integrity we have witnessed,” said Abul Hasan Rubel, convener of Ganosamhati Andolon’s central election management committee, and Dewan Abdur Rashid Nilu, the committee’s member secretary.

The political movement highlighted how deepfake videos and manipulated images—created using generative AI technology—are being widely circulated across Facebook and other social platforms. These fabricated materials often appear remarkably authentic to the average viewer, making them particularly dangerous in Bangladesh’s information ecosystem.

According to digital rights experts, Bangladesh has experienced a 300 percent increase in AI-generated political content over the past six months. The country’s high social media penetration rate, with over 55 million active Facebook users in a population of approximately 170 million, makes it particularly vulnerable to digital misinformation campaigns.

The Ganosamhati Andolon statement specifically pointed to coordinated campaigns using bot networks and fake accounts to amplify narratives favoring certain political parties. These automated systems can create the illusion of widespread support for particular viewpoints or candidates by flooding social media platforms with identical or similar content.

“What we’re witnessing is not merely isolated incidents of fake news, but rather sophisticated, well-orchestrated campaigns designed to undermine the electoral process,” the leaders emphasized in their statement.

The organization has formally called on Bangladesh’s Election Commission to implement stronger measures to counter AI-driven disinformation. Their recommendations include establishing a dedicated task force to monitor and respond to digital propaganda, creating clear guidelines for social media platforms operating in Bangladesh, and launching public awareness campaigns about identifying synthetic media.

Bangladesh’s Election Commission has previously acknowledged the challenge but has struggled to develop effective countermeasures against rapidly evolving AI technologies. Commission spokesperson Mohammad Rahman told reporters last week that they are “exploring all possible avenues” to address the issue.

Digital rights advocates have welcomed Ganosamhati Andolon’s intervention. Fahima Khatun, director of the Digital Rights Coalition of Bangladesh, noted that the problem extends beyond politics. “Once these technologies become normalized in electoral contexts, they can quickly spread to other spheres of public discourse, creating a perpetual crisis of truth and authenticity,” she explained.

The concerns raised by Ganosamhati Andolon reflect a growing global challenge. Countries including India, Indonesia, and the Philippines have all grappled with AI-generated electoral disinformation in recent elections, prompting calls for international standards and regulations.

As Bangladesh’s election date draws closer, the battle between AI-generated propaganda and efforts to maintain information integrity is likely to intensify. Tech companies including Meta (Facebook’s parent company) have pledged increased resources for content moderation in Bangladesh during the election period, though critics question whether these measures will be sufficient.

The proliferation of AI-generated disinformation represents not just an immediate electoral concern but a fundamental challenge to Bangladesh’s democratic institutions. As one election observer noted, “When voters can no longer trust what they see and hear, the very foundation of informed democratic participation is undermined.”


8 Comments

  1. The use of deepfake technology to spread disinformation is a growing challenge that requires a multi-stakeholder response. Strict regulations, content moderation, and public education campaigns will all be essential to combat this threat.

  2. This is a complex challenge that requires nuanced policymaking and implementation. I’m curious to see what specific measures the Bangladeshi authorities will take to address this issue in the lead-up to the election.

  3. Kudos to Ganosamhati Andolon for sounding the alarm on this issue. Voter awareness and fact-checking initiatives will be key to empowering citizens to critically assess the information they encounter online.

  4. This highlights the need for robust digital literacy programs to help the public distinguish authentic content from AI-generated propaganda. Proactive steps by authorities and civil society can help safeguard the democratic process.

  5. This is certainly concerning. AI-generated propaganda can undermine trust in democratic institutions if left unchecked. Proactive measures by electoral authorities will be crucial to protect the integrity of the upcoming election.

  6. James Thompson: While the scale of AI-generated propaganda is alarming, I’m hopeful that a combination of technological solutions and civic engagement can help mitigate this threat to Bangladesh’s democracy.
