Social media platforms in Bangladesh have become breeding grounds for abusive commentary, misinformation, and character assassination, experts warn. The unchecked spread of offensive content is eroding moral standards across the country’s digital landscape, with even top state officials and their families becoming targets.
According to observers, short-form videos circulating on Facebook, Instagram, TikTok, and YouTube are particularly problematic. These platforms host countless instances of hate speech disguised as legitimate political expression or freedom of speech.
“Bot networks,” allegedly operating with political patronage, have amplified this problem. Their activities have created an atmosphere where many respectable citizens now avoid posting about political or social issues altogether, fearing harassment and abuse.
Recent incidents highlight the severity of the situation. In one case, a prominent individual who has no daughter was falsely portrayed on Facebook as having one involved in inappropriate activities. In another disturbing example, the death of a female political activist triggered a flood of obscene comments, which many observers viewed as evidence of a collapse in social values.
The problem extends beyond politics. When a science-focused Facebook page called “Biggan Tathya” shared information about NASA’s Artemis II lunar mission featuring astronaut Christina Koch, users responded with vulgar remarks about the 47-year-old female astronaut. The page administrator noted that such behavior reflects poorly on the education level of these commenters and suggests that no woman is safe from harassment in certain segments of society.
Writer and researcher Mohiuddin Ahmed expressed his frustration last year, writing: “I used to post about politics on Facebook. Now I avoid it as much as possible. The way ignorant and ill-mannered individuals swarm and troll makes one’s blood pressure rise.” He characterized many Facebook users in Bangladesh as “ultra-nationalist, religiously fanatic, communal and deeply misogynistic, lacking basic education, discipline and civility.”
A female journalist warned that this toxic online environment is particularly dangerous when abusive language becomes normalized through political slogans. She raised concerns about children’s exposure to such content through their devices, questioning whether authorities realize how this toxicity is affecting the next generation.
Political tensions have exacerbated the problem. During last year’s election cycle, opposing political groups reportedly encouraged supporters to create fake accounts specifically to attack and defame rivals.
The situation has worsened dramatically in recent months. According to Rumour Scanner Bangladesh, a recognized fact-checking organization, 1,974 pieces of misinformation were identified during the first quarter of this year—a 136 percent increase compared to the same period last year. The organization noted that current Prime Minister Tarique Rahman and the Bangladesh Nationalist Party were the most frequent targets of misinformation.
Dr. Shah Kawsar Mustafa Abul Ulai, former professor of philosophy at the University of Dhaka, told Kaler Kantho that while abusive language existed in previous eras, it remained within certain limits. Today’s digital landscape has removed those boundaries.
“As a philosopher, I consider this trend deeply harmful and unacceptable,” he said. Dr. Ulai emphasized that beyond abusive behavior and character assassination, the spread of half-truths and false information is equally damaging. He called for social thinkers, teachers, and intellectuals across the political spectrum to take collective action to promote civility and truth.
Legal remedies do exist for those targeted by online abuse. Senior lawyer Manzill Murshid explained that provisions under the Penal Code allow for filing defamation cases and seeking compensation. With the rise of digital platforms, laws such as the Digital Security Act were introduced to address online offenses, and despite amendments, relevant provisions remain in place.
However, Murshid identified fake accounts as a major obstacle to enforcement. Many malicious comments come from unidentified accounts without real identities, with single individuals potentially operating hundreds of fake profiles to spread defamatory content while concealing their identity. He urged the public to ignore anonymous accounts, suggesting that doing so would ultimately render such malicious efforts ineffective.
As this digital crisis deepens, many observers are calling for more robust platform regulation, better digital literacy education, and stronger ethical standards in online discourse to preserve Bangladesh’s social fabric.