Platforms Face Scrutiny Amid Rising Digital Hate Speech in India-Pakistan Tensions
Digital platforms are facing severe criticism for their inadequate response to a surge in online hate speech and disinformation during the recent India-Pakistan tensions, according to a statement from the Association for Progressive Communications (APC).
The watchdog organization points to fundamental flaws in the business models of major social media companies, which it says prioritize engagement and profit over user safety and human rights. As hostilities between the nuclear-armed neighbors escalated in recent weeks, platforms became battlegrounds for inflammatory content targeting religious and gender identities.
“These harms are not incidental, but the result of engagement-driven business models that amplify polarizing content,” the APC stated, highlighting how algorithms designed to maximize user interaction systematically reward outrage and sensationalism.
Of particular concern is the spread of anti-Muslim rhetoric by right-wing accounts in India, contributing to what the APC describes as a broader global surge in Islamophobia. The organization warns that such content threatens the physical, psychological, and economic security of already marginalized communities, especially Kashmiris and Indian Muslims.
Gender-based abuse has also proliferated across platforms, with users from both countries deploying dehumanizing language that frames women as disposable in the conflict.
Major platforms have shown little meaningful response to the crisis. While X (formerly Twitter) publicly criticized the Indian government’s directive to block over 8,000 accounts, it has taken no substantive steps to curb misinformation. The platform’s “community notes” feature, designed to provide context to misleading posts, has proven “ineffective and easily manipulated,” according to the APC.
Other tech giants, including Meta (parent company of Facebook), YouTube, and TikTok, have similarly failed to effectively moderate harmful content during the heightened tensions.
The situation has been complicated by state censorship from both governments. India ordered the blocking of thousands of accounts, including those belonging to news organizations, fact-checkers, and Kashmiri voices. Pakistan, which only recently lifted its months-long ban on X, responded by blocking 16 Indian YouTube channels, 31 video links, and 32 websites for allegedly spreading propaganda.
Tech platforms’ recent policy changes have exacerbated the problem. X’s significant reduction of its trust and safety team under Elon Musk’s leadership has compromised the platform’s moderation capabilities. Similarly, Meta has rolled back protections for vulnerable groups and weakened its fact-checking infrastructure.
Digital rights experts draw parallels to previous failures, including Facebook’s role in amplifying anti-Rohingya propaganda during Myanmar’s genocide. These recurring problems underscore what critics describe as a fundamental misalignment between platform business models and human rights obligations.
“In moments of heightened conflict like these, social media companies must be held to a higher standard of urgency and responsibility,” the APC stated, calling for the implementation of emergency protocols akin to “digital disaster response measures.”
The organization has outlined specific reforms, including transparent crisis protocols with independent oversight, resistance to unjustified state censorship, and regular human rights impact assessments. It also calls for algorithmic transparency and equitable content moderation that addresses all geographies and languages equally.
Financial accountability is another key demand, with the APC urging platforms to “proactively demonetize harmful content” by disabling ad revenue and algorithmic boosting for accounts repeatedly violating platform policies.
As regional tensions continue to simmer, the digital landscape remains vulnerable to further exploitation, highlighting the urgent need for structural reforms that better align platform practices with human rights standards and safety considerations.