In a week marked by significant shifts in social media policy, LSE Professor Shakuntala Banaji has raised concerns about the proliferation of hate speech and disinformation online. Recent decisions by major tech companies are creating what experts describe as a dangerous new landscape for vulnerable communities.
Meta’s announcement that it will end its third-party fact-checking program for US content and loosen its moderation policies has drawn sharp criticism from researchers and human rights advocates. The company, led by Mark Zuckerberg, will replace its existing fact-checking system with a community-notes approach similar to that of X (formerly Twitter), while simultaneously relaxing its rules against hateful content.
According to Banaji, these changes will allow dehumanizing language against women, LGBTQIA+ communities, and immigrants to flourish unchecked. The policy shift comes as X owner Elon Musk reportedly considers purchasing TikTok’s US operations, following US legislation, signed by President Biden in April 2024, that requires the platform to be sold to an American owner or face a nationwide ban.
“A powerful minority are profiting massively from a rise in hate,” Banaji notes, pointing out that many decision-makers have vested interests in multiple companies while holding regulatory power over the information sphere.
The timing of these changes has raised eyebrows among media analysts. Meta’s policy shift was announced shortly after Donald Trump’s election victory, with critics describing it as a “sycophantic show of support” for the incoming administration. Joel Kaplan, Meta’s head of global policy, justified the changes using “free speech” rhetoric that many experts find misleading.
User resistance to these platform changes has taken several forms. A significant exodus from X to competitor Bluesky occurred in late November 2024, while millions of American TikTok users have begun migrating to Xiaohongshu (Little Red Book), a Chinese social media and e-commerce platform, in protest of the potential TikTok ban.
Banaji describes this unexpected cross-cultural exchange on Xiaohongshu as “fascinating and humbling,” noting the emergence of genuine connections between American and Chinese users that challenge longstanding stereotypes perpetuated by Western media and government sources.
The history of social media moderation provides important context. Platforms introduced fact-checking and anti-hate measures following public outrage over events such as the Cambridge Analytica scandal and its role in the Brexit campaign, the murder of George Floyd, and the spread of COVID-19 misinformation. These measures were attempts to address genuine harms, not to suppress legitimate expression.
Research indicates that online hate speech has continued to rise globally despite fact-checking efforts, not because of them. Human Rights Watch documented how Meta’s policies have systematically censored pro-Palestinian content since October 2023, demonstrating that platform moderation already disproportionately impacts marginalized voices.
The consequences of reduced moderation extend far beyond the digital realm. In August 2024, far-right disinformation networks contributed to violent racist riots across the UK, demonstrating the real-world impact of online hate.
Banaji calls for decisive action to break up media and tech monopolies, address conflicts of interest when billionaires control both media platforms and political influence, and increase critical media literacy. She emphasizes that the current challenges require both regulatory intervention and grassroots efforts to rebuild public trust and community resilience against racist and misogynistic violence.
“We are in a very grim period when it comes to trust and integrity in the media and social media sphere,” Banaji concludes, highlighting the urgent need for reform in an increasingly polarized information landscape where corporate interests and political agendas frequently override public safety concerns.