Social Media Safety Concerns Mount Ahead of Critical 2024 Election Year

As the world prepares for a pivotal year in global democracy with 65 elections scheduled across various nations, concerns are intensifying about social media platforms’ ability to protect users from disinformation and hate speech. Recent investigations reveal alarming gaps in content moderation that could have significant implications for democratic processes worldwide.

Major social media companies, including Meta and X (formerly Twitter), have made substantial staff cuts over the past year, particularly in teams responsible for election safety and human rights. These reductions have raised fears about the platforms’ capacity to address the spread of harmful content during crucial electoral periods.

“In light of this critical year for global democracies, it has never been more urgent for social media platforms to protect their users from the inevitable onslaught of online hate and disinformation,” notes a recent report from Global Witness, an organization that has conducted extensive testing of platform safeguards.

The stakes couldn’t be higher. In 2024, billions of voters across diverse democratic landscapes, from established democracies such as the United States and the member states of the European Union to emerging ones in Indonesia, Tunisia, Ethiopia, and Egypt, will head to the polls. Historical precedents, such as the January 2021 Capitol riot in Washington, DC, and the January 2023 storming of government buildings in Brasília by Bolsonaro supporters, demonstrate how online disinformation can fuel real-world violence.

To assess the platforms’ readiness, Global Witness has conducted a series of investigations since 2021, focusing on advertising standards, an area where companies claim to apply especially rigorous content moderation. Its methodology involves submitting ads containing extreme hate speech or disinformation for review, then withdrawing them before they can run, so that only the platforms’ approval decisions are tested and no harmful content is ever published.

The results have been consistently troubling. After testing in more than ten countries—including Brazil, Ethiopia, Ireland, Kenya, Myanmar, Norway, South Africa, the UK, and the USA—Global Witness found that major platforms frequently failed to enforce their own policies regarding hate speech and disinformation in advertisements.

“We believe our findings indicate that these companies would prefer to protect their profit-making business model than properly resource content moderation and protect the human rights of users,” the organization states. Particularly concerning is the apparent disparity in enforcement standards across regions, with platforms seemingly dedicating more resources to content moderation for US users than for those in other countries.

This geographic inequity raises serious ethical questions. As social media becomes increasingly central to political discourse worldwide, users should not face varying levels of protection from manipulation and hate speech based solely on their location.

In response to these challenges, Global Witness has joined the Global Coalition for Tech Justice, a movement comprising 149 partners worldwide. The coalition aims to pressure social media companies to invest more heavily in safeguarding the 2024 elections, especially in regions that have historically received less attention.

Despite the concerning findings, advocates maintain that change is possible. Facebook whistleblower Frances Haugen famously stated that these companies “choose profit over safety,” yet campaigners insist that the platforms can still implement meaningful reforms.

In the immediate term, such reforms would require substantial investment in content moderation during critical electoral periods to shield users from harmful content. Longer-term solutions would involve fundamental changes to the business models that currently profit from amplifying controversial and divisive content.

As the world enters this consequential election year, the response from social media platforms will likely have profound implications for the health of democratic processes globally and the safety of billions of users who depend on these digital spaces for information.


9 Comments

  1. This is a concerning development, as the spread of hate speech and disinformation on social media can have serious implications for democratic processes worldwide. Platforms need to do more to proactively address these issues, especially ahead of critical elections in 2024.

  2. The findings of this study are deeply concerning. Social media platforms wield immense influence over public discourse, and they must be held accountable for ensuring their services do not become conduits for the spread of harmful content. Urgent action is required to address these shortcomings.

  3. Noah Hernandez

    This is a complex issue with no easy solutions. While platforms have a responsibility to combat hate speech and disinformation, they must also balance user privacy and free speech considerations. Transparent and collaborative approaches involving governments, civil society, and tech companies may be the way forward.

  4. Jennifer X. Lee

    It’s alarming to see major platforms scaling back on teams responsible for election safety and human rights. This could open the door for bad actors to exploit vulnerabilities and sow further division. Stronger safeguards are needed to prevent such scenarios.

  5. In an era of heightened political polarization, the need for social media platforms to act as responsible stewards of online discourse has never been greater. I hope this study serves as a wake-up call for companies to invest in more robust content moderation and election integrity measures.

    • Jennifer Martin

      Well said. Maintaining the health and resilience of democratic institutions should be a top priority for these platforms, even if it requires significant financial and operational investments.

  6. Noah Thompson

    It’s disappointing to see social media companies prioritize cost-cutting over user safety, especially in the lead-up to critical elections. Disinformation and hate speech can have devastating real-world consequences, and platforms have a moral obligation to address these issues proactively.

  7. While I understand the need for cost-cutting measures, compromising content moderation teams could prove disastrous. Platforms must prioritize user safety and the integrity of democratic discourse over short-term financial considerations.

    • Agreed. Protecting free and fair elections should be a top priority for social media companies, even if it means maintaining robust moderation teams.
