Meta Abandons Fact-Checking, Raising Concerns About Election Misinformation
Mark Zuckerberg recently announced that Meta, the parent company of Facebook, Instagram, and other services, will no longer fact-check social media content on its platforms. Instead, Meta will implement a crowd-sourced system similar to X’s Community Notes, where approved users can add context or corrections to posts containing potential misinformation.
The timing of this decision is notable, coming as disinformation about the Los Angeles wildfires spread across social media platforms and shortly before President Trump’s inauguration. The move raises significant questions about the future of information integrity on platforms used by billions of people worldwide.
Fact-checking has shown measurable, if imperfect, effectiveness in combating false information. Sander van der Linden, a social psychologist at the University of Cambridge who advised Facebook on its fact-checking program in 2022, notes that “Studies provide very consistent evidence that fact-checking does at least partially reduce misperceptions about false claims.”
However, experts acknowledge limitations. Jay Van Bavel from New York University points out that fact-checking becomes less effective when dealing with highly polarizing issues. A 2019 meta-analysis examining 30 individual studies found that while fact-checking positively influences political beliefs overall, its effectiveness varies depending on individuals’ preexisting ideologies and beliefs.
Research on Community Notes, the system Meta plans to adopt, raises questions about its efficacy. A 2024 study analyzing X’s implementation found no evidence that Community Notes significantly reduced engagement with misleading posts. Researchers concluded the system “might be too slow to effectively reduce engagement with misinformation in the early (and most viral) stage of diffusion.” Another study found Community Notes ineffective in combating false election narratives, with many accurate corrections never shown to users.
Meta’s reversal appears connected to political pressure. President Trump and X owner Elon Musk have consistently criticized social media fact-checking programs, claiming they suppress free speech. Trump publicly praised Meta’s decision, stating the company has “come a long way” and acknowledging his criticism was “probably” behind the changes.
The company’s recent actions show a clear realignment. Meta donated $1 million to Trump’s inauguration fund, appointed Trump supporter Dana White to its board, and selected Republican lobbyist Joel Kaplan as chief global affairs officer. Zuckerberg met with Trump at Mar-a-Lago after the election and attended his inauguration alongside other tech leaders.
The implications for election misinformation could be significant. Meta established its fact-checking program following widespread criticism about 2016 election misinformation on its platforms. After the 2021 Capitol riot, Facebook suspended thousands of accounts and removed posts supporting the attack. More recently, however, platforms were largely unresponsive to the misinformation that spread after the attempted assassination of Donald Trump.
Tech watchdog groups warn that ending fact-checking may trigger a “surge in disinformation.” Valeria Wirtschafter of the Brookings Institution believes the change will “likely make the information environment worse,” particularly concerning electoral politics. Brookings argues disinformation defined the 2024 election landscape by shaping perceptions on issues like immigration, crime, and the economy.
While users express concern—65% of respondents in one poll believed election misinformation had worsened since 2020—millions still rely on social media for news. Pew Research Center found that over half of US adults sometimes get news from social media platforms, with about one-third regularly accessing news through YouTube or Facebook.
Certain communities face disproportionate impacts. A 2024 Free Press poll found Black and Latino respondents more likely to access news on Facebook and YouTube. Research by Onyx Impact indicates at least 40 million Americans are targeted by disinformation in Black online spaces, with Black Americans “disproportionately encountering misinformation.”
The risks extend beyond information quality to democratic institutions themselves. Disinformation fuels election denial movements and contributes to threats against election officials and high turnover in these positions. If President Trump implements plans proposed by Project 2025, he could encourage Congress and the FCC to penalize social media companies that restrict content related to “core political viewpoints,” potentially further limiting platforms’ ability to address harmful content.
As institutional guardrails against misinformation weaken, the responsibility increasingly falls to individual users to identify misleading content independently—a challenging task in today’s complex information environment.
11 Comments
As someone who works in the mining and energy sectors, I’m concerned about the potential for increased disinformation around critical topics like climate change, resource extraction, and energy transition. Fact-checking has been a helpful tool, and I hope Meta reconsiders this decision.
You raise a valid point. Misinformation around mining, energy, and climate issues can have significant real-world impacts. Fact-checking has been an important counterbalance, so Meta’s move is concerning from an industry perspective.
This is a concerning decision by Meta, particularly given the upcoming US election cycle. Fact-checking, while imperfect, has been shown to reduce the spread of misinformation. Shifting to a crowd-sourced model raises risks of bias and manipulation.
I agree, this change seems ill-timed and potentially dangerous. Fact-checking, despite its limitations, provides an important safeguard against the rapid proliferation of falsehoods on social media. Replacing it with a crowd-sourced model is worrying.
As someone who follows the mining and commodities space closely, I’m worried about the potential for increased disinformation around these industries if Meta’s fact-checking program is indeed halted. Fact-checking has been a valuable tool, and I hope the company reconsiders this decision.
I share your concerns. Reliable information is crucial for making informed decisions in the mining and commodities sectors. Fact-checking, while imperfect, has been an important safeguard against the spread of false claims. Meta’s move is worrying.
This is a concerning move by Meta, especially given the importance of maintaining information integrity on social media platforms. Fact-checking has demonstrated its effectiveness, despite limitations. Relying on crowd-sourcing raises significant risks that need to be carefully considered.
Interesting move by Meta, though it raises concerns about disinformation risks, especially around important events like elections. Crowd-sourcing has its merits but may be more prone to bias and manipulation. Fact-checking has proven effective, so this change is worrying.
I share your concerns. Platforms need to balance free speech with mitigating the spread of harmful falsehoods. It’s a delicate balance, but fact-checking seems a more reliable approach than open-ended crowd-sourcing.
This is a concerning development, especially given the potential for disinformation around elections and other important issues. While crowd-sourcing has some merits, fact-checking has proven more effective at reducing the spread of false claims. I hope Meta reconsiders this decision.
As a concerned citizen, I find this change by Meta quite troubling. Fact-checking, while imperfect, has been a valuable tool in combating the spread of misinformation. Shifting to a crowd-sourced model raises the risk of bias and manipulation. I hope Meta will reconsider this decision.