The retreat from fact-checking by social media giants has sparked concern across Europe, as Meta recently revealed disappointing results from its experiment with community-based verification systems. According to Meta’s chief information security officer, the company’s Community Notes feature produced just 900 visible notes during its first six months in the United States—a stark contrast to the approximately 35 million fact-checking labels Meta applied to Facebook posts in the European Union during a comparable period.

This shift represents part of a broader post-2024 U.S. election trend among tech platforms. Several major companies have withdrawn from fact-checking commitments outlined in the EU Code of Conduct on Disinformation. Google has significantly reduced its support for information integrity efforts in Europe, while misinformation continues to proliferate across most platforms.

This corporate retreat coincides with what many observers characterize as unfounded American criticism of Europe’s information integrity community. Critics point to a paradoxical situation: while the U.S. government imposes restrictions on free speech domestically, it simultaneously invokes free speech principles to undermine European legislation and discredit fact-checkers and watchdog organizations abroad.

European policymakers have compelling reasons to maintain their stance against misinformation. The consequences extend far beyond politics, imposing substantial costs on the European economy and public health systems. Disinformation now constitutes a recognized strategic threat to businesses, with global economic losses estimated in the tens of billions annually. Strategic sectors including renewable energy, 5G networks, and electric mobility frequently face coordinated misinformation campaigns, while health-related falsehoods create enormous economic drains on healthcare systems.

Though Community Notes were intended to “democratize” content moderation, research indicates their effectiveness is limited. On X (formerly Twitter), only about 10% of proposed notes ever become visible to users, with even lower rates for polarizing topics, where fact-checking is most crucial.

A fundamental flaw in the system lies in its “consensus-based” methodology. By requiring agreement from users with opposing viewpoints, platforms effectively allow partisan interests to obstruct factual information. Facts don’t become more or less accurate based on consensus votes, yet the current system suppresses valid information if it disproportionately benefits one side of a partisan debate. Additionally, notes often appear too late to counter viral misinformation, have limited visibility, and increasingly rely on AI-generated content as human contributors abandon the platform.
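The blocking dynamic described above can be illustrated with a toy model. The sketch below is a hypothetical simplification, not the actual algorithm used by any platform (real systems such as X's Community Notes use matrix factorization over rating data); it only assumes a rule where a note becomes visible when a majority of raters on each side of a viewpoint divide find it helpful, and shows how a partisan voting bloc can keep an accurate note hidden.

```python
# Toy model (hypothetical, simplified) of a "bridging" consensus rule:
# a note is shown only if raters from BOTH viewpoint clusters mostly
# rate it helpful. One side voting as a bloc can suppress the note.

from dataclasses import dataclass

@dataclass
class Rating:
    side: str      # "A" or "B" -- the rater's inferred viewpoint cluster
    helpful: bool  # did this rater mark the note as helpful?

def note_visible(ratings, threshold=0.5):
    """The note goes live only if a majority on EACH side rates it helpful."""
    for side in ("A", "B"):
        side_ratings = [r for r in ratings if r.side == side]
        if not side_ratings:
            return False  # no cross-side signal -> note stays hidden
        helpful_share = sum(r.helpful for r in side_ratings) / len(side_ratings)
        if helpful_share < threshold:
            return False
    return True

# A factually accurate note that one side dislikes never becomes visible:
ratings = ([Rating("A", True)] * 9          # side A: 9/9 find it helpful
           + [Rating("B", False)] * 8       # side B: bloc-votes "not helpful"
           + [Rating("B", True)] * 2)
print(note_visible(ratings))  # -> False: side B's bloc vote blocks the note
```

In this toy rule, the note's accuracy plays no role at all; visibility is decided purely by cross-side agreement, which is exactly the property the paragraph above criticizes.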

Professional fact-checking, when properly implemented, delivers superior results. However, experts aren’t advocating for abandoning community participation entirely. The European Fact-Checking Standards Network (EFCSN) suggests that both approaches can coexist and even complement each other. On X, community notes that reference professional fact-checking articles generate twice the consensus on usefulness compared to those without such references.

The EFCSN has proposed seven recommendations for integrating professional fact-checking with community notes. These include having fact-checkers verify community notes to accelerate their visibility, creating “fast lanes” for certified fact-checkers to bypass lengthy consensus processes, establishing early warning systems for emerging misinformation, using AI to scale fact-checking efforts, increasing transparency about platform partnerships with fact-checkers, ensuring quality through certification standards, and maintaining fact-checkers’ editorial independence.

Under the Digital Services Act (DSA), large platforms must implement “reasonable, proportionate and effective” measures to mitigate systemic risks, including those related to disinformation. While community notes may constitute one such measure, they cannot stand alone.

The European Commission’s investigation and enforcement actions regarding platform compliance have progressed slowly and with limited transparency. Although strong evidence is necessary for successful court proceedings, fact-checking organizations have already provided sophisticated documentation of various platform infractions. Currently, the primary challenge appears to be enforcement rather than insufficient evidence.

As platforms retreat from their misinformation commitments, they impose significant economic, health, and democratic costs on users and societies. Experts argue that combining professional fact-checking with crowdsourced approaches offers the most promising path to protect democratic institutions and the freedoms they safeguard.


© 2026 Disinformation Commission LLC. All rights reserved.