Peer-Driven Fact-Checking Proves Effective in Reducing Misinformation on Social Media

New research reveals that social media users are more likely to delete misleading posts when called out by their peers rather than by platform algorithms or experts. The study provides compelling evidence that “crowdchecking” approaches, like X’s Community Notes feature, can significantly reduce the spread of misinformation online.

When X (formerly Twitter) introduced a system allowing users to flag misleading content, critics were skeptical. Many doubted that the same public responsible for spreading misinformation could effectively police it. However, researchers from the University of Rochester, the University of Illinois Urbana-Champaign, and the University of Virginia have found that this collaborative fact-checking model is surprisingly effective.

Published in the journal Information Systems Research, the study demonstrates that when community-generated notes questioning a post’s accuracy appear beneath content, authors are far more likely to voluntarily remove their statements.

“Trying to define objectively what is misinformation and then removing that content is controversial and may even backfire,” explained Huaxia Rui, a professor of information systems and technology at the University of Rochester’s Simon Business School and co-author of the study. “In the long run, I think a better way for misleading posts to disappear is for the authors themselves to remove those posts.”

Using a causal inference method called regression discontinuity, researchers analyzed 264,600 posts on X that received at least one community note during two distinct time periods. The first period covered June to August 2024, before the U.S. presidential election—typically a peak time for misinformation. The second spanned January to February 2025, two months after the election.

The Community Notes system operates on a threshold mechanism where corrective notes must achieve a “helpfulness” score of at least 0.4 to appear publicly. Notes falling below this threshold remain hidden from general users. This design created a natural experiment that allowed researchers to compare outcomes between posts with publicly visible notes and those with notes visible only to contributors.
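The threshold design described above is what makes a regression-discontinuity analysis possible: posts whose notes score just below 0.4 serve as a comparison group for posts whose notes score just above it. The following sketch illustrates the idea on simulated data (the data, deletion rates, and bandwidth are invented for illustration; this is not the study's actual estimation code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: each note has a helpfulness score, and notes
# scoring >= 0.4 are displayed publicly. We build in a jump in
# the deletion rate at the cutoff, which the RDD tries to recover.
n = 10_000
score = rng.uniform(0.0, 0.8, n)
shown = score >= 0.4                      # public display (treatment)
p_delete = 0.05 + 0.05 * score + 0.03 * shown
deleted = rng.random(n) < p_delete

# Naive sharp-RDD estimate: compare mean deletion rates inside a
# narrow bandwidth on either side of the 0.4 cutoff.
bw = 0.05
just_below = deleted[(score >= 0.4 - bw) & (score < 0.4)]
just_above = deleted[(score >= 0.4) & (score < 0.4 + bw)]
effect = just_above.mean() - just_below.mean()
print(f"estimated jump at cutoff: {effect:.3f}")
```

Because scores just above and just below the cutoff are otherwise similar, the jump in deletion rates at 0.4 can be attributed to the note becoming public rather than to the underlying quality of the post.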

The results were remarkable. Posts with publicly displayed correction notes were 32 percent more likely to be deleted by their authors than those with private notes. This effect remained consistent across both study periods, demonstrating that social accountability can be a powerful driver of voluntary content retraction.

The research team discovered that an author’s decision to delete misleading content is primarily driven by concerns about online reputation. “You worry that it’s going to hurt your online reputation if others find your information misleading,” Rui noted.

Timing also proved crucial. Among posts that were eventually deleted, those that received public notes faster were removed sooner, highlighting the importance of swift corrections in the fast-moving social media landscape where misinformation tends to spread more rapidly than corrections.

Users with verified accounts—those with blue check marks—showed particular sensitivity to public notes, deleting their flagged content more quickly than average users. This suggests that individuals with larger followings and greater visibility face heightened reputational risks when sharing inaccurate information.

What makes the Community Notes system effective is its emphasis on diversity and consensus. The algorithm prioritizes ratings from users who have disagreed in past evaluations, helping prevent partisan manipulation of which notes become visible. This approach creates a system that “strikes a balance between protecting First Amendment rights and the urgent need to curb misinformation,” according to the researchers.
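A toy sketch of that diverse-consensus requirement (a deliberate simplification, not X's actual ranking algorithm, which uses matrix factorization over rater histories): score a note by the *lowest* approval rate across viewpoint clusters, so support from only one side can never push it past the threshold.

```python
# Hypothetical ratings: (rater's viewpoint cluster, rated helpful?)
ratings = [
    ("A", True), ("A", True),
    ("B", True), ("B", False), ("B", True),
]

THRESHOLD = 0.4  # minimum helpfulness score cited in the article

def helpfulness(ratings):
    """Score a note as the minimum per-cluster helpful rate,
    so a note needs support from every cluster to be shown."""
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)
    rates = [sum(votes) / len(votes) for votes in by_cluster.values()]
    return min(rates)

score = helpfulness(ratings)
print(f"{score:.2f} ->", "shown" if score >= THRESHOLD else "hidden")
```

Taking the minimum across clusters is one simple way to encode "agreement across people who usually disagree"; a note praised unanimously by one cluster but rejected by the other still stays hidden.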

Rui admitted initial skepticism about the study’s potential findings. “For people to be willing to retract, it’s like admitting their mistakes or wrongdoing, which is difficult for anyone, especially in today’s super polarized environment with all its echo chambers,” he said.

The researchers had considered whether public corrections might actually backfire, causing users to become defensive and double down on misleading claims. Instead, they found the opposite—social accountability effectively nudges users toward better information practices.

“Ultimately,” Rui concluded, “the voluntary removal of misleading or false information is a more civic and possibly more sustainable way to resolve problems” than top-down content moderation approaches that have proven controversial across social media platforms.

The findings offer a promising direction for social media companies grappling with misinformation while navigating complex debates about free speech and censorship.


