In a promising development for the fight against online falsehoods, a new study published in the journal PNAS demonstrates that crowd-sourced fact-checking can sharply slow the spread of misinformation on social media platforms.
The research comes at a critical time when scrolling through social media has become increasingly treacherous, with users constantly encountering potentially misleading or AI-generated content. Despite the growing problem, many social media companies have recently eliminated professional fact-checking teams, coinciding with a documented surge in misinformation across platforms.
Led by Johan Ugander, associate professor of statistics and data science at Yale and deputy director of the Yale Institute for Foundations in Data Science, the research team conducted a comprehensive analysis of the Community Notes feature on X (formerly Twitter). This system allows regular users to propose and rate notes that add context to potentially misleading posts.
“We’ve known for a while that rumors and falsehoods travel faster and farther than the truth,” explained Ugander. “Rumors are exciting, and often surprising. Flagging such content seems like a good idea. But what we didn’t know was if and when such interventions are actually effective in keeping it from spreading.”
The researchers collected minute-by-minute data for over 40,000 posts with proposed Community Notes between March and June 2023. Of these, 6,757 posts successfully received an attached note, forming the study’s “treatment group.” The remaining posts where notes were proposed but didn’t pass the platform’s algorithm requirements became the comparison “donor pool.”
Using sophisticated “synthetic control methods,” the team created digital twins for each post that received a note. These counterparts represented what would have happened without fact-checking intervention, allowing researchers to measure the precise causal effect of the Community Notes.
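For readers curious about the mechanics, the sketch below illustrates the general idea behind a synthetic control: find a weighted blend of donor-pool posts whose engagement trajectory matches the noted post before the note appeared, then use that blend to project what would have happened afterward. The data, time windows, and function names here are hypothetical illustrations, not the authors' actual code.

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_control_weights(treated_pre, donors_pre):
    """Find nonnegative donor weights (summing to 1) whose weighted
    combination best matches the noted post's pre-note trajectory.

    treated_pre: (T,) pre-note engagement counts for the noted post
    donors_pre:  (N, T) pre-note engagement counts for N donor posts
    """
    n = donors_pre.shape[0]

    def loss(w):
        return np.sum((treated_pre - w @ donors_pre) ** 2)

    constraints = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    bounds = [(0.0, 1.0)] * n
    w0 = np.full(n, 1.0 / n)
    return minimize(loss, w0, bounds=bounds, constraints=constraints).x

# Hypothetical minute-level repost counts (illustrative numbers only)
rng = np.random.default_rng(0)
donors = np.cumsum(rng.poisson(2.0, size=(50, 120)), axis=1)   # 50 donor posts
treated = np.cumsum(rng.poisson(2.2, size=120))                # the noted post

w = synthetic_control_weights(treated[:60], donors[:, :60])    # fit on pre-note window
counterfactual = w @ donors[:, 60:]                            # predicted post-note path
effect = treated[60:] - counterfactual                         # estimated causal effect
```

The difference between the post's observed engagement and its synthetic twin's projected engagement is what lets researchers attribute the drop to the note itself rather than to the natural decay of attention.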
The results were remarkable. Once a Community Note appeared, engagement with misleading content plummeted immediately – reposts and likes dropped by 40%, while views fell by 13%.
“When misinformation gets labeled, it stops going as deep,” Ugander noted. “It’s like a bush that grows wider, but not higher.”
Timing proved critical to the effectiveness of fact-checking efforts. Notes attached within the first 12 hours of a post’s life reduced future reposts by nearly 25%. However, notes that appeared after 48 hours showed minimal impact. In fact, late-arriving fact checks produced a counterintuitive “backfire effect” – while they still decreased likes and reposts, they actually increased the post’s views and replies, potentially giving more visibility to the original misinformation.
The research also revealed that different types of corrections had varying impacts. Notes identifying altered or fake images showed the strongest effect in slowing spread, while corrections of outdated information had more modest results.
Community Notes uses what researchers call a “bridging-based” algorithm to determine which corrections get published. This clever system only promotes notes that users from different ideological perspectives both rate as helpful, ensuring that fact-checking doesn’t simply become another partisan battlefield.
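X's production algorithm is considerably more elaborate (it is based on matrix factorization of rating data), but a toy sketch of the "bridging" idea might look like the following: a note is promoted only if raters from both viewpoint clusters, not just one side, rate it helpful. The cluster labels, threshold, and function name are illustrative assumptions rather than the platform's real implementation.

```python
from collections import defaultdict

def bridged_helpfulness(ratings, rater_cluster, min_helpful_share=0.5):
    """Toy bridging rule: promote a note only if a majority of raters in
    *each* viewpoint cluster found it helpful, rather than pooling all votes.

    ratings:       list of (note_id, rater_id, helpful: bool)
    rater_cluster: dict rater_id -> viewpoint cluster label
    """
    by_note = defaultdict(lambda: defaultdict(list))
    for note_id, rater_id, helpful in ratings:
        by_note[note_id][rater_cluster[rater_id]].append(helpful)

    promoted = []
    for note_id, sides in by_note.items():
        # require ratings from both clusters, each mostly positive
        if len(sides) == 2 and all(
            sum(votes) / len(votes) >= min_helpful_share for votes in sides.values()
        ):
            promoted.append(note_id)
    return promoted

# Hypothetical ratings: (note, rater, found-it-helpful)
ratings = [
    ("n1", "a", True), ("n1", "b", True), ("n1", "c", True),
    ("n2", "a", True), ("n2", "b", False),
]
clusters = {"a": "left", "b": "right", "c": "right"}
print(bridged_helpfulness(ratings, clusters))  # -> ['n1']
```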
While the system is not perfect, the study suggests that platforms may benefit from expanding crowdsourced fact-checking, particularly as traditional fact-checking faces mounting challenges and several major social media companies scale back their professional verification teams.
The findings have significant implications for how digital platforms might combat false information going forward. Instead of relying solely on professional fact-checkers or automated systems, engaging ordinary users in the verification process could create a more scalable solution to tackle the overwhelming volume of potentially misleading content.
“Labeling seems to have a significant effect, but time is of the essence,” Ugander emphasized. “Faster labeling should be a top priority for platforms.”
The researchers acknowledge certain limitations to their study, but the evidence strongly suggests that harnessing “the wisdom of the crowd” could become a crucial weapon in the ongoing battle against online misinformation – a fight that grows increasingly urgent in today’s complex information landscape.