Experimental ‘Trust’ Buttons on Social Media Could Cut Misinformation in Half, UCL Study Finds
Adding “trust” and “distrust” buttons to social media platforms alongside traditional “like” options could significantly reduce the spread of false information online, according to groundbreaking research from University College London (UCL).
The study, published in the journal eLife, found that this simple innovation cut the reach of false posts by approximately 50 percent in experimental conditions, potentially offering social media companies a straightforward tool to combat the rising tide of misinformation.
“Over the past few years, the spread of misinformation, or ‘fake news’, has skyrocketed, contributing to the polarization of the political sphere and affecting people’s beliefs on anything from vaccine safety to climate change to tolerance of diversity,” said Professor Tali Sharot, co-lead author of the study from UCL Psychology & Language Sciences and the Max Planck UCL Centre for Computational Psychiatry and Ageing Research.
Current strategies to fight misinformation, such as fact-checking and flagging inaccurate content, have shown limited effectiveness. The researchers identified a fundamental problem with social media’s reward structure: users receive positive reinforcement through likes and shares regardless of whether their posts contain accurate information.
“Part of why misinformation spreads so readily is that users are rewarded with ‘likes’ and ‘shares’ for popular posts, but without much incentive to share only what’s true,” Professor Sharot explained. “Here, we have designed a simple way to incentivize trustworthiness.”
The research team conducted six experiments involving 951 participants who used a simulated social media platform designed for the study. Users could share news articles—half of which contained inaccurate information—while others could react with various buttons. In some versions of the experiment, traditional “like” and “dislike” options were supplemented with “trust” and “distrust” reactions.
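The study's platform code is not reproduced in this article; the sketch below is only a rough illustration, in Python with hypothetical names, of how a simulated post carrying both like/dislike and trust/distrust reaction counts might be represented in an experiment like the one described above.

```python
# Illustrative sketch only: the study's actual platform code is not shown in
# this article, and the names here (Post, react, REACTIONS) are hypothetical.
from dataclasses import dataclass, field

# Reactions available in the trust/distrust condition; the baseline condition
# would offer only "like" and "dislike".
REACTIONS = ("like", "dislike", "trust", "distrust")

@dataclass
class Post:
    headline: str
    is_accurate: bool  # in the experiment, half of the shared articles were inaccurate
    reactions: dict = field(default_factory=lambda: {r: 0 for r in REACTIONS})

    def react(self, reaction: str) -> None:
        """Record one reader reaction on this post."""
        if reaction not in self.reactions:
            raise ValueError(f"Unknown reaction: {reaction}")
        self.reactions[reaction] += 1

# Example: a shared article accumulating feedback from other users.
post = Post(headline="New vaccine study overturns prior findings", is_accurate=False)
post.react("distrust")
post.react("like")
print(post.reactions)  # {'like': 1, 'dislike': 0, 'trust': 0, 'distrust': 1}
```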
Results showed that participants preferred using the trust/distrust buttons over the like/dislike options. More importantly, the presence of these trustworthiness indicators changed user behavior. Participants began posting more accurate information to gain positive “trust” reactions from others.
Computational modeling revealed that participants using platforms with trust/distrust buttons paid significantly more attention to the reliability of news stories when deciding whether to repost them. The study also found that after using these modified platforms, participants developed more accurate beliefs about the topics covered in the news articles.
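The article does not specify the form of the researchers' model; one common way to formalize "how much weight users give to reliability when deciding whether to repost" is a logistic choice model, sketched below in Python with entirely hypothetical weights. The point of the illustration is simply that a larger reliability weight, as reported for the trust/distrust condition, raises the repost probability of true stories and lowers it for false ones.

```python
# Illustrative sketch only: the study's exact model is not described here.
# All weights and inputs below are hypothetical, chosen to mirror the reported
# pattern (reliability matters more when trust/distrust buttons are present).
import numpy as np

def repost_probability(reliability, popularity, w_rel, w_pop, bias):
    """P(repost) as a logistic function of a story's reliability and popularity."""
    return 1.0 / (1.0 + np.exp(-(w_rel * reliability + w_pop * popularity + bias)))

# Hypothetical fitted weights for the two conditions: a larger w_rel in the
# trust/distrust condition reflects greater attention to whether a story is true.
baseline_condition = dict(w_rel=0.4, w_pop=1.2, bias=-1.0)
trust_condition    = dict(w_rel=1.5, w_pop=0.8, bias=-1.0)

story = dict(reliability=0.9, popularity=0.3)  # a reliable but not very popular story
print(repost_probability(**story, **baseline_condition))  # ~0.43
print(repost_probability(**story, **trust_condition))     # ~0.64
```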
This research builds on the team’s previous findings, published in the journal Cognition, which demonstrated that people are more likely to share information they’ve been repeatedly exposed to, as familiarity creates a false sense of accuracy—highlighting why misinformation can spread so effectively through repetition.
“Buttons indicating the trustworthiness of information could easily be incorporated into existing social media platforms, and our findings suggest they could be worthwhile to reduce the spread of misinformation without reducing user engagement,” said PhD student Laura Globig, co-lead author from UCL Psychology & Language Sciences.
While the researchers acknowledge that real-world implementation would face additional complexities, they believe this approach could complement existing efforts to combat online falsehoods.
The study comes at a critical time when major social media platforms face mounting pressure to address misinformation. Facebook, Twitter (now X), and others have implemented various fact-checking systems with mixed results. Unlike more intrusive moderation strategies that can trigger free speech concerns, this behavioral economics approach works by realigning user incentives rather than censoring content.
Misinformation experts not involved in the study have noted that its strength lies in harnessing social dynamics rather than fighting against them, potentially offering a more sustainable solution to a problem that threatens public discourse on critical issues from public health to democratic processes.
“While it’s difficult to predict how this would play out in the real world with a wider range of influences,” Globig cautioned, “given the grave risks of online misinformation, this could be a valuable addition to ongoing efforts to combat misinformation.”
10 Comments
Tackling the spread of misinformation is such a critical issue. I’m glad to see researchers exploring novel solutions like the trust button approach. Even if adoption is challenging, it’s worth trying new ideas to empower users and curb the problem.
Adding trust/distrust signals could be a valuable addition to social media platforms. However, I’m curious about potential unintended consequences, like users gaming the system or the buttons being misused. Rigorous testing will be key.
While the trust button idea is thought-provoking, I have some concerns about its real-world effectiveness. Social media users may not reliably use such features, and bad actors could find ways to game the system. More research is needed.
This is an intriguing concept that could help address the misinformation crisis on social media. Quantifying a 50% reduction in false content spread is quite significant. I wonder if this could work across different platforms and content types.
Intriguing study on leveraging ‘trust’ buttons to curb social media misinformation. Seems like a simple yet potentially impactful solution to a growing problem. I’m curious to see if this approach gains traction with major platforms.
Interesting study! I’m curious to see if major social media platforms would be willing to implement trust buttons, given the potential impact on user engagement metrics. Balancing user empowerment and platform business models could be a challenge.
The trust button proposal is an innovative way to address misinformation, but I wonder about potential downsides. Could it create new avenues for manipulation or lead to increased polarization? Careful design and testing will be essential.
Kudos to the UCL researchers for exploring creative solutions to the misinformation crisis. The trust button concept is thought-provoking, but I’m curious to see how it would hold up in practice. Rigorous testing and iteration will be key.
This is an interesting proposal to combat the spread of false information online. I wonder how effective these trust buttons would be in practice, and if users would actually utilize them. Skepticism and fact-checking will likely still be important.
Anything to reduce the reach of misinformation is a step in the right direction. The ‘trust’ button concept seems worth exploring, but I imagine it would face challenges in implementation and user adoption. Cautiously optimistic about this study’s findings.