Social Media Habits Drive Misinformation Sharing, Yale Study Finds
In the chaotic early days of the COVID-19 pandemic, social media platforms were flooded with unverified natural remedies for the virus—from steam inhalation to ginger—that spread rapidly through likes and shares. This phenomenon has puzzled researchers trying to understand why misinformation travels so efficiently through digital networks.
New research from the Yale School of Management has uncovered a surprising culprit behind the spread of false information online: the reward systems of social media platforms themselves. The study, led by postdoctoral scholar Gizem Ceylan in collaboration with Ian Anderson and Wendy Wood of the University of Southern California, suggests that habitual social media users inadvertently become vectors for misinformation.
“It’s not that people are lazy or don’t want to know the truth,” explains Ceylan. “The platforms’ reward systems are wrong.”
The research team conducted a series of experiments examining how different types of social media users interact with both true and false headlines. In their first experiment, participants reviewed sixteen headlines—eight true and eight false—deciding which ones to share on a simulated Facebook feed. Researchers also assessed participants’ Facebook usage habits, including time spent on the platform and their automatic sharing tendencies.
Results revealed a stark difference between casual and habitual users. While participants generally shared more true headlines than false ones, the most habitual Facebook users—those who spent the most time on the platform and shared content automatically—showed markedly different behavior. These power users shared a nearly equal percentage of true headlines (43%) and false ones (38%). By contrast, less frequent users shared 15% of true headlines but only 6% of false ones.
Perhaps most concerning, the research team calculated that just 15% of the most habitual Facebook users were responsible for 37% of all false headlines shared in the study. This suggests that a relatively small number of habitual users can significantly distort the online information ecosystem.
The researchers then investigated whether this pattern reflected users’ political beliefs. In another experiment, they presented participants with politically biased headlines and examined whether users were more likely to share content aligned with their own political views.
While less frequent users strongly favored headlines that matched their own political leanings, habitual users were far less selective along political lines. These frequent users shared more content overall, regardless of whether it aligned with or contradicted their own views. The pattern offered further evidence that automatic habit, not ideology, was driving their behavior.
“This was kind of a shocker for the misinformation research community,” Ceylan notes. “What we showed is that, if people are habitual sharers, they’ll share any type of information, because they don’t care about the content. All they care about is likes and attention.”
The research suggests that, over time, habitual users come to internalize cues about which content generates engagement. The platforms' reward mechanisms (likes, comments, and reshares) condition these users to prioritize potential engagement over accuracy or ideological consistency.
However, the study also points to a potential solution. In a final experiment, the researchers tested a new reward structure that directly incentivized accuracy, awarding points redeemable for Amazon gift cards whenever participants shared accurate information.
The results were dramatic. Under this new reward system, even the most habitual social media users shared significantly more true headlines and fewer false ones. Their previous social media habits no longer influenced their sharing behavior.
“When you reward people for accuracy, people learn the type of content that gets rewards and build habits for sharing that content,” Ceylan explains. Even more promising, participants continued sharing accurate headlines after researchers removed the accuracy incentives, suggesting that users can develop new, positive sharing habits when properly motivated.
For Ceylan, these findings highlight how social media platforms have shaped user behavior through their pursuit of engagement and profit. By rewarding any form of interaction regardless of content quality, platforms have inadvertently created users who share indiscriminately.
“It’s a system issue, not an individual issue,” she emphasizes. “We need to create better environments on social media platforms to help people make better decisions. We cannot keep blaming users for showing political biases or being lazy for the misinformation problem. We have to change the reward structure on these platforms.”
The research provides valuable insights for platform designers, policymakers, and educators looking to address the spread of misinformation in our increasingly digital information ecosystem.
12 Comments
This is a thorny issue with no easy solutions. Social media platforms are designed to maximize user engagement, which unfortunately aligns poorly with promoting truthful information. Rethinking these core design principles is critical to addressing the misinformation problem.
Exactly. Prioritizing quality over quantity of engagement, even if it means lower user metrics in the short term, could go a long way in curbing the spread of false information online.
This study highlights an important dynamic that deserves more attention. Social media’s addictive and share-driven nature can turn users into unwitting conduits for false information. Platforms need to find ways to reward quality over quantity of engagement.
Agreed. Prioritizing trustworthy content and slowing the pace of sharing could help, even if it means lower user metrics in the short term.
This is a troubling trend. Social media platforms need to find better ways to identify and limit the spread of misinformation, even if it means sacrificing some user engagement metrics. Public trust and accurate information should be the top priority.
I agree completely. Social responsibility should take precedence over short-term growth for these companies.
The findings are concerning but not entirely surprising. Social media’s incentive structures often prioritize engagement over accuracy, which can lead to the rapid spread of misinformation. Tackling this will require a fundamental rethinking of platform design and business models.
The findings of this study are a sobering reminder of the unintended consequences that can arise from social media’s reward structures. Platforms need to find ways to incentivize the sharing of verified, accurate information over sensational or false content. This will require rethinking fundamental design choices.
Interesting study on the role social media platforms play in spreading misinformation. The incentive structures seem to encourage rapid sharing over fact-checking, which is concerning. I wonder if platform design changes could help address this issue.
You raise a good point. Tweaking reward systems to prioritize verified information could be one way to curb the spread of false content online.
It’s a complex issue without easy solutions, but the core problem seems to be that social media incentives don’t align with truth-seeking. Addressing that root cause is key to making progress on this challenge.
Absolutely. Rethinking the core business models and design principles of these platforms could go a long way in curbing misinformation.