Social Media’s Reward Structure Fuels Misinformation Spread, USC Study Finds
A groundbreaking study from University of Southern California researchers has identified what may be the primary driver behind the proliferation of fake news online: the fundamental design of social media platforms, which rewards users for habitually sharing attention-grabbing content.
Published Monday in the Proceedings of the National Academy of Sciences, the findings challenge long-held assumptions that misinformation spreads primarily due to users’ lack of critical thinking skills or because of political bias clouding judgment.
The research team discovered that the 15% of users who shared news most habitually were responsible for disseminating between 30% and 40% of all fake news in the study. This concentrated effect prompted researchers to examine the underlying motivations driving these users’ behavior.
“Our findings show that misinformation isn’t spread through a deficit of users. It’s really a function of the structure of the social media sites themselves,” explained Wendy Wood, USC emerita Provost Professor of psychology and business and an expert on habit formation.
The study, which involved 2,476 active Facebook users aged 18 to 89, found that social media platforms function similarly to video games, with built-in reward systems encouraging continued engagement. Users who frequently post and share eye-catching information receive validation through likes, shares and comments, reinforcing the behavior regardless of content accuracy.
“Due to the reward-based learning systems on social media, users form habits of sharing information that gets recognition from others,” the researchers noted. “Once habits form, information sharing is automatically activated by cues on the platform without users considering critical response outcomes, such as spreading misinformation.”
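The habit loop the researchers describe can be illustrated with a toy reinforcement model. This is a sketch for intuition only, not the study’s methodology — the `simulate_user` function and its parameters are invented here: each share that earns engagement nudges the simulated user’s sharing propensity upward, whether or not the shared item was accurate.

```python
import random

random.seed(42)

def simulate_user(engagement_reward, n_posts=1000):
    """Toy model: sharing propensity grows with past engagement,
    regardless of whether the shared items were true or false."""
    share_propensity = 0.1          # starting probability of sharing a post
    shared_true = shared_false = 0
    for _ in range(n_posts):
        is_true = random.random() < 0.5     # assume half the feed is accurate
        if random.random() < share_propensity:
            if is_true:
                shared_true += 1
            else:
                shared_false += 1
            # likes and comments reinforce the habit whether or not it's true
            share_propensity = min(0.9, share_propensity + engagement_reward)
    return shared_true, shared_false

casual = simulate_user(engagement_reward=0.0)      # no reinforcement loop
habitual = simulate_user(engagement_reward=0.002)  # engagement reinforces sharing
print("casual (true, false):", casual)
print("habitual (true, false):", habitual)
```

In this sketch the reinforced agent ends up sharing far more of both true and false items, mirroring the finding that habitual sharers are largely insensitive to accuracy.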
Gizem Ceylan, who led the study during her doctorate at USC Marshall and now works as a postdoctoral researcher at Yale School of Management, emphasized that while individual attributes contribute to misinformation spread, platform design plays a more significant role.
“We know from prior research that some people don’t process information critically, and others form opinions based on political biases, which also affects their ability to recognize false stories online,” said Ceylan. “However, we show that the reward structure of social media platforms plays a bigger role when it comes to misinformation spread.”
The impact of these sharing habits proved substantial. The researchers found that users’ social media habits doubled or even tripled the amount of fake news they shared, outweighing other factors including political beliefs and critical reasoning abilities. Most strikingly, habitual users forwarded six times more fake news than occasional or new users.
“This type of behavior has been rewarded in the past by algorithms that prioritize engagement when selecting which posts users see in their news feed, and by the structure and design of the sites themselves,” explained Ian A. Anderson, a behavioral scientist and doctoral candidate at USC Dornsife who co-authored the study.
Through several experiments, the team uncovered additional insights. They found that habitual users share both true and false news indiscriminately, demonstrating a general insensitivity to content accuracy. Interestingly, these users shared politically discordant news—stories challenging their own political beliefs—as frequently as content aligning with their viewpoints.
The researchers also tested alternative reward structures, discovering that incentivizing accuracy rather than popularity doubled the amount of truthful content users shared. This finding suggests misinformation spread isn’t inevitable but rather a product of current platform design choices.
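That design-choice claim can likewise be sketched with a second toy model — again purely illustrative, with invented names (`simulate`, `reward_rule`) rather than the researchers’ code: switching the reward rule from “every popular share is rewarded” to “only accurate shares are rewarded” shifts the simulated user’s output toward truthful content.

```python
import random

def simulate(reward_rule, n_posts=2000, seed=7):
    """Toy model: separate sharing propensities for true and false posts;
    each rewarded share nudges the propensity for that kind of post upward."""
    rng = random.Random(seed)
    p = {"true": 0.2, "false": 0.2}     # initial sharing propensities
    shared = {"true": 0, "false": 0}
    for _ in range(n_posts):
        kind = "true" if rng.random() < 0.5 else "false"
        if rng.random() < p[kind]:
            shared[kind] += 1
            if reward_rule(kind):       # reward reinforces sharing this kind
                p[kind] = min(0.9, p[kind] + 0.01)
    return shared

engagement = simulate(lambda kind: True)          # popularity: everything rewarded
accuracy = simulate(lambda kind: kind == "true")  # only truthful shares rewarded
print("engagement-rewarded:", engagement)
print("accuracy-rewarded:", accuracy)
```

Under the accuracy rule, only the propensity to share true posts is reinforced, so the share of truthful content in the agent’s output rises — a rough analogue of the study’s observation that rewarding accuracy rather than popularity increased truthful sharing.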
The study offers three significant conclusions: habitual sharing of misinformation can be changed; users could be incentivized to develop sharing habits favoring truthful content; and effectively reducing misinformation would require restructuring online environments that currently facilitate its spread.
These findings point to a potential solution beyond content moderation. Rather than focusing exclusively on which information is posted, social media platforms could implement structural changes to their reward mechanisms to limit misinformation proliferation.
The research was supported by the USC Dornsife College of Letters, Arts and Sciences Department of Psychology, the USC Marshall School of Business, and the Yale University School of Management.
22 Comments
As an avid social media user, I’m not surprised to see these findings. The constant pressure to share attention-grabbing content is a major problem that needs to be addressed.
Agree completely. Social media platforms need to rethink their design and incentive structures to promote more responsible and thoughtful sharing of information.
Fascinating research that sheds light on a complex problem. The findings around habitual sharers being responsible for a large portion of fake news spread are particularly concerning.
Agreed. This study highlights the need for more targeted interventions to address the behavior of the most prolific spreaders of misinformation.
This research provides valuable insights into the mechanics behind the misinformation epidemic. Targeting the most habitual sharers could be an effective way to curb the problem.
Absolutely. Focusing on the concentrated effect of the heaviest sharers is a smart strategy. Platforms should explore ways to reduce the incentives for this type of behavior.
Interesting to see the researchers challenge the common assumption that misinformation spreads due to user bias or lack of critical thinking. The platform design seems to be a key driver.
Yes, this study highlights the importance of examining the systemic factors that enable the rapid spread of fake news, rather than just blaming individual users.
This research provides a nuanced understanding of the misinformation crisis. It’s not just about user bias or lack of critical thinking – the platform design itself is a major factor.
Agreed. Platforms need to take a hard look at how their algorithms and reward structures may be inadvertently incentivizing the spread of fake news.
This study underscores the importance of rethinking social media design to discourage the rapid dissemination of misinformation. Platforms must take responsibility for their role in this issue.
Well said. Tackling the structural incentives that fuel the problem should be a top priority for social media companies.
Very interesting findings. I’m glad to see researchers delving into the systemic drivers behind the misinformation problem, rather than just pointing fingers at users.
Exactly. This study highlights the need for a more holistic approach to addressing the spread of fake news on social media platforms.
It’s concerning to see that a small percentage of users are responsible for such a large portion of fake news dissemination. Platforms must find ways to limit the influence of these heavy sharers.
Absolutely. Targeting the most prolific sharers of misinformation could be an effective strategy to combat the problem at its source.
This study is a wake-up call for social media companies. They can’t just rely on user education – the core platform design needs to change to curb the spread of misinformation.
Well said. Shifting the reward structure and disincentivizing habitual sharing of sensational content should be a top priority for these platforms.
Fascinating study! The reward structure of social media does seem to be a major factor in the spread of misinformation. Habitual sharing of attention-grabbing content is clearly a problem that needs to be addressed.
Agreed. Platforms need to rethink their design to discourage the rapid spread of fake news. Promoting critical thinking and responsible sharing should be a priority.
The finding that just 15% of users are responsible for 30-40% of fake news spread is quite alarming. This underscores the need for better platform design and user education.
Definitely a concerning statistic. Platforms need to take a hard look at their algorithms and reward structures to discourage the amplification of misinformation.