The spread of false information on social media continues to outpace truth, with human users—not automated bots—serving as the primary drivers of misinformation, according to a comprehensive new study published in Science.
Researchers analyzing 12 years of Twitter data found that false news stories reached 1,500 people roughly six times faster than truthful ones. The gap was widest for political content, raising significant concerns about social media’s role in public discourse.
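To make that headline number concrete: it is a time-to-audience measurement over sharing cascades. The toy Python sketch below times how long a cascade takes to reach a given number of unique accounts; the cascade layout (a list of timestamped share events) and the helper name are illustrative assumptions, not the researchers’ actual pipeline.

```python
# Toy sketch: seconds for a cascade to reach n unique users.
# `cascade` is assumed to be a list of (timestamp, user_id) share
# events with datetime timestamps; this layout is illustrative only.
def time_to_reach(cascade, n_users=1500):
    """Seconds from the first share until n_users unique accounts
    have shared the story; None if the cascade never gets that far."""
    if not cascade:
        return None
    events = sorted(cascade, key=lambda e: e[0])  # order by timestamp
    seen, start = set(), events[0][0]
    for ts, user in events:
        seen.add(user)
        if len(seen) >= n_users:
            return (ts - start).total_seconds()
    return None
```

Comparing the median of a statistic like this across fact-checked true and false cascades yields the kind of “six times faster” contrast the study reports.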
The study’s origins trace back to the 2013 Boston Marathon bombing, when lead author Soroush Vosoughi, a data scientist at the Massachusetts Institute of Technology, watched misinformation spread widely online. False reports implicated a missing Brown University student in the attack; in reality, he had died by suicide for reasons unrelated to the bombing.
“These rumors aren’t just fun things on Twitter, they really can have effects on people’s lives and hurt them really badly,” Vosoughi noted, explaining how this incident prompted him to refocus his doctoral research on detecting and analyzing misinformation patterns.
The research team collected data from Twitter’s inception in 2006, examining 126,000 news items shared 4.5 million times by 3 million users. They cross-referenced these posts against investigations by six independent fact-checking organizations, including PolitiFact, Snopes, and FactCheck.org.
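That cross-referencing step amounts to joining shared stories against fact-checker verdicts. A minimal sketch of the idea follows; the CSV of verdicts and the match-by-URL strategy are assumptions made purely for illustration, not the study’s actual data pipeline.

```python
import csv

# Toy sketch: attach a fact-checker verdict to each shared story.
# The verdict file (columns: url, verdict) and matching by URL are
# illustrative assumptions, not the researchers' real tooling.
def label_stories(stories, verdict_path):
    with open(verdict_path, newline="") as f:
        verdicts = {row["url"]: row["verdict"] for row in csv.DictReader(f)}
    # Stories with no verdict from any organization stay unlabeled.
    return [(story, verdicts.get(story["url"], "unverified"))
            for story in stories]
```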
Their findings revealed a significant disparity in reach: while truthful information rarely reached more than 1,000 Twitter users, the most viral false stories routinely exceeded 10,000 people. One notable example involved fabricated claims that boxer Floyd Mayweather wore a hijab to a Donald Trump rally, challenging people to fight him—a hoax that originated on a sports comedy website but quickly gained traction across social media platforms.
Initially suspecting automated accounts might be responsible, researchers employed advanced bot-detection technology to filter out non-human interactions. Surprisingly, their conclusions remained unchanged—humans themselves were primarily responsible for spreading falsehoods.
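The robustness check described here boils down to removing suspected bot accounts and re-running the analysis. A minimal sketch, assuming a bot classifier is available as a simple predicate (the `is_bot` function is a stand-in, not the detector the researchers used):

```python
# Toy sketch: drop share events from suspected bot accounts so the
# spread statistics can be recomputed on human activity alone.
# `is_bot` stands in for whatever bot-detection classifier is used.
def filter_bots(cascades, is_bot):
    return [
        [(ts, user) for ts, user in cascade if not is_bot(user)]
        for cascade in cascades
    ]
```

If the true/false gap survives this filtering, as the article reports it did, bots cannot be the main driver of the disparity.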
Further analysis revealed that users sharing misinformation typically had fewer followers than those sharing accurate information, contradicting assumptions that influential accounts were driving the phenomenon.
The key distinction emerged in the content itself. False information typically contained novel elements—content users hadn’t previously encountered—and generated stronger emotional responses, particularly surprise and disgust. These characteristics appeared to fuel higher sharing rates.
“If something sounds crazy stupid you wouldn’t think it would get that much traction,” said Alex Kasprak, a fact-checking journalist at Snopes. “But those are the ones that go massively viral.”
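The researchers quantified that novelty effect by comparing each story against content the user had recently been exposed to. As a loose stand-in for the idea, the sketch below scores a story by its bag-of-words distance from a user’s recent feed; this simplification is an assumption, since the study itself compared topic-model distributions rather than raw word counts.

```python
from collections import Counter
from math import sqrt

def cosine_distance(a, b):
    # Distance between two word-count vectors (Counters);
    # 0.0 means identical word profiles, 1.0 means no overlap.
    dot = sum(a[w] * b[w] for w in set(a) | set(b))
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return 1.0 - (dot / norm if norm else 0.0)

def novelty(story_text, recent_texts):
    """Higher scores mean the story looks less like what the user
    has recently seen; bag-of-words is an illustrative shortcut."""
    story = Counter(story_text.lower().split())
    feed = Counter(w for t in recent_texts for w in t.lower().split())
    return cosine_distance(story, feed)
```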
The study contradicts popular narratives about Russian bots and automated systems manipulating online discourse. While bots certainly exist on social platforms, they distributed both true and false news at roughly equal rates, suggesting their influence may be overstated in public conversation.
“Bots are so new that we don’t have a clear sense of what they’re doing and how big of an impact they’re making,” explained Shawn Dorius, a social scientist at Iowa State University who wasn’t involved in the research. At least in this study, they weren’t skewing headlines toward false news.
The findings present troubling implications for information literacy in the digital age. David Lazer, a computational social scientist at Northeastern University who co-authored a related policy perspective in Science, argues that technology companies must implement stronger safeguards to mitigate the problem.
“The Facebooks, Googles, and Twitters of the world need to do more,” Lazer stated, while emphasizing that long-term solutions require deeper scientific understanding of misinformation dynamics. “If we don’t understand where fake news comes from and how it spreads, then how can we possibly combat it?”
As social media platforms continue to serve as primary news sources for millions, the research underscores how human psychology—not just technological manipulation—contributes to the proliferation of false information online, creating significant challenges for maintaining an accurately informed public.
10 Comments
The researchers’ observation about the impact of misinformation on people’s lives is a sobering reminder of the real-world consequences of fake news. This underscores the urgent need for solutions that prioritize truth and accountability online.
Absolutely. Fact-based, ethical journalism and digital literacy education should be at the forefront of efforts to address this challenge. Misinformation can have devastating impacts, so a comprehensive, multi-stakeholder approach is vital.
While the speed of false news spreading is alarming, I’m curious to know if the study also looked at the lifespan and reach of truthful information versus misinformation. Understanding those dynamics could help shape more effective interventions.
That’s a great point. The longevity and pervasiveness of false narratives versus factual reporting is an important factor to consider. Nuanced analysis of these patterns could yield valuable insights for combating the spread of misinformation.
Interesting, but not surprising. Humans are often more eager to spread sensational or attention-grabbing content, even if it’s not fully verified. This study highlights the need for better digital literacy and critical thinking when consuming social media.
Agreed. Fact-checking and source verification are crucial to combat the spread of misinformation. Social media platforms also have a responsibility to curb the amplification of false narratives.
This study reinforces the idea that human psychology and behavior, not just technological factors, are central to the misinformation problem. Developing a deeper understanding of the social and cognitive drivers behind the spread of fake news will be crucial.
Agreed. Tackling misinformation requires a holistic approach that goes beyond just technical solutions. Incorporating insights from psychology, sociology, and communication studies can help craft more effective interventions.
The finding that humans, not bots, are the primary drivers of fake news is troubling. It speaks to our innate human tendencies to share provocative content, even if it lacks credibility. Improving media literacy is key to addressing this issue.
You’re right. This study underscores the need for a multifaceted approach – empowering users to be more discerning consumers of online information, while also pushing for platform accountability and transparency.