In a landmark study spanning more than a decade, researchers at the Massachusetts Institute of Technology have found that false information spreads significantly faster and more broadly on Twitter than factual content, raising new concerns about the viral nature of misinformation in the digital age.
The comprehensive analysis examined 126,000 rumors and false stories shared on Twitter over an 11-year period, revealing that misinformation consistently outperformed accurate news in terms of reach and engagement. Perhaps most surprisingly, the research team found that humans, not automated bots, were primarily responsible for the rapid spread of false content.
“False news is more novel, and people are more likely to share novel information,” explained Professor Sinan Aral, one of the study’s co-authors. The research team noted that while novelty alone wasn’t definitively established as the cause, false news typically contained more surprising elements than factual reporting, potentially making it more share-worthy among users.
Political misinformation emerged as the most prevalent category of false content, a finding that takes on particular significance amid growing concerns about social media’s role in election interference and political polarization worldwide. The study also identified several other popular subjects for misinformation, including urban legends, business news, terrorism, science, entertainment, and natural disasters.
The findings challenge a common narrative that bot networks are primarily responsible for the spread of misinformation online. Instead, the research suggests that human psychology and social dynamics play a more fundamental role in the dissemination of false content than previously understood.
Twitter, which provided its data for the study, has acknowledged the challenges presented by misinformation on its platform. The company informed the BBC that it is working to develop a “health check” system to evaluate its contribution to public discourse—part of a broader industry response to mounting pressure from regulators and users concerned about social media’s impact on society.
The MIT study comes at a critical moment for social media companies, which face increasing scrutiny over their responsibility to moderate content and limit the spread of harmful misinformation. Recent years have seen widespread concern about the role of platforms like Twitter in amplifying conspiracy theories, political propaganda, and health misinformation, particularly during the COVID-19 pandemic.
Social media experts point out that the study’s findings reflect the fundamental challenge of combating misinformation in digital spaces. Unlike traditional media, where editorial gatekeeping helps filter out false claims before publication, social platforms rely primarily on user judgment and algorithmic distribution, which can prioritize engagement over accuracy.
“This research confirms what many have suspected—that our information ecosystem isn’t just vulnerable to misinformation; it actively rewards it,” said Dr. Claire Wardle, a disinformation researcher not involved in the study. “The novelty factor is particularly concerning because it suggests false information has inherent advantages in capturing attention in our current media environment.”
The implications extend beyond Twitter to other social platforms and raise important questions about potential regulatory approaches and technological solutions. Some experts suggest that slowing down the sharing process through friction-adding features could help users evaluate content more critically before spreading it further.
For everyday users, the research underscores the importance of digital literacy and critical thinking when consuming and sharing content online. As false news continues to demonstrate superior “virality,” the responsibility falls increasingly on individual users to verify information before contributing to its spread.
Twitter has stated that addressing these challenges remains a priority, though the platform continues to navigate the delicate balance between fostering free expression and limiting harmful misinformation—a tension that defines the broader conversation about social media’s place in democratic societies.
8 Comments
This study is a sobering reminder of the power of social media to amplify misinformation. It’s alarming that even well-intentioned people can unwittingly contribute to the problem. We must find ways to counter this trend without stifling legitimate discourse.
Agreed. Striking the right balance between free speech and misinformation control is no easy task. Perhaps a combination of digital literacy campaigns, platform reforms, and fact-checking initiatives could help curb the spread of fake news.
While the findings are concerning, I’m not shocked. Humans are biased towards attention-grabbing narratives, even if they lack factual basis. Addressing this challenge requires a cultural shift towards more thoughtful, discerning information consumption.
Absolutely. Educating the public on spotting misinformation and verifying sources is crucial. Tech companies also have a responsibility to design platforms that incentivize the spread of truth over falsehoods.
The finding that humans, not bots, are primarily responsible for the rapid spread of false content is particularly concerning. It suggests that our own cognitive biases and social behaviors are major contributors to the misinformation crisis. We have work to do.
Fascinating study, but not entirely surprising. The human tendency to share novel, sensational content is a major driver behind the rapid spread of misinformation. This underscores the importance of media literacy and critical thinking skills.
Agreed. Curbing the spread of fake news will require a multi-pronged approach that addresses both individual behavior and systemic issues around social media platforms and information ecosystems.
This study underscores the urgent need for a societal reckoning with the dynamics of information sharing in the digital age. Combating misinformation will require a holistic approach that addresses both individual and systemic factors driving its spread.