The rise of social media has fundamentally transformed how information spreads through society, creating unprecedented vulnerabilities in human cognition that can be exploited at scale. Researchers at the University of Warwick in England and Indiana University Bloomington’s Observatory on Social Media (OSoMe) have been studying these cognitive blind spots, developing tools to identify manipulation and understand how misinformation proliferates online.
Consider the case of Andy, who became increasingly skeptical about COVID-19 as the pandemic threatened his livelihood. At first he dismissed claims that the pandemic was overblown, but his perspective shifted as financial concerns mounted. After joining online groups of similarly worried individuals, he eventually embraced the conviction that COVID was a hoax, attending maskless rallies that reinforced his new beliefs.
This pattern illustrates several cognitive biases that once served evolutionary purposes but now make us vulnerable in the digital age. We naturally prefer information from people we trust, pay closer attention to potential threats, and seek out content that confirms existing beliefs. While these tendencies historically helped humans survive, modern technology now amplifies these biases to dangerous degrees.
“These mental shortcuts influence which information we search for, comprehend, remember and repeat to a harmful extent,” explains the research team. Search engines and social media platforms compound the problem by serving personalized content that reflects users’ existing preferences, creating feedback loops that reinforce beliefs regardless of accuracy.
Information overload has become a critical factor in this ecosystem. With limitless content competing for limited attention, high-quality information struggles to stand out. Researchers at OSoMe demonstrated through computer modeling that even when users prefer quality content, the statistical consequences of information proliferation in networks with limited attention inevitably lead to the sharing of low-quality or false information.
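The OSoMe model itself is more elaborate, but the dynamic can be sketched in a few lines of Python. The agent counts, memory size, and sharing probability below are illustrative assumptions rather than the study's parameters: agents hold only a handful of memes at a time, prefer quality when resharing, and popularity can still drift away from quality.
```python
import random

# Minimal sketch (not the OSoMe model itself): each agent keeps a short memory
# (limited attention) and, at each step, either posts a new meme of random
# quality or reshares one from its feed, favoring higher quality.

random.seed(42)

NUM_AGENTS = 200
MEMORY_SIZE = 5          # limited attention: how many memes an agent can hold
STEPS = 20000
P_NEW_MEME = 0.1         # chance of introducing a brand-new meme vs. resharing

feeds = {a: [] for a in range(NUM_AGENTS)}   # each feed item: (meme_id, quality)
share_counts = {}                            # meme_id -> times shared
qualities = {}                               # meme_id -> quality in [0, 1]
next_id = 0

for _ in range(STEPS):
    agent = random.randrange(NUM_AGENTS)
    feed = feeds[agent]
    if not feed or random.random() < P_NEW_MEME:
        quality = random.random()
        meme = (next_id, quality)
        qualities[next_id] = quality
        next_id += 1
    else:
        # Preference for quality: pick a meme with probability proportional to
        # its quality, yet popularity still decouples from quality over time.
        total = sum(q for _, q in feed)
        r, acc = random.uniform(0, total), 0.0
        for meme in feed:
            acc += meme[1]
            if acc >= r:
                break
    share_counts[meme[0]] = share_counts.get(meme[0], 0) + 1
    # Push the meme to a few random followers; old items fall out of memory.
    for follower in random.sample(range(NUM_AGENTS), 3):
        feeds[follower] = (feeds[follower] + [meme])[-MEMORY_SIZE:]

top = sorted(share_counts, key=share_counts.get, reverse=True)[:10]
print("avg quality of 10 most-shared memes:",
      round(sum(qualities[m] for m in top) / 10, 2))
print("avg quality of all memes:",
      round(sum(qualities.values()) / len(qualities), 2))
```
The two averages printed at the end make it easy to compare how the most-shared memes score against the overall pool.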
Social conformity exacerbates these problems. Studies show that when people can observe others’ choices, they tend to conform to popular behaviors, creating pressure that can override individual preferences. On social media, indicators of popularity such as likes and shares serve as quality signals, despite offering no independent assessment of accuracy or value.
“Few people realize that these cues do not provide independent assessments of quality,” the researchers note. “In fact, programmers who design the algorithms for ranking memes on social media assume that the ‘wisdom of crowds’ will quickly identify high-quality items; they use popularity as a proxy for quality.”
This dynamic enables the formation of echo chambers – segregated communities where like-minded people reinforce each other’s views while insulating themselves from contrary perspectives. The OSoMe team’s simulation tool, EchoDemo, demonstrates how social influence and unfollowing behaviors can rapidly accelerate political polarization, creating environments where misinformation thrives unchallenged.
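EchoDemo is an interactive demo, but the mechanism it illustrates can be approximated in a short opinion-dynamics sketch. The influence strength and tolerance threshold below are made-up values, not the tool's actual settings: agents nudge their opinions toward posts they tolerate and unfollow anyone who strays too far.
```python
import random

# Rough sketch of the dynamic EchoDemo illustrates (not the OSoMe code):
# agents hold opinions in [-1, 1], drift toward posts from people they follow,
# and unfollow anyone whose posts sit outside their tolerance.

random.seed(1)

N = 100
INFLUENCE = 0.1          # how strongly a seen post pulls an agent's opinion
TOLERANCE = 0.6          # unfollow when the opinion gap exceeds this threshold

opinions = [random.uniform(-1, 1) for _ in range(N)]
following = {i: set(range(N)) - {i} for i in range(N)}   # start fully connected

for step in range(5000):
    poster = random.randrange(N)
    post = opinions[poster]
    for reader in range(N):
        if poster not in following[reader]:
            continue
        gap = post - opinions[reader]
        if abs(gap) > TOLERANCE:
            following[reader].discard(poster)     # unfollow: echo chamber forms
        else:
            opinions[reader] += INFLUENCE * gap   # social influence: opinions converge

camps = sum(1 for o in opinions if o > 0), sum(1 for o in opinions if o <= 0)
print("opinions > 0 vs <= 0:", camps)
print("average remaining follows per agent:",
      round(sum(len(f) for f in following.values()) / N, 1))
```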
Perhaps most concerning is the role of automated accounts, or “bots,” in manipulating these cognitive vulnerabilities. Using Botometer, a tool developed to detect social bots, OSoMe researchers estimated that up to 15% of active Twitter accounts were automated during the 2016 U.S. election period. These bots played a significant role in spreading misinformation by creating the illusion of popularity around certain content, triggering human users’ tendency to trust widely shared information.
“Bots can effectively suppress the entire ecosystem’s information quality by infiltrating only a small fraction of the network,” the researchers found. Some manipulators operate bots on opposing sides of political divides, deliberately driving polarization or generating revenue through ad traffic.
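As a back-of-the-envelope illustration of that finding, not the researchers' actual model, the sketch below assumes roughly 5% of accounts are bots pushing a single low-quality item while humans treat share counts as a quality signal. The item counts and round numbers are arbitrary assumptions.
```python
import random

# Illustrative sketch (assumed parameters, not the study's model): a small
# fraction of bot accounts reshares one low-quality item, while humans share
# whatever currently looks most popular. Popularity cues then promote junk.

random.seed(7)

NUM_ITEMS = 50
qualities = [random.random() for _ in range(NUM_ITEMS)]
junk = qualities.index(min(qualities))        # the item the bots push
shares = [1] * NUM_ITEMS

HUMANS, BOTS = 950, 50                        # bots as ~5% of active accounts

for _ in range(20):                           # rounds of activity
    for _ in range(BOTS):
        shares[junk] += 1                     # bots always amplify the junk item
    for _ in range(HUMANS):
        # Humans treat popularity as a quality signal: share proportionally
        # to current share counts ("widely shared, so it must be good").
        total = sum(shares)
        r, acc = random.uniform(0, total), 0
        for item, count in enumerate(shares):
            acc += count
            if acc >= r:
                shares[item] += 1
                break

ranked = sorted(range(NUM_ITEMS), key=lambda i: shares[i], reverse=True)
print("rank of the lowest-quality item:", ranked.index(junk) + 1, "of", NUM_ITEMS)
```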
To combat these challenges, researchers have developed several tools for public use. Fakey, a mobile app, helps users identify misinformation by simulating a social media feed. Hoaxy visualizes how content spreads through Twitter networks, highlighting bot activity. BotSlayer flags trending topics likely being amplified by coordinated inauthentic accounts.
Beyond these technological solutions, researchers suggest structural changes to the information ecosystem. Adding “friction” to sharing – through small fees, time investments, or cognitive tasks – could discourage the rapid spread of low-quality content. Some platforms have already implemented measures like CAPTCHAs and limits on automated posting.
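The details vary by platform, but a minimal sketch of such friction, using an assumed cooldown and daily cap rather than any real platform's policy, might look like this: rapid-fire resharing becomes costly for automated accounts while ordinary users barely notice.
```python
import time

# Toy illustration of "friction" on sharing (not any platform's real policy):
# a per-user cooldown plus a daily cap slows down rapid, repeated resharing.

COOLDOWN_SECONDS = 30          # assumed values, purely illustrative
DAILY_SHARE_CAP = 100

class ShareLimiter:
    def __init__(self):
        self.last_share = {}       # user_id -> timestamp of last share
        self.daily_count = {}      # user_id -> shares so far today

    def allow_share(self, user_id, now=None):
        now = time.time() if now is None else now
        if self.daily_count.get(user_id, 0) >= DAILY_SHARE_CAP:
            return False                           # daily cap reached
        last = self.last_share.get(user_id)
        if last is not None and now - last < COOLDOWN_SECONDS:
            return False                           # still cooling down
        self.last_share[user_id] = now
        self.daily_count[user_id] = self.daily_count.get(user_id, 0) + 1
        return True

limiter = ShareLimiter()
print(limiter.allow_share("alice"))        # True: first share goes through
print(limiter.allow_share("alice"))        # False: blocked by the cooldown
```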
“Free communication is not free,” the researchers conclude. “By decreasing the cost of information, we have decreased its value and invited its adulteration.” Understanding and addressing these cognitive vulnerabilities represents one of the most significant challenges in preserving healthy democratic discourse in the digital age.


8 Comments
This is a timely and important issue, especially as misinformation around topics like the pandemic and elections can have real-world consequences. I’m curious to learn more about the tools researchers are developing to identify manipulation and understand the dynamics of online misinformation spread.
The study highlights how cognitive biases that evolved for survival purposes can now make us vulnerable to fake news in the digital age. It’s a sobering reminder that technology is a double-edged sword that requires careful stewardship.
Agreed. As technology continues to advance, we’ll need to find ways to harness its benefits while also protecting against the risks of misinformation and manipulation. It’s a complex challenge with no easy solutions.
The case study of Andy illustrates how financial pressures and social media echo chambers can lead people down the path of conspiracy theories. Striking a balance between addressing legitimate concerns and countering false narratives is a real challenge.
Absolutely. Fact-checking and media literacy education will be crucial to help people navigate the information landscape and avoid falling for misinformation, especially during times of uncertainty.
This is a concerning trend that undermines public discourse and trust in institutions. I hope the research leads to more effective interventions to counter the spread of misinformation and help people develop critical thinking skills to navigate the digital landscape.
Interesting research on how information overload and social media can fuel the spread of misinformation. It highlights how cognitive biases and the desire for certainty in uncertain times can make people vulnerable to fake news. A concerning trend that platforms need to address.
Agreed. Platforms need to do more to combat the rapid spread of misinformation, while also empowering users to think critically about the content they consume online.