Social Media Platforms Can Combat Misinformation Without Censorship, Study Finds
The battle against misinformation on social media has traditionally focused on content moderation, fact-checking, and user education. However, a new study suggests that simple structural changes to how content spreads could be equally effective without raising thorny issues of censorship.
Research published in the Proceedings of the National Academy of Sciences by Duke University economist David McAdams and his colleagues demonstrates that social networks can significantly reduce false information by implementing caps on message sharing—either by limiting how far messages can travel or restricting how broadly they can be distributed.
“A tacit assumption has been that censorship, fact-checking and education are the only tools to fight misinformation,” says McAdams, who holds faculty positions in Duke’s economics department and the Fuqua School of Business. “We show that caps on either how many times messages can be forwarded or the number of others to whom messages can be forwarded increase the relative number of true versus false messages circulating in a network.”
The research suggests platforms could implement these limitations without determining what content is true or false, sidestepping controversial decisions about who should police content. For instance, Twitter (now X) could restrict how many users see any given retweet in their feeds, effectively limiting the “network breadth” of content sharing.
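To make the mechanism concrete, here is a minimal sketch of how depth and breadth caps affect a toy branching-process model of message spread. It is not the model from the paper: the share probabilities, the assumption that false messages are slightly more "shareable," and all parameter values are illustrative choices for this example only.

```python
import random

def simulate_spread(is_true, depth_cap, breadth_cap, seeds=100,
                    p_share_true=0.4, p_share_false=0.5):
    """Toy branching-process model of message spread.

    Each recipient re-shares with a probability that depends on whether
    the message is true or false (false items are assumed slightly more
    "shareable" here, purely for illustration), passes it to at most
    `breadth_cap` new users, and no message travels more than
    `depth_cap` hops from its origin.
    """
    p_share = p_share_true if is_true else p_share_false
    total_reached = 0
    for _ in range(seeds):
        frontier = 1                      # users holding the message at this hop
        for _ in range(depth_cap):        # forwarding-depth cap
            new_frontier = 0
            for _ in range(frontier):
                if random.random() < p_share:
                    # breadth cap: each share reaches at most breadth_cap users
                    new_frontier += random.randint(1, breadth_cap)
            total_reached += new_frontier
            frontier = new_frontier
            if frontier == 0:
                break
    return total_reached

if __name__ == "__main__":
    random.seed(0)
    for depth_cap, breadth_cap in [(10, 10), (10, 3), (3, 10)]:
        true_reach = simulate_spread(True, depth_cap, breadth_cap)
        false_reach = simulate_spread(False, depth_cap, breadth_cap)
        print(f"depth<={depth_cap}, breadth<={breadth_cap}: "
              f"true/false reach ratio = {true_reach / false_reach:.2f}")
```

In this simplified setting, tightening either cap trims the exponential tail of the more shareable (false) messages more than that of true ones, so the ratio of true to false reach improves even though both spread less, which is the qualitative effect the study describes.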
Some major platforms have already experimented with similar approaches. In 2020, Facebook implemented restrictions that capped message forwarding at five people or groups, partly to combat COVID-19 and election misinformation. WhatsApp introduced comparable limits earlier that year after false information spreading on the platform was linked by Indian officials to more than a dozen deaths.
These real-world applications align with the theoretical models developed by McAdams and his co-authors, Stanford economist Matthew Jackson and Cornell economist Suraj Malladi.
The research acknowledges that misinformation spreading through social networks can cause significant harm. “Some people might start believing things that are false and that can harm them or others,” McAdams explains. Beyond direct harm, persistent misinformation erodes trust in platforms, potentially causing users to dismiss accurate, helpful information.
However, the researchers emphasize that any approach to limiting content spread must be carefully balanced. “If you limit sharing, you could also be limiting the spread of good information, so you might be throwing the baby out with the bathwater,” McAdams cautions. “Our analysis explores how to strike that balance.”
The study comes at a critical moment for social media platforms, which face mounting pressure from governments worldwide to better control harmful content without overreaching into censorship. Meta, Google, and X have all invested heavily in content moderation systems, but these approaches remain controversial and expensive.
The structural approach suggested by the research offers platforms a potentially less contentious alternative that doesn’t require making subjective judgments about content truthfulness. Instead, it focuses on how information propagates through networks, regardless of its veracity.
While they would not eliminate misinformation entirely, these sharing limitations could reduce its prevalence and impact until more comprehensive solutions are developed. The researchers suggest this approach serves as an important intermediary step in addressing what has become one of the most challenging aspects of the digital information ecosystem.
As platforms continue to evolve their approaches to content moderation, this research provides a framework for considering how network structure itself—not just content policies—can play a crucial role in maintaining information integrity online.