Social media platforms face a complex challenge in combating misinformation, according to Harvard Law School lecturer evelyn douek, who specializes in the global regulation of online speech and the design of private content moderation systems.
In a recent interview, douek emphasized that “fake news” is a problematic term that has been co-opted to simply mean “news I don’t like.” She suggests a more nuanced approach is necessary, referencing the taxonomy developed by Wardle and Derakhshan that separates problematic content into three categories: disinformation (deliberately false information intended to cause harm), misinformation (false information without harmful intent), and mal-information (factual information shared to cause harm).
“Different types of problematic content may require different interventions,” douek explains. “The response to profit-driven clickbait should differ from approaches to state-backed information operations targeting electoral processes.”
Rather than pursuing simplistic solutions that require platforms to adjudicate truth across all content, douek advocates moving beyond the binary “take-down or leave-up” paradigm. She recommends drawing on the full range of interventions available to platforms, including labeling content, providing context, reducing virality through downranking, highlighting authoritative information, and facilitating effective counter-messaging.
“Requiring platforms to determine whether all content is true or false and removing anything deemed ‘false’ is not practically possible, endangers legitimate freedom of expression, and may not effectively correct false beliefs,” douek notes. She stresses that addressing misinformation requires “imagination, experimentation, and independent empirical studies” to assess intervention effectiveness.
When discussing institutional structures for content moderation, douek highlights the importance of process over substance. “Substantive disputes around the proper limits of free speech are intractable and have been debated for centuries,” she says. Given these ongoing disagreements, douek suggests focusing on ensuring rules adhere to fundamental principles like transparency and due process.
For platforms, this means providing detailed explanations of their rules and the rationale behind them. Equally important is establishing accountability mechanisms to evaluate enforcement practices, identify biases, and assess false positives and negatives in content moderation at scale.
The development of appropriate institutional frameworks remains an evolving landscape. “Social media companies are experimenting, governments are writing reports and passing new laws, and civil society groups are working on recommendations,” douek observes, pointing to initiatives like the Santa Clara Principles as valuable starting points.
Regarding Facebook’s Oversight Board, douek expresses cautious optimism about its potential to improve content moderation. She identifies two main benefits: introducing an independent check on Facebook’s policy decisions that prioritizes public interest over business concerns, and creating a more transparent process for discussing and challenging content moderation rules.
“Having a process for people to challenge decisions and have their viewpoints heard is an important step forward and can help people accept rules even if they disagree with them,” douek explains.
The Facebook Oversight Board represents a strategic gamble by the company: a bet that the potential legitimacy gains will outweigh the cost of abiding by occasional unfavorable rulings. Whether other platforms will follow suit depends largely on how this experiment unfolds and whether regulatory pressure increases.
Looking forward, douek raises an intriguing question about whether centralized content moderation across platforms is preferable to multiple competing “laboratories of online governance.” This tension between standardization and diversity in platform governance approaches represents one of the field’s most fascinating challenges, with different content types potentially requiring different solutions.
9 Comments
This interview highlights the challenges social media platforms face in moderating content. Douek’s taxonomy provides a useful framework for understanding the different types of misinformation and how to tailor responses accordingly.
Definitely, a more nuanced approach is needed rather than just relying on binary choices of take-down or leave-up. Platforms require a variety of interventions.
Douek makes a good point about the term ‘fake news’ being co-opted. Focusing on the intent and impact behind different types of problematic content is a more constructive way to address the issue.
Interesting take on the complexities of combating fake news. Douek’s point about differentiating between disinformation, misinformation, and mal-information is insightful. Nuanced approaches seem necessary rather than one-size-fits-all solutions.
I agree, the distinctions she draws are important. Platforms need flexible tools to address the various forms of problematic content.
This is a thoughtful analysis of the complexities involved in combating misinformation on social media. Douek’s emphasis on tailored interventions based on the nature of the content is an insightful recommendation.
Douek’s taxonomy of disinformation, misinformation, and mal-information is a helpful framework for understanding the different forms of problematic content online. Platforms would do well to consider her recommendations.
This is a valuable discussion on the challenges of combating fake news. Douek’s perspective on the need for flexible, context-specific interventions is thought-provoking.
The interview highlights the need for social media platforms to move beyond simplistic solutions when it comes to moderating content. Douek’s suggestions for a more nuanced approach seem prudent.