Social media platforms are evolving their approaches to combating misinformation, and the mechanisms for user reporting and community-based fact-checking now vary considerably across the major networks.
X (formerly Twitter) has recently dismantled its direct misinformation reporting feature, pivoting instead to a collaborative content moderation system called “Community Notes.” This system allows eligible users to provide contextual information and fact-checking beneath potentially misleading posts. To participate in this program, users must meet several criteria: their account must be at least six months old, have a verified phone number, and maintain a clean record without recent rule violations.
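For readers who think in code, the stated eligibility rules reduce to a simple predicate. The sketch below is purely illustrative; the field names, the 182-day approximation of "six months," and the violation counter are assumptions for demonstration, not X's actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Account:
    created_at: datetime          # when the account was registered
    phone_verified: bool          # whether a phone number is verified
    recent_rule_violations: int   # rule violations in the recent lookback window

def eligible_for_community_notes(account: Account, now: datetime | None = None) -> bool:
    """Illustrative check of X's stated Community Notes criteria:
    an account at least six months old, a verified phone number,
    and no recent rule violations. All field names are hypothetical."""
    now = now or datetime.utcnow()
    six_months = timedelta(days=182)  # rough stand-in for "six months"
    return (
        now - account.created_at >= six_months
        and account.phone_verified
        and account.recent_rule_violations == 0
    )
```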
The removal of X’s direct reporting tool represents a significant shift in how the platform addresses false information, placing greater responsibility on its user community rather than centralized moderation teams. This change comes amid ongoing debates about content moderation responsibilities on social media platforms.
Meta’s family of applications—Facebook, Instagram, and Threads—continues to offer direct reporting options for users who encounter misinformation. On Instagram and Threads, users can report false information by accessing the “More options” menu (via the three dots next to an account name), selecting “Report,” and then choosing “False information.”
Facebook follows a similar but slightly more detailed process. Users must click the three dots by a post to open “Options,” select “Report post,” then “Scam, fraud or false information,” followed by “Sharing false information.” The platform then asks users to specify the type of misinformation they’ve identified.
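Taken together, Meta's reporting flows are just short sequences of menu choices that differ per app. The mapping below is a documentation aid that mirrors the steps described above; Meta exposes no public API for these flows, so the strings are descriptive only:

```python
# Menu paths for reporting false information on Meta's apps,
# as described above. Descriptive only; not an official API.
REPORT_PATHS = {
    "Instagram": ["More options (three dots)", "Report", "False information"],
    "Threads": ["More options (three dots)", "Report", "False information"],
    "Facebook": [
        "Options (three dots)",
        "Report post",
        "Scam, fraud or false information",
        "Sharing false information",
    ],
}

for app, steps in REPORT_PATHS.items():
    print(f"{app}: " + " -> ".join(steps))
```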
Meanwhile, Meta has begun implementing its own version of “Community Notes” in the United States, mirroring X’s approach. The system similarly relies on user contributors to collaboratively add context to potentially misleading content. To become a contributor on Meta platforms, users must be at least 18 years old with a verified phone number and a six-month-old account in good standing.
This shift toward community-based fact-checking represents a broader industry trend, with major platforms increasingly engaging users in content verification processes rather than relying solely on internal teams or third-party fact-checkers.
YouTube offers yet another approach to handling misinformation. Users in the United States can report misleading content through a step-by-step process: clicking the three dots below a video, selecting "Report," choosing "Misinformation" from the options, and then providing additional context.
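Reporting can also be done programmatically: the YouTube Data API v3 exposes a videos.reportAbuse method, with valid reason IDs enumerated by videoAbuseReportReasons.list. The sketch below is a minimal illustration, not production code; it assumes `youtube` is an already-authorized client (built via googleapiclient.discovery with OAuth credentials granting the youtube.force-ssl scope), and whether a misinformation reason is offered depends on the reasons the API returns:

```python
def list_report_reasons(youtube):
    """Fetch the current set of valid abuse-report reasons
    (the misinformation category, where available, appears here)."""
    response = youtube.videoAbuseReportReasons().list(part="snippet").execute()
    return [(item["id"], item["snippet"]["label"]) for item in response.get("items", [])]

def report_video(youtube, video_id: str, reason_id: str, comments: str = "") -> None:
    """File an abuse report against a video. reason_id must come from
    list_report_reasons(), not from a hard-coded string."""
    youtube.videos().reportAbuse(
        body={
            "videoId": video_id,
            "reasonId": reason_id,
            "comments": comments,  # optional free-text context
        }
    ).execute()
```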
The platform also gives users a channel to push for stronger measures against specific types of misinformation. For instance, users concerned about climate misinformation can ask YouTube to tighten its algorithmic controls, update its content policies, and work with independent fact-checkers to inform viewers who have engaged with problematic content.
These varying approaches reflect the complex challenges social media companies face in balancing free expression with responsibility for potentially harmful content. As misinformation continues to present significant societal challenges, from election interference to public health concerns, platforms appear to be experimenting with different models that distribute the work of content verification across their user communities.
The effectiveness of these community-based approaches remains to be seen, particularly as they rely heavily on user participation and goodwill. Critics argue that offloading moderation responsibilities to users may allow platforms to avoid accountability, while supporters suggest it creates more transparent and democratic systems for addressing false information.
As social media continues to serve as a primary information source for billions of people worldwide, these evolving approaches to misinformation will likely remain at the center of ongoing discussions about digital literacy, platform responsibility, and the future of online discourse.