Majority of Americans Support Removing Health Misinformation from Social Media, Study Finds
Nearly three-quarters of Americans believe social media platforms should remove inaccurate health information, according to a new survey from Boston University that reveals rare cross-partisan agreement on content moderation.
The poll, conducted by researchers at the university’s Communication Research Center, found that 72% of Americans support the removal of false public health information from social platforms—with 85% of Democrats, 70% of Independents, and 61% of Republicans in agreement.
“The integrity of public discourse is at risk as political leaders push the boundaries of truth,” said Michelle Amazeen, associate professor at Boston University’s College of Communication and director of the Communication Research Center. “With social media companies abandoning their fact-checking programs, it is more urgent than ever for these platforms to take meaningful action, given their pivotal role in shaping the national conversation.”
The survey also revealed substantial support for other moderation approaches, with 65% of respondents approving of platforms “downranking” or reducing the visibility of health misinformation. Similarly, 63% supported the use of independent fact-checking organizations to verify social media content related to public health issues.
However, the “community notes” model—where users write and rate notes that appear alongside posts—received considerably less support, with only 48% of respondents expressing approval. This lukewarm reception crossed political lines, suggesting Americans prefer more structured approaches to content verification.
“The results so far of social media platforms relying on users to rate the accuracy of posts are sobering,” Amazeen noted. “Despite the presence of the community notes programs, social media platforms that use this model remain rife with misinformation.”
The findings come at a critical moment for social media governance, as several major platforms have scaled back their fact-checking initiatives in recent years. Twitter (now X) dismantled much of its trust and safety infrastructure following Elon Musk’s acquisition, while Meta has reduced investments in content moderation across Facebook and Instagram.
Public health experts have expressed growing concern about the spread of health misinformation, particularly after witnessing its impact during the COVID-19 pandemic. The World Health Organization has previously identified health misinformation as a serious threat to global public health, capable of undermining confidence in medical interventions and contributing to preventable harm.
The Boston University researchers also explored public willingness to financially support fact-checking initiatives. When asked if they would donate $1 to fund independent fact-checking through a crowdfunding campaign, only 32% of respondents indicated they would, while 36% said they would not. The remaining 32% were undecided.
“Shifting content moderation responsibilities onto users is yet another instance of platforms avoiding their obligation to ensure the safety of their digital products,” Amazeen added. “Neglecting content moderation puts social media platforms at risk of amplifying disinformation from those in power. Implementing effective accountability measures is crucial, particularly as a new administration with a track record of using disinformation as a tool assumes office.”
The Media & Technology Survey was conducted online on January 15-16, 2025, with a sample of 1,003 respondents. The data were weighted to match U.S. population demographics by region, gender, age, and education, with a credibility interval of plus or minus 3.5 percentage points.
As platforms continue to navigate the complex landscape of content moderation, this research suggests that despite the polarized political climate in the United States, there remains substantial public consensus that harmful health misinformation warrants intervention—a finding that could inform future policy decisions by both social media companies and regulators.
10 Comments
With political polarization on the rise, it’s encouraging to see some issues where there is common ground between Democrats, Independents and Republicans. Tackling health misinformation seems like a sensible area for bipartisan cooperation.
I agree, public health should not be a partisan issue. Hopefully this survey will motivate social media companies to take more decisive action against the spread of harmful misinformation.
This is an important survey that underscores the public’s growing concerns about the spread of health-related misinformation online. Removing false claims seems like a reasonable step, though the platforms will need to do so carefully to avoid stifling legitimate debate.
Interesting to see broad public support for removing health misinformation from social media, even across party lines. This highlights the importance of maintaining accurate information, especially on critical public health issues.
Yes, it’s a complex issue but the public seems to recognize the risks of unchecked misinformation. Platforms will need to balance free speech with their responsibility to curb the spread of falsehoods.
It’s promising to see such strong bipartisan support for removing false health information from social media. This could be an area where policymakers find common ground and push for meaningful reforms.
The findings highlight the public’s desire for social media platforms to take a more active role in moderating health-related content. It will be interesting to see if this translates into concrete policy changes by the major tech companies.
Indeed, the challenge will be crafting moderation policies that effectively tackle misinformation without unduly restricting free speech. Balancing those priorities is no easy task.
This survey underscores the public’s growing awareness of the dangers posed by online health misinformation. Platforms will need to take decisive action, but must do so in a way that respects free expression.
Absolutely, striking the right balance between content moderation and free speech will be critical. Nuanced policymaking will be needed to address this complex issue effectively.