Americans Divided on Social Media’s Use of AI to Fight Misinformation
About seven-in-ten Americans use social media platforms to connect with others, share aspects of their lives, and consume information. But the content they encounter is increasingly shaped not just by their own choices, but by sophisticated algorithms and artificial intelligence technologies deployed by these platforms.
A new Pew Research Center survey reveals Americans are deeply split over social media companies’ use of algorithms to identify false information—with 38% viewing this as a positive development for society, 31% considering it negative, and a similar share uncertain about its impact.
These algorithms serve multiple functions on social media platforms: determining what content users see in their feeds, targeting advertisements, making content recommendations, and helping identify and remove problematic content including hate speech and misinformation.
The technology is clearly visible: 74% of social media users report having seen content flagged or labeled as false on these platforms. Three-quarters of Americans say they’ve heard at least something about these computer programs, though only 24% report having heard a lot about them.
Sharp partisan divides characterize attitudes toward these tools. Republicans are significantly more likely than Democrats to view algorithmic content moderation as harmful. For example, 84% of Republicans believe political viewpoints are being censored due to these algorithms, compared to 56% of Democrats. Similarly, 81% of Republicans versus 55% of Democrats believe news and information are being wrongly removed from platforms.
“Even with some across-the-aisle agreement, there are stark partisan differences on the potential impacts we explored,” noted the researchers. “Republicans are far more likely than Democrats to say the widespread use of these algorithms is a bad idea for society.”
The research found a widespread belief that algorithms are negatively impacting the online information environment. Seven-in-ten adults say political viewpoints are being censored, and 69% believe legitimate news and information are being wrongly removed. Meanwhile, only about four-in-ten believe these technologies are enabling more meaningful online conversations or making it easier to find trustworthy information.
When it comes to how these systems should operate, Americans express clear preferences. A strong majority (69%) believe accuracy should be prioritized over speed in content moderation decisions, even if it means some false information remains online longer. Only 28% prioritize quick decisions, even at the risk of mistakenly removing accurate information.
Most Americans (71%) also believe decisions about what constitutes false information should involve human judgment—either exclusively or in combination with algorithms. Only 6% think these decisions should be made primarily by computer programs alone.
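These two preferences—accuracy over speed, and human judgment alongside algorithms—can be made concrete with a small sketch. The example below is purely illustrative and does not describe any platform’s actual system; the thresholds, the `classify_claim` model call, and the `route` function are all hypothetical, invented only to show how a pipeline might auto-act on high-confidence cases while routing uncertain ones to human reviewers.

```python
# Illustrative sketch only (not any real platform's moderation system):
# a pipeline that prioritizes accuracy over speed by acting automatically
# only on high-confidence predictions and deferring uncertain posts to humans.

from dataclasses import dataclass, field
from typing import List

AUTO_FLAG_THRESHOLD = 0.95     # act automatically only when very confident
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain cases wait for human judgment

@dataclass
class Post:
    post_id: str
    text: str

@dataclass
class ModerationQueue:
    auto_flagged: List[Post] = field(default_factory=list)
    human_review: List[Post] = field(default_factory=list)

def classify_claim(post: Post) -> float:
    """Hypothetical model call returning P(post contains false information)."""
    # A real system would invoke a trained classifier here; this stub
    # returns a fixed score so the sketch is runnable.
    return 0.7

def route(post: Post, queue: ModerationQueue) -> None:
    score = classify_claim(post)
    if score >= AUTO_FLAG_THRESHOLD:
        queue.auto_flagged.append(post)   # label immediately
    elif score >= HUMAN_REVIEW_THRESHOLD:
        queue.human_review.append(post)   # slower, but a person decides
    # below the lower threshold: leave the post untouched
```

Raising `AUTO_FLAG_THRESHOLD` trades speed for accuracy, mirroring the 69% of respondents who would rather let some false information linger than mistakenly remove accurate content; the human-review queue reflects the 71% who want people involved in the final call.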
The survey reveals limited confidence in social media companies’ use of these tools, with 72% of Americans expressing little or no confidence that platforms will use algorithms appropriately to identify false information. A narrow majority (53%) worries that government regulators will not go far enough in regulating these technologies, while 44% fear excessive regulation.
Looking beyond social media to other algorithmic decision-making contexts, Americans express strong opposition to fully automated systems making final decisions in several high-stakes situations: 70% oppose algorithms having the final say on medical treatments, 64% on parole decisions, 60% on job applicant screenings, and 56% on mortgage approvals.
The findings highlight the complex challenges facing both technology companies and policymakers as they navigate public concerns about algorithmic systems that increasingly shape our information environment and many aspects of daily life.
14 Comments
The fact that three-quarters of Americans have heard about these content moderation algorithms suggests they are a hot-button issue. I’m curious to learn more about the specific concerns people have and how the technology could be improved to address them.
I agree, transparency around how these algorithms work and their accuracy/reliability will be key to building public trust. Striking the right balance between combating misinformation and preserving free expression is no easy task.
While I can see the value in using AI to flag potentially false content, I’m wary of social media companies having too much unchecked power to determine what information gets amplified or suppressed. This is a complex issue without easy answers.
The public debate over social media’s use of AI-powered content moderation highlights just how divisive and consequential this issue is. Reasonable people can disagree on where to draw the line between combating misinformation and preserving free expression.
Agreed. This is a nuanced topic without easy answers. Ongoing public discourse and oversight will be crucial as these technologies continue to evolve.
The idea of social media platforms using algorithms to identify and flag false information is a complex one. While it may help address the spread of misinformation, there are legitimate worries about unintended consequences and overreach. More transparency is needed.
It’s interesting to see the public so evenly split on this issue. The potential benefits of using AI to curb misinformation have to be weighed against concerns over censorship and lack of human oversight. No easy answers here.
Absolutely. Finding the right balance between limiting harmful misinformation and preserving free speech is a delicate challenge with no clear solutions. Ongoing public discussion will be critical.
It’s understandable that Americans are split on this issue. While AI-powered content moderation may help limit misinformation, there are valid concerns about the potential for overreach and unintended consequences. Careful implementation and strong guardrails will be key.
It’s understandable that Americans are divided on this issue. Algorithms can help identify misinformation, but they also raise concerns about censorship and biased content curation. More public dialogue and oversight may be needed.
Agreed. Transparency around how these systems work and ongoing public input will be crucial as social media platforms continue to leverage AI for content moderation.
Interesting to see the public divided on social media’s use of AI to combat misinformation. Even though the technology is clearly visible to most users, there’s no consensus on how to weigh its benefits against its drawbacks.
I can understand the hesitation around giving social media platforms more control over content curation. There are valid concerns about overreach and unintended consequences.
The use of AI to fight misinformation is a double-edged sword. While it may help limit the spread of false narratives, there are valid worries about these algorithms overstepping and stifling legitimate debate. A nuanced approach is needed.