AI Fact-Checkers More Effective at Combating Misinformation, But Political Divide Affects Trust
New research suggests that artificial intelligence tools used to combat misinformation on social media platforms may not work equally well across the political spectrum, revealing an unexpected partisan divide in how users evaluate fact-checking sources.
The study, forthcoming in MIS Quarterly, found that AI fact-checkers generally outperform their human counterparts in discouraging belief in false news—but primarily among progressive social media users. Conservative users showed similar responses to both AI and human fact-checking efforts, often prioritizing the news source’s reputation over the fact-checker’s identity.
“People that are conservative trust humans because they’re predictable, they’re reliable, they’re familiar, whereas perhaps progressives trust the technology,” explained Jason Thatcher, professor of information systems at the Leeds School of Business and co-author of the study.
The researchers conducted two large online experiments involving 370 active social media users in the United States and United Kingdom during the 2020 and 2022 news cycles. Rather than simply determining whether AI or human fact-checkers were more effective, the team focused on how users evaluated the source of fact-checks.
“We weren’t interested in which was more effective,” Thatcher noted. “We were interested in how people evaluated who did the rating.”
Participants were shown posts designed to mimic authentic social media content similar to what appears on Facebook or Reddit. The posts covered polarizing topics where misinformation frequently spreads, including climate change, vaccines, immigration, and taxes—with deliberately mixed accurate and false information reflecting real-world social media environments.
The researchers manipulated several variables: whether posts were fact-checked by AI systems, human fact-checkers, or not at all, and whether content appeared to come from high- or low-reputation sources. By having participants self-identify their political leanings, the team could compare responses across the political spectrum.
After viewing each post, participants rated its believability and indicated whether they would discuss, comment on, or share it. The experiment was replicated across both countries and during different news cycles to ensure the findings weren’t limited to one political context or time period.
The results revealed a consistent pattern: AI fact-checkers proved more effective overall at reducing belief in false information, but this effect was significantly stronger among progressive users. Conservative participants showed similar responses to both AI and human fact-checkers and placed greater emphasis on the original news source’s reputation when evaluating content.
This political divide in fact-checker perception presents a growing challenge for social media companies like Meta, Twitter (now X), and YouTube, which have increasingly relied on automated systems to flag misinformation at scale. These platforms have invested heavily in AI moderation tools to address content concerns while reducing dependence on human moderators.
Industry analysts note that the findings could influence how platforms design and deploy fact-checking systems. The political asymmetry in responses suggests that a one-size-fits-all approach may be ineffective in the increasingly polarized information landscape.
The research also highlighted complications that arise when false claims come from well-established or trusted sources, particularly when human fact-checkers are involved. This creates additional challenges for content moderation systems that must consider both the accuracy of information and its source.
“One fact-checking system is probably not going to work for everyone,” Thatcher concluded. “The solution is having more than one way of providing evidence, considering the source of information and helping people reach their own conclusions.”
The study comes at a critical time when social media platforms face mounting pressure to address misinformation while navigating accusations of political bias from various stakeholders. As AI tools become more central to content moderation strategies, understanding these perception gaps may prove essential for developing more effective and widely trusted systems.
The research team included Guohou Shan of Northeastern University’s D’Amore-McKim School of Business and Sunil Wattal of Temple University’s Fox School of Business, highlighting the cross-institutional interest in addressing this complex intersection of technology, psychology, and politics in the fight against misinformation.