Tech and AI Join Forces Against Health Misinformation
In an era where health misinformation spreads at unprecedented speeds through digital channels, researchers and tech companies are developing technological solutions to combat false information online. As generative AI increases both the volume and sophistication of health misinformation, the race is on to create equally advanced countermeasures.
Misinformation now proliferates through single clicks, bot networks, and AI-generated deepfakes. Some bad actors have even created deepfake versions of renowned doctors to lend credibility to fake treatments. The World Health Organization has raised alarms about these developments, particularly regarding their potential impact on vaccine trust and broader public health outcomes.
While evidence suggests technological solutions can effectively combat social media misinformation, researchers are concerned that major platforms’ interest in developing and refining these tools has diminished in recent years.
Current technologies for fighting misinformation range from algorithmic labeling of inaccurate content to downranking posts deemed false by AI systems. Mass awareness campaigns that encourage critical thinking among users are also being deployed as a complementary strategy.
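To make those mechanisms concrete, here is a minimal sketch of how a labeling-and-downranking pipeline might be wired together. The classifier stub, thresholds, and warning text are illustrative assumptions, not any platform's actual system:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical thresholds -- real platforms tune these empirically.
LABEL_THRESHOLD = 0.7     # attach a warning label above this score
DOWNRANK_THRESHOLD = 0.9  # also suppress distribution above this score

@dataclass
class Post:
    text: str
    rank_weight: float = 1.0            # multiplier used by the feed ranker
    warning_label: Optional[str] = None

def misinfo_score(post: Post) -> float:
    """Stand-in for a trained misinformation classifier.

    A real system would call a model or a fact-check database here;
    this placeholder simply returns a fixed score.
    """
    return 0.0

def moderate(post: Post) -> Post:
    score = misinfo_score(post)
    if score >= LABEL_THRESHOLD:
        post.warning_label = "Fact-checkers dispute claims in this post."
    if score >= DOWNRANK_THRESHOLD:
        post.rank_weight *= 0.1         # downranked posts reach far fewer feeds
    return post
```

The two-threshold design reflects the range the article describes: milder cases get a label while the post still circulates, and only high-confidence cases lose distribution.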
Cameron Martel, assistant professor of marketing at Johns Hopkins Carey Business School, explains that in the late 2010s and early 2020s, major platforms like Facebook and Twitter actively used algorithms to identify potentially false content and engaged third-party fact-checkers for verification.
In 2023, Martel led a large-scale study examining the effectiveness of warning labels on misinformation. The research, involving over 14,000 U.S. participants, found that fact-checking labels reduced belief in false information by nearly 28% and decreased the likelihood of sharing misinformation by roughly 25% compared to a control group. Notably, even among those with low trust in fact-checkers, warning labels still reduced misinformation sharing by more than 16%.
Despite these promising results, Meta announced in January 2025 that it would end its partnership with third-party fact-checkers. Instead, the company has adopted community notes, which let everyday users annotate posts with context about their accuracy. Notes rated helpful by users across the political spectrum are displayed prominently on posts.
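The requirement that agreement cross political lines is the central design choice. X's production system uses a more elaborate bridging algorithm, but the core idea can be sketched with a simplified scoring rule (the "left"/"right" labels are a hypothetical stand-in; real systems infer rater viewpoints from rating history rather than self-reports):

```python
def note_visibility_score(ratings):
    """Score a community note by cross-partisan agreement.

    `ratings` is a list of (leaning, helpful) pairs -- a deliberate
    simplification of how bridging-based ranking actually works.
    A note only scores well if BOTH sides rate it helpful.
    """
    left = [helpful for lean, helpful in ratings if lean == "left"]
    right = [helpful for lean, helpful in ratings if lean == "right"]
    if not left or not right:
        return 0.0  # no cross-partisan signal, so the note stays hidden
    left_rate = sum(left) / len(left)
    right_rate = sum(right) / len(right)
    if left_rate + right_rate == 0:
        return 0.0
    # Harmonic-mean-style combination: one side alone cannot carry the score.
    return 2 * left_rate * right_rate / (left_rate + right_rate)

print(note_visibility_score([("left", True), ("right", True)]))   # 1.0
print(note_visibility_score([("left", True), ("left", True)]))    # 0.0
print(note_visibility_score([("left", True), ("right", False)]))  # 0.0
```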
Martel suggests that such community-driven approaches can be effective if the process behind them is transparent and reasonable. His research published last year found that while users generally prefer expert fact-checkers, they can view “juries” of laypeople as equally or more trustworthy than experts under certain conditions—particularly when these groups are large enough, have consulted with each other, and include equal representation across political groups.
The rise of AI fact-checking tools presents both opportunities and challenges. A recent preprint study indicates that large language models (LLMs) like Perplexity and Grok generally align with community note decisions regarding misleading posts. However, these AI systems incorrectly labeled as true between 21% and 28% of posts that community notes had identified as misleading.
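An evaluation of that kind reduces to comparing paired labels on the same posts. A small sketch of how such a false-"true" rate could be computed, with entirely hypothetical data (only the structure mirrors the study):

```python
# Hypothetical paired labels: the community-note verdict vs. an LLM's
# verdict on the same post.
pairs = [
    ("misleading", "misleading"),
    ("misleading", "true"),        # the error mode the study highlights
    ("misleading", "misleading"),
    ("misleading", "misleading"),
]

flagged = [llm for note, llm in pairs if note == "misleading"]
false_true_rate = flagged.count("true") / len(flagged)
print(f"LLM called {false_true_rate:.0%} of misleading posts true")  # 25%
```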
Researchers have also observed that the launch of Grok on X in early March 2025 coincided with a substantial reduction in community note submissions, suggesting users may view AI as a replacement rather than a complement to human fact-checking efforts.
“Large language models don’t have any existing corpus of information about what’s happening currently,” Martel explains, noting the significant limitations of AI fact-checking during breaking news events. For example, Al Jazeera reported that Grok struggled to recognize AI-generated media in conflict situations and made numerous factual errors when addressing breaking news.
Martel believes that democratized fact-checking, AI systems, and professional fact-checkers “have great promise” when used in combination. AI could refer complex claims to human fact-checkers, while user feedback could help refine AI systems. However, he remains pessimistic about implementation, noting: “Right now, it seems like there is no corporate will to invest heavily in these types of content moderation practices.”
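Such a division of labor amounts to confidence-gated escalation: the model adjudicates only the claims it is sure about and routes the rest to people, whose verdicts flow back as training signal. A rough sketch, with an assumed threshold and function names of our own invention:

```python
CONFIDENCE_FLOOR = 0.85  # assumption: below this, a claim escalates to humans

def triage(llm_verdict: str, llm_confidence: float) -> str:
    """Route a claim between automated and human fact-checking."""
    if llm_confidence >= CONFIDENCE_FLOOR:
        return llm_verdict            # AI handles clear-cut claims at scale
    return "needs_human_review"       # complex or breaking-news claims escalate

def record_human_verdict(claim: str, verdict: str) -> None:
    """Feed professional fact-checkers' decisions back to refine the model."""
    # assumption: appended to a fine-tuning / evaluation dataset
    ...
```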
Hause Lin, a researcher at MIT and Cornell University who also works as a data scientist at the World Bank, advocates for “content-neutral” interventions that promote critical thinking skills. These approaches aim to help users identify propaganda tactics and misinformation patterns rather than targeting specific content.
Lin and his colleagues tested Facebook and Twitter ads that encouraged users to consider information accuracy before sharing it. Their Facebook study of 33 million users found that such prompts led to a 2.6% reduction in misinformation sharing among those who had previously shared false content. On Twitter, with data from over 157,000 users, accuracy prompts resulted in up to a 6.3% reduction in misinformation sharing.
While these percentages might seem small, Lin emphasizes that when applied to millions of users, the impact becomes significant. The interventions work by jolting users from emotional, reactive states to more reflective thinking modes.
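The arithmetic behind that point is straightforward. With an invented daily volume (only the 2.6% effect size comes from the study):

```python
# Hypothetical daily volume -- only the 2.6% effect size is from the study.
daily_misinfo_shares = 5_000_000   # assumed shares/day by prior sharers
reduction = 0.026                  # reduction reported in the Facebook study

prevented = daily_misinfo_shares * reduction
print(f"{prevented:,.0f} fewer misinformation shares per day")
# -> 130,000 fewer shares per day, before counting blocked re-shares
```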
However, Lin acknowledges that large-scale content moderation efforts may conflict with platforms’ profit motives. In a separate study on countering ethnic hate speech in Nigeria, Lin found that “prosocial” celebrity messages reduced hate content sharing but also decreased overall time spent on the platform.
As evidence grows that multipronged approaches can effectively combat health misinformation, the question remains whether social media companies will prioritize these initiatives for the public good or continue to place profit considerations first.