New Tool Tackles Dangerous Health Misinformation in Social Media and AI

A groundbreaking tool developed by University College London (UCL) researchers aims to combat potentially life-threatening health misinformation circulating on social media platforms and appearing in AI search summaries.

Unlike conventional fact-checking systems that simply categorize content as “true” or “false,” this innovative approach identifies content that may not be overtly false but still carries significant potential to mislead vulnerable individuals about diet, nutrition and vaccines.

The World Health Organization has classified health misinformation as a major public health threat. From extreme fasting regimens to dangerous dietary supplements, misleading health information can lead to serious harm. Studies suggest herbal and dietary supplements alone account for approximately 20 percent of drug-induced liver injury cases and send around 23,000 Americans to emergency rooms annually.

“When it comes to diet and nutrition, misinformation often operates through selective framing that masks potential health risks,” explained lead author and developer Alex Ruani from UCL. “Harmful misleading content tends to fly under fact-checkers’ radars and escape meaningful oversight until high-profile cases make the headlines.”

The team has documented numerous alarming examples of misinformation-related health incidents. In one 2025 case, doctors diagnosed cholesterol-induced skin lesions in a man who had adopted a carnivore diet — a trend researchers note is disproportionately amplified by social media algorithms, particularly within “manosphere” communities.

In another troubling instance, a person required hospitalization after following incorrect AI-generated advice that suggested replacing sodium chloride (table salt) with sodium bromide, a toxic substance with no dietary role. The researchers also pointed to cases where cancer patients abandoned life-saving treatments after encountering unproven dietary alternatives online.

Named the Diet-Nutrition Misinformation Risk Assessment Tool (Diet-MisRAT), the system analyzes content and evaluates how likely it is to mislead consumers. It then assigns a weighted misinformation risk score with a corresponding color-coded ranking: green, amber, or red.

For instance, content claiming “it is safer to give your child high-dose vitamin A than the MMR vaccine” would receive a critical risk classification (red) due to its false safety framing that could lead parents to avoid essential vaccinations.
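The scoring-and-banding step described above can be sketched in a few lines. The criteria, weights, and thresholds below are illustrative assumptions for the purpose of the example, not the published Diet-MisRAT values:

```python
# Hypothetical sketch of a weighted risk score mapped to color bands in the
# style of Diet-MisRAT. All criteria names, weights, and thresholds here are
# assumptions, not the tool's actual published values.

# Example risk criteria with illustrative weights.
CRITERIA_WEIGHTS = {
    "false_safety_framing": 3.0,
    "discourages_proven_treatment": 3.0,
    "selective_framing": 2.0,
    "unqualified_health_claim": 1.0,
}

def risk_score(flags: set) -> float:
    """Sum the weights of the criteria a piece of content triggers."""
    return sum(CRITERIA_WEIGHTS.get(f, 0.0) for f in flags)

def risk_band(score: float) -> str:
    """Map a weighted score to a green/amber/red band (thresholds assumed)."""
    if score >= 5.0:
        return "red"
    if score >= 2.0:
        return "amber"
    return "green"

# The vitamin A vs. MMR claim above would trigger multiple high-weight criteria.
claim_flags = {"false_safety_framing", "discourages_proven_treatment"}
print(risk_band(risk_score(claim_flags)))  # red
```

A claim that trips several high-weight criteria at once lands in the red band, which mirrors how the tool assigns its critical-risk classification to the vaccine example.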

The tool’s reliability has been validated against the judgments of nearly 60 specialists in dietetics, nutrition, and public health, according to the study published in the journal Scientific Reports.

“When AI chatbots speak confidently, users may assume their advice is safe,” Ruani noted. “If we can properly measure how misleading a piece of advice is and how much harm it may pose, we can build stronger safeguards into models and AI agents before deployment rather than reacting after harm occurs.”

The implications extend beyond just identifying problematic content. Study co-author Professor Michael Reiss from UCL emphasized the educational value of the tool: “By spelling out the typical patterns that distort diet, nutrition or supplement information, the tool’s risk assessment criteria can be taught and applied in education and professional training. This will help learners understand not just whether something is wrong, but how and why it can skew judgment, equipping them to recognize and challenge it.”

The researchers hope their innovation will assist policymakers, digital platform operators, and regulators in implementing more effective safeguards against health misinformation. As social media algorithms continue to amplify sensational health claims and AI systems sometimes generate inaccurate medical advice, tools like Diet-MisRAT could play a crucial role in protecting public health.

The timing is particularly significant as health authorities worldwide grapple with the proliferation of misleading information across rapidly evolving digital environments where traditional fact-checking mechanisms often prove inadequate.


