STUDY FINDS AI CHATBOTS SPREAD INACCURATE MEDICAL INFORMATION
A new analysis has raised serious concerns about the reliability of AI chatbots when answering health and medicine questions, with researchers discovering that a significant portion of medical information provided by these systems is inaccurate and incomplete.
The study, published in BMJ Open, evaluated five popular AI chatbots and found that nearly half of their responses contained problematic elements. These included presenting false equivalences between scientific and non-scientific claims, directing users toward ineffective treatments, and offering guidance that could be harmful if followed without professional medical consultation.
Researchers from The Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center led the investigation, examining how the chatbots handled common health queries across multiple medical domains.
“As generative AI chatbots rapidly gain adoption across research, marketing, and medicine—with many people now using them as alternative search engines—their continued deployment without proper public education and oversight risks amplifying dangerous misinformation,” the researchers warned.
The study tested five widely used AI systems: Google’s Gemini, High-Flyer’s DeepSeek, Meta’s Meta AI, OpenAI’s ChatGPT, and xAI’s Grok. Each chatbot was presented with identical sets of questions spanning five critical health categories: cancer, vaccines, stem cells, nutrition, and athletic performance.
Researchers crafted their prompts to resemble the typical information-seeking queries that everyday users might pose about health and medical topics. The questions incorporated language commonly found both in online misinformation and in academic discourse, and were deliberately designed to probe the systems’ vulnerabilities by “straining” them toward misinformation or inappropriate advice.
Using pre-defined, objective criteria, the researchers classified each response as non-problematic, somewhat problematic, or highly problematic. They evaluated answers for accuracy and completeness of information, with special attention to instances where chatbots presented “false balance”, giving equal weight to scientifically validated information and unproven claims regardless of the strength of the evidence.
This research comes at a critical moment, as AI systems are increasingly being integrated into healthcare settings. Medical professionals have expressed growing concern about patients arriving at appointments with incorrect information obtained from AI sources. Unlike traditional search engines, which primarily direct users to existing websites, generative AI synthesizes and presents information in ways that may obscure the original sources or the scientific consensus.
The healthcare sector has been particularly vulnerable to misinformation in recent years, especially during the COVID-19 pandemic, when conspiracy theories about vaccines and treatments spread rapidly through social media. AI chatbots potentially add another layer of complexity to this problem by delivering seemingly authoritative answers that may lack proper medical foundation.
Industry observers note that AI companies have been rushing to implement safeguards in their systems, including disclaimers about medical advice and improved content filtering. However, this study suggests current measures remain insufficient to prevent the dissemination of problematic health information.
The findings highlight the need for stronger guardrails around AI-generated health content, improved transparency about the limitations of these systems, and better education for the public about when to seek professional medical advice rather than relying on AI responses.
Medical experts emphasize that while AI tools can serve as helpful starting points for health information, they should not replace consultation with qualified healthcare providers, particularly for serious medical conditions or when making treatment decisions.
As AI systems continue to evolve and improve, ongoing independent evaluation will be essential to ensure that the information they provide meets appropriate standards for medical accuracy and safety.
12 Comments
This study highlights the need for stronger regulations and oversight around the use of AI in healthcare. The public’s wellbeing should be the top priority.
It’s good to see this issue being highlighted. We need stronger safeguards and transparency around the medical claims made by AI systems. Public trust is at stake.
Agreed. Healthcare is too important to leave to unregulated AI. Proper oversight and accountability are critical.
This study underscores the limitations of current AI technology when it comes to complex medical topics. More research is needed to improve the reliability and safety of these systems.
Absolutely. Healthcare providers and regulatory bodies should be closely involved in the development and deployment of AI chatbots for medical use.
This is an important wake-up call. We need to be vigilant about the limitations of AI, especially when it comes to sensitive domains like healthcare.
The inaccuracies found in this study are troubling. AI chatbots should not be treated as a substitute for professional medical consultation and diagnosis.
You’re right. These systems need much more robust testing and validation before being used for sensitive health information.
It’s worrying that nearly half of the medical advice from these chatbots contained problematic elements. Clearly more work is needed to improve their reliability.
While AI has many potential benefits, this research shows we can’t blindly trust these systems with our health. Robust safeguards are essential.
This is concerning. AI chatbots need to be more rigorously tested before being used for medical advice. Providing inaccurate health information could have serious consequences for users.
While AI can be a useful tool, relying on it for medical advice without proper safeguards is risky. Patients deserve accurate information from qualified professionals, not chatbots.