Medical researchers are raising serious concerns about the reliability of health information provided by ChatGPT and similar AI tools, according to recent studies examining the accuracy of AI-generated medical advice.
A comprehensive study published in the journal Urology evaluated ChatGPT’s responses to 13 guideline-based questions about urological conditions and treatments. The researchers found that only 60 percent of the AI’s responses were correct, meaning roughly two in five contained errors or inaccuracies that could mislead patients seeking medical information.
The authors highlighted several troubling patterns in how ChatGPT handles medical queries. They documented instances where the AI system “misinterprets clinical care guidelines, dismisses important contextual information, conceals its sources, and provides inappropriate references,” raising concerns about the potential for patient harm when relying on these platforms for medical guidance.
This finding aligns with another peer-reviewed study, indexed by the National Library of Medicine, which investigated AI’s ability to provide accurate citations for its medical claims. That research revealed that between 50 and 90 percent of AI responses were “not fully supported, and sometimes contradicted, by the sources they cite.” This suggests the AI’s confident presentation of information creates an illusion of reliability even when the content is incorrect.
The deceptive nature of these errors presents a particular challenge for users, according to Alex Ruani, a health misinformation researcher. “Chatbots give the answers we seek but not always the ones we need,” Ruani writes, pointing to the subtle distinction between information that feels satisfying and information that is medically sound.
Medical professionals are particularly concerned about what they describe as a “false sense of security” that develops during AI interactions. Because ChatGPT provides many accurate responses alongside inaccurate ones, users typically develop increasing trust in the system over time. This gradual trust-building makes people more likely to accept incorrect information without verification when it eventually appears.
The timing of these findings coincides with explosive growth in the use of generative AI tools for health information. A recent survey by the Pew Research Center found that nearly 30 percent of Americans have used AI chatbots to search for health information in the past year, with higher usage rates among younger demographics.
Healthcare organizations are now grappling with how to respond to this trend. The American Medical Association recently released guidance urging physicians to discuss AI use with patients and to encourage critical evaluation of AI-generated health advice. Meanwhile, several major hospital systems have begun developing educational materials to help patients distinguish between reliable and unreliable AI health information.
These studies arrive amid broader regulatory discussions about AI in healthcare. The FDA has announced plans to develop a framework for evaluating AI-driven health applications, while the European Medicines Agency is working on similar guidelines expected to be released later this year.
For patients, experts recommend consulting with qualified healthcare providers before acting on any medical information obtained from AI systems, regardless of how convincing it may appear. They also suggest cross-checking information against established medical resources such as the National Institutes of Health, Mayo Clinic, or Cleveland Clinic websites, which maintain rigorous editorial standards for their health content.