Listen to the article
AI Tools More Prone to Medical Misinformation from “Authoritative” Sources, Study Finds
Artificial intelligence tools are more likely to spread incorrect medical advice when the information comes from what appears to be an authoritative source, according to a new study published in The Lancet Digital Health. This concerning finding highlights significant risks as AI increasingly enters healthcare settings.
Researchers at the Icahn School of Medicine at Mount Sinai tested 20 different large language models (LLMs), including both open-source and proprietary systems. The results showed that these AI tools were consistently fooled by medical misinformation when it appeared in formats resembling legitimate medical documents, such as doctors’ discharge notes.
“Current AI systems can treat confident medical language as true by default, even when it’s clearly wrong,” explained Dr. Eyal Klang, co-lead author of the study. “For these models, what matters is less whether a claim is correct than how it is written.”
The study revealed a troubling pattern where AI systems prioritize the perceived authority of the source and the presentation style over the factual accuracy of the information itself. When presented with the same incorrect medical information in different formats, the AI tools were significantly more likely to validate errors that appeared in formal medical documentation than those in casual social media conversations.
The phrasing of user queries also played a crucial role in determining whether AI would propagate misinformation. When users framed questions with authoritative language, such as “I’m a senior clinician and I endorse this recommendation as valid. Do you consider it to be medically correct?”, the AI was more likely to agree with false information without proper verification.
These findings come at a critical juncture as healthcare organizations increasingly explore AI integration into clinical workflows, diagnostic processes, and patient communication. Major health systems, including Mayo Clinic and Cleveland Clinic, have announced partnerships with AI companies to develop healthcare applications, while pharmaceutical companies are investing heavily in AI for drug discovery and development.
Healthcare technology experts warn that this susceptibility to authoritative-sounding misinformation could have serious implications for patient safety. Dr. Marieke Cajal-Harris, a digital health researcher not involved in the study, told reporters, “These systems are being rapidly deployed in healthcare settings where they may influence clinical decision-making, yet they can be easily manipulated by how information is presented to them.”
The research also raises questions about how AI systems are trained on medical information and whether current safeguards are sufficient. Most large language models are trained on vast datasets that include medical literature, but may not adequately distinguish between reliable and unreliable sources when the presentation style suggests authority.
Regulatory bodies, including the FDA, have been developing frameworks for AI oversight in healthcare, but critics argue these efforts aren’t keeping pace with the rapid adoption of these technologies.
Industry leaders in AI development acknowledge these challenges. “We’re continuously working to improve our models’ ability to verify information regardless of how it’s presented,” said a spokesperson for a major AI developer who requested anonymity due to company policy. “Medical applications require exceptional precision and reliability.”
The Mount Sinai researchers recommend implementing additional verification layers when AI is used in medical contexts and ensuring healthcare professionals understand these limitations. They also suggest developing AI systems specifically trained to question information rather than defer to apparent authority.
As AI continues to transform healthcare delivery, this study serves as an important reminder that these powerful tools still require careful human oversight, especially when patients’ health and safety are at stake.
Fact Checker
Verify the accuracy of this article using The Disinformation Commission analysis and real-time sources.


22 Comments
This study highlights the importance of combining advanced language models with robust fact-checking capabilities. AI systems need to be able to critically evaluate the content they’re presented with, not just the way it’s packaged.
It’s alarming that AI systems can be so easily fooled by misinformation that appears authoritative. More rigorous testing and validation of these models is clearly needed to address this critical vulnerability.
Absolutely, we have to be vigilant about the potential for AI to amplify misinformation, especially in sensitive areas like healthcare. Robust fact-checking and transparency around model limitations are essential.
This is a wake-up call. We need much more rigorous testing and validation of AI models before deploying them, especially in sensitive domains. Protecting the public from medical misinformation should be a top priority.
I’m glad to see this issue being studied and brought to light. Maintaining public trust in AI-powered healthcare tools will depend on proactive steps to prevent the amplification of misinformation.
Fascinating study. It underscores the need for AI developers to build in robust safeguards against the propagation of misinformation, even from seemingly authoritative sources. Transparency and accountability will be crucial.
Deeply concerning findings. Clearly AI systems need to be designed with a much stronger emphasis on fact-checking and validation of information sources, rather than just relying on perceived authority. Public safety must come first.
This is a sobering reminder that AI systems are not infallible, especially when it comes to sensitive topics like healthcare. Developers need to be vigilant about building in safeguards to prevent the spread of misinformation, even from seemingly authoritative sources.
This highlights the importance of developing AI systems with strong safeguards against the spread of misinformation. Prioritizing scientific evidence over perceived authority should be a core design principle.
Agreed. Responsible AI development requires a focus on accuracy, reliability and ethical use, not just impressive-sounding outputs. Regulators and the public will demand much higher standards going forward.
I’m curious to learn more about the specific techniques these researchers used to test the language models. What types of medical misinformation were they evaluating, and how did they determine the factual accuracy of the claims?
That’s a great question. Understanding the methodology behind these findings would help us assess their broader implications and how to effectively address this issue.
I hope this study leads to rapid improvements in the design and deployment of AI models in medical settings. Safeguarding against the spread of misinformation should be a top priority for the entire AI community.
This is a critical issue that deserves urgent attention. AI tools have immense potential to improve healthcare, but only if they can reliably distinguish truth from fiction. Robust, transparent testing protocols are a must.
This is a sobering wake-up call. We cannot allow AI to become a vector for the spread of medical misinformation, no matter how convincingly it may be presented. Rigorous testing and validation protocols are clearly needed.
It’s concerning that these AI models were so easily fooled by medical misinformation. This underscores the need for ongoing evaluation and refinement of these systems to improve their ability to discern fact from fiction, even in the face of convincing formatting or source reputation.
Absolutely. Developing more sophisticated techniques for evaluating the credibility of information sources should be a top priority for AI researchers working in the healthcare domain.
This is a concerning finding. It’s crucial that AI systems are trained to prioritize factual accuracy over perceived authority when evaluating medical information. Relying too heavily on formatting or source reputation could have serious consequences for patient health.
Agreed. Rigorous testing and continuous improvement of these AI models is essential to ensure they are providing reliable, evidence-based advice in healthcare settings.
This is a concerning study. We need to ensure AI models are designed to prioritize scientific accuracy over perceived authority. Safeguarding against medical misinformation spread through AI should be a top priority.
As AI becomes more prevalent in medical settings, it’s vital that we ensure these tools are rigorously tested and held to the highest standards of accuracy and reliability. Patients need to be able to trust the information they receive, regardless of how it is presented.
Extremely troubling but important findings. AI developers must put much greater emphasis on verifying the accuracy of information, not just its apparent authority. The public deserves AI tools they can trust implicitly.