Google’s AI Creates False Suspension Claims About UK Doctor
A UK physician and popular medical YouTuber has become the latest victim of AI hallucination, after Google’s artificial intelligence system fabricated serious professional misconduct allegations against him that could damage his medical career.
Dr. Ed Hope, who runs the “Dr. Hope’s Sick Notes” YouTube channel with nearly 500,000 subscribers, discovered that Google’s AI search feature was generating detailed false claims stating he had been suspended by the General Medical Council in mid-2025 for unethical practices.
“This is just about the most serious allegation you can get as a doctor. You basically aren’t fit to practice medicine,” Hope said in a video addressing the issue. He emphasized that throughout his decade-long medical career, he has never faced any investigations, complaints, or sanctions.
The AI-generated response claimed that Hope was “spearheading a company that provided sick notes (fit notes), essentially selling them rather than providing them as part of proper patient care,” and had been “exploiting patients for personal gain.” The response even fabricated specific details, including a supposed June 2025 suspension date—a date that hasn’t even occurred yet.
Hope believes Google’s AI erroneously connected several unrelated pieces of information to create this false narrative. His channel name “Sick Notes,” his recent absence from YouTube, and a separate scandal involving another doctor named Asif Munaf may have been conflated by the algorithm.
What makes this case particularly concerning is that the AI presented these allegations as established facts rather than speculation. The response included detailed sections such as “What Happened” and “The Controversy,” lending the fabricated claims an air of authority and credibility for users who might not question their accuracy.
Legal experts suggest this incident raises important questions about potential defamation liability. While Section 230 of the Communications Decency Act typically shields platforms from responsibility for third-party content, some argue that AI-generated outputs may not qualify for this protection since they aren’t truly third-party speech but rather content created by the platform itself.
The incident highlights growing concerns about AI hallucinations—instances where AI systems confidently generate false information—particularly when they target private individuals and could cause real-world harm. Unlike traditional search results that link to sources, AI overviews present information directly with an authoritative tone, making it difficult for users to verify claims or understand their origins.
After Hope brought attention to the issue, Google appears to have modified its response. Current searches for information about Dr. Hope produce a much vaguer answer that avoids making specific claims about professional misconduct.
This case comes amid increasing scrutiny of how tech companies deploy generative AI in consumer-facing products. The ability of these systems to fabricate convincing but entirely false narratives presents significant challenges for reputation management, especially for professionals whose careers depend on public trust.
For medical professionals like Hope, such false claims could be particularly damaging, potentially affecting patient trust and professional standing in a field where reputation is paramount. The incident underscores the need for more robust safeguards and accountability mechanisms as AI becomes more deeply integrated into information discovery systems.
Neither Google nor the General Medical Council has issued a formal statement addressing this specific incident, but the case will likely inform ongoing discussions about responsible AI deployment and the potential need for new regulatory frameworks governing AI-generated content.
8 Comments
Generating false claims of professional misconduct through an AI system is extremely troubling. This kind of disinformation could seriously damage a doctor’s reputation and career. Developers need to prioritize safety and accountability for these AI models.
This is deeply concerning. Fabricating claims of professional misconduct against doctors could seriously damage their reputations and careers. AI systems must be more carefully designed to prevent such harmful misinformation.
I’m concerned to see an AI system fabricating such serious allegations against a doctor. Generating false claims about professional misconduct is unacceptable and could have severe repercussions. Significant improvements are needed to ensure AI does not spread disinformation.
I agree completely. AI systems must be held to high standards of accuracy and integrity, especially when it comes to sensitive information about individuals and their professions.
Wow, this is a pretty egregious example of an AI system hallucinating false information. Generating fake misconduct allegations against a medical professional is extremely problematic and could ruin someone’s career. AI developers need to address these issues.
This is a concerning example of an AI system producing harmful misinformation. Fabricating allegations of professional misconduct against a doctor is completely unacceptable. Stricter oversight and validation processes are clearly needed to prevent such damaging AI-driven falsehoods.
It’s alarming that Google’s AI is generating false claims about a doctor’s credentials and ethics. This kind of AI-driven disinformation could have severe consequences for individuals. More oversight and accountability is needed.
This is a troubling development. Spreading false claims about a doctor’s professional conduct through an AI system is highly irresponsible and potentially very damaging. Rigorous testing and validation of AI models is clearly required to prevent such harmful misinformation.