AI Deepfakes of Medical Professionals Spread Health Misinformation Across Social Media

A troubling trend has emerged across social media platforms: AI-generated deepfakes impersonating credible medical professionals to promote unverified health claims and supplements. According to recent investigations by the UK fact-checking charity Full Fact, these sophisticated forgeries are becoming increasingly difficult to identify and pose a significant public health risk.

The Guardian reported on December 5, 2025, that numerous videos featuring AI-manipulated images and voices of real doctors and academics have proliferated across TikTok, Facebook, X, and YouTube. These videos typically present “new” health conditions or miracle cures, ultimately directing viewers to purchase specific supplements through affiliate marketing links.

One particular network of accounts investigated by Full Fact took genuine footage of respected professors and doctors, transforming their presentations into endorsements for products they never actually supported. In one case, a single manipulated video amassed over 365,000 views before TikTok permanently removed the account. The investigation revealed these deepfakes consistently directed viewers to the same supplement seller through various accounts.

“These deepfakes represent a public health threat,” said Dr. Sean Mackey of Stanford, who discovered his own likeness being used without consent in these deceptive videos. The British Medical Journal (BMJ) had previously warned about this emerging issue in 2024, noting that deepfake doctor videos were already successfully deceiving viewers and steering them toward expensive, unproven remedies.

The effectiveness of these deepfakes stems from their exploitation of human psychology. People inherently trust familiar faces and established experts, particularly when seeking relief for health concerns. The videos are strategically designed to target vulnerable individuals searching for solutions to conditions like menopause symptoms, chronic pain, or sleep disorders.

Duncan Selbie, former head of Public Health England, expressed alarm after seeing a deepfake of himself: “People who know me could have been taken in by it. It wasn’t funny in the sense that people pay attention to these things.” Beyond driving supplement sales, these manipulated videos erode public trust in legitimate medical advice and institutions.

The technical sophistication behind these forgeries has reached concerning levels. The videos often feature nearly perfect lip synchronization, natural voice replication, and professional-looking production values that make detection challenging for average viewers. Only subtle inconsistencies in mouth movements or voice patterns might reveal their fraudulent nature.

Platform responses to the problem have been inconsistent. TikTok eventually removed identified deepfakes but admitted to delayed moderation in some cases. YouTube has implemented labels for “altered or synthetic content,” while Meta says it removes harmful misinformation, though Full Fact reported mixed enforcement. Because views accumulate fastest in the critical hours immediately after posting, the damage is often done before content moderation takes effect.

Health misinformation experts recommend several strategies for identifying potential deepfakes. Viewers should verify claims through independent medical sources, search for the exact quotes outside the platform, and be particularly cautious of content that promotes “miracle cures” or newly named conditions not recognized in mainstream medicine. Checking the account history for inconsistencies in posting patterns or sudden changes in subject matter can also reveal suspicious activity.

The affiliate marketing structure behind these videos creates powerful financial incentives for their continued production. Creators earn commissions when viewers purchase products through their links, whether or not the supplement seller acknowledges any connection to the content. This revenue model helps explain why identical products frequently appear across different accounts posting similar deepfaked clips.

As AI technology continues to advance, the barrier to creating convincing deepfakes has lowered dramatically. What once required sophisticated equipment and technical expertise can now be accomplished with widely available software tools, making the volume of potential deepfakes difficult to combat through traditional content moderation alone.

Health advocates stress the importance of consulting qualified medical professionals before trying supplements or treatments promoted through social media, particularly those making extraordinary claims. The most effective defense remains critical media literacy and a healthy skepticism toward content that triggers emotional responses or promises quick fixes to complex health issues.


13 Comments

  1. James I. Taylor

    This is a worrying development that highlights the dark side of AI technology. While the potential benefits of AI are promising, we must remain vigilant against its misuse to spread misinformation and exploit vulnerable people.

    • Patricia Thompson

      Absolutely. The rapid advancement of AI also means rapid advancement of the tools to abuse it. Policymakers and tech companies have a responsibility to stay ahead of these threats.

  2. Ava G. Thompson

    While the potential of AI is exciting, this incident shows the urgent need for robust safeguards and oversight. The integrity of public health information must be protected, even as technology continues to advance.

    • Well said. As AI capabilities grow, we must ensure they are developed and deployed responsibly, with strong ethical guidelines and regulatory frameworks in place.

  3. Lucas Rodriguez

    I’m troubled by how quickly these manipulated videos can spread and gain traction, especially on social media. Robust fact-checking and content moderation are crucial to combating this threat to public trust and well-being.

    • Agreed. The speed and scale at which misinformation can now spread online is alarming. Stronger regulations and enforcement are needed to hold platforms accountable and protect vulnerable consumers.

  4. Impersonating medical professionals to peddle unproven supplements is a despicable practice. I hope the relevant authorities take swift action against those responsible for these deceptive and potentially harmful campaigns.

  5. Patricia Thompson

    This is a concerning development that highlights the need for greater transparency and accountability around AI-generated content. The public deserves access to truthful, verified information, especially on sensitive topics like healthcare.

  6. As an avid consumer of online health information, I’m alarmed by the idea that I could be inadvertently exposed to dangerous misinformation from AI-generated deepfakes. This underscores the need for greater digital literacy and critical thinking skills.

  7. Deepfakes are a serious threat to public health and safety. I hope regulators take strong action to hold social media platforms accountable for the proliferation of this type of manipulated content on their sites.

    • Agreed. Platforms can no longer hide behind the excuse of ‘free speech’ when their algorithms are actively amplifying harmful misinformation. Decisive steps are needed to address this crisis.

  8. Liam B. Jackson

    This is a concerning trend that highlights the need for greater regulation and oversight of AI-generated content online. The public deserves access to accurate, science-based health information from verified sources, not manipulated videos that could endanger lives.

    • Absolutely. Deepfakes pose serious risks, especially when it comes to sensitive topics like public health. Platforms need to invest more in detection and takedown of this kind of misinformation.
