AI Deepfakes of Medical Professionals Used to Push Fake Health Products
AI-generated deepfake videos featuring doctors and academics are being weaponized to promote fraudulent health claims and supplement products, according to an investigation by British charity Full Fact.
The investigation uncovered multiple social media accounts using AI tools to create videos of health experts apparently endorsing supplements from US company Wellness Nest. These digitally manipulated videos have reached hundreds of thousands of viewers, raising serious concerns about health misinformation.
Professor David Taylor-Robinson, a children’s public health doctor from the University of Liverpool, was among those impersonated in these deceptive videos. In one instance, footage of him speaking at a legitimate Public Health England conference was altered to show him discussing “thermometer leg” – a fabricated menopausal symptom where women supposedly extend one leg beyond blankets when overheated at night.
“One of my friends said his wife had seen it and was almost taken in by it, until their daughter said it’s obviously been faked,” Taylor-Robinson told Full Fact. “People who know me could have been taken in by it. That is concerning.”
The manipulated content showed Taylor-Robinson supposedly recommending a “natural probiotic” containing “ten science-backed plant extracts” including turmeric, black cohosh, and moringa, specifically formulated for menopausal symptoms. The fake testimonial claimed women reported “deeper sleep, fewer hot flushes, and brighter mornings within weeks.”
Before its eventual removal, the video amassed more than 365,000 views, nearly 7,700 likes, and was bookmarked almost 2,900 times – demonstrating the potentially wide reach of such deepfake content.
The University of Liverpool’s communications team reported the videos to TikTok, but the platform initially claimed no community guidelines had been violated. Only after Taylor-Robinson and his family filed additional reports did TikTok acknowledge the content breached its policies. Even then, the platform at first only restricted the videos’ visibility rather than removing them completely.
TikTok later apologized for what it described as a “moderating error” and removed both the posts and the account sharing them. The company acknowledged it had made a mistake in not deleting the content immediately.
The account in question, operating under the handle @better_healthy_life, also posted fabricated videos featuring Russian economist Natalia Zubarevich, British cardiologist Dr. Aseem Malhotra, and Duncan Selbie, the former chief executive of Public Health England.
Selbie, whose likeness was manipulated to discuss menopause symptoms and remedies, expressed his concern to Full Fact, stating, “It wasn’t funny in the sense that people pay attention to these things.”
These deceptive videos typically concluded by directing viewers to purchase supplements from Wellness Nest, including probiotics and Himalayan shilajit that weren’t actually listed on the company’s official website. Earlier this year, similar deepfake videos impersonating the late TV doctor Michael Mosley and Dr. Idrees Mughal were discovered promoting products allegedly from the same US-based company.
When contacted about the videos, Wellness Nest denied any involvement, telling Full Fact that they had “never used AI-generated content” and that the videos were “100% unaffiliated” with their business.
This growing trend of using AI to create convincing but fraudulent medical endorsements highlights the increasing challenge social media platforms face in identifying and removing sophisticated deepfakes. It also underscores the potential dangers for consumers who may make health decisions based on seemingly credible but entirely fabricated medical advice.
As AI technology becomes more accessible and deepfakes more convincing, the incident raises urgent questions about how to protect public health information integrity and prevent misuse of medical professionals’ identities in the digital age.