AI-Generated Deepfakes Promote Unverified Health Products on Social Media
Hundreds of AI-generated deepfake videos that manipulate footage of respected medical professionals to make them appear to endorse supplements and spread health misinformation are circulating on social media platforms including TikTok, Facebook, YouTube, and X, according to an investigation by the factchecking organization Full Fact.
The investigation revealed that these deepfakes use real footage of health experts taken from legitimate sources, but artificially alter their words and images to make it appear as if they’re recommending products from Wellness Nest, a US-based supplements company.
“This is certainly a sinister and worrying new tactic,” said Leo Benedictus, the factchecker who conducted the investigation. He noted that the creators deploy AI so that “someone well-respected or with a big audience appears to be endorsing these supplements to treat a range of ailments.”
Among those whose image has been manipulated is Professor David Taylor-Robinson, an expert in health inequalities at Liverpool University. In August, he discovered 14 doctored videos on TikTok showing him apparently recommending products with unproven benefits. Although he is a specialist in children’s health, one deepfake featured him discussing “thermometer leg,” an alleged menopause side-effect, and directing women to purchase Wellness Nest’s “natural probiotic.”
“It was really confusing to begin with – all quite surreal,” Taylor-Robinson said. “My kids thought it was hilarious. I didn’t feel desperately violated, but I did become more and more irritated at the idea of people selling products off the back of my work and the health misinformation involved.”
The footage used to create these deepfakes came from a Public Health England conference in 2017 and a parliamentary hearing on child poverty where Taylor-Robinson testified in May. In one particularly misleading video, he was depicted making misogynistic comments while discussing menopause.
Despite his complaints, TikTok took six weeks to remove the videos. “Initially, they said some of the videos violated their guidelines but some were fine. That was absurd – and weird – because I was in all of them and they were all deepfakes,” he said.
Full Fact’s investigation also uncovered deepfakes of Duncan Selbie, former chief executive of Public Health England, who called one deepfake about “thermometer leg” using his likeness “an amazing imitation.” Selbie noted it was “a complete fake from beginning to end” and expressed concern that “people pay attention to these things.”
The investigation extended to other platforms where similar deepfakes were found linked to Wellness Nest or its British counterpart, Wellness Nest UK. These included apparent deepfakes of high-profile medical personalities like Professor Tim Spector and the late Dr. Michael Mosley.
When contacted, Wellness Nest claimed the deepfake videos were “100% unaffiliated” with its business. The company stated it had “never used AI-generated content” but “cannot control or monitor affiliates around the world” – suggesting the possibility that third-party marketers might be creating these videos to earn affiliate commissions.
The revelations have prompted calls for stronger regulation and faster action from social media companies. Liberal Democrat health spokesperson Helen Morgan compared the situation to fraudulent impersonation of medical professionals, which would typically face criminal prosecution.
“If these were individuals fraudulently pretending to be doctors they would face criminal prosecution. Why is the digital equivalent being tolerated?” Morgan asked. She called for “AI deepfakes posing as medical professionals to be stamped out,” with clinically approved tools promoted instead.
A TikTok spokesperson stated they had removed content related to Taylor-Robinson and Selbie for breaking rules against harmful misinformation and impersonation, acknowledging that “harmfully misleading AI-generated content is an industry-wide challenge.” The platform claims to be investing in new detection and removal methods for content violating community guidelines.
As AI technology becomes more sophisticated, the challenge of regulating deceptive content grows, potentially putting vulnerable consumers at risk of making healthcare decisions based on fabricated endorsements from trusted authorities.
12 Comments
This is a concerning development. Misusing the authority and credibility of respected medical experts through AI-generated deepfakes to promote dubious health products is unethical and dangerous. Strong action is needed to address this issue.
Absolutely. Platforms and authorities must take firm steps to identify and remove these manipulated videos before they can cause real harm to public health.
Deepfakes of doctors spreading health misinformation are extremely concerning. It’s a dangerous new tactic to mislead people using AI-generated fakes. Fact-checking is crucial to combat the spread of this kind of disinformation.
I agree, the use of AI to create these deceptive videos is very worrying. Platforms need to improve moderation and detection to stop the spread of this kind of manipulated content.
This is a worrying new tactic in the ongoing battle against misinformation. The use of AI to create deepfakes of respected health professionals is a particularly insidious form of deception. Platforms and authorities must act quickly to address this threat.
Agreed. The ability of AI to manipulate video and audio in this way is a significant challenge for content moderation. Developing effective detection methods and policies to counter this type of disinformation should be a top priority.
The misuse of AI to deceive the public about medical advice is extremely concerning. These deepfakes undermine trust in legitimate health experts and put vulnerable people at risk. Decisive action is needed to address this emerging threat.
I agree. The spread of health misinformation through manipulated videos is a serious problem that requires a multi-pronged response from platforms, regulators, and the medical community.
I’m curious to know more about the specific tactics used to create these deepfake videos. How advanced is the AI technology being employed, and what can be done to improve detection and prevention?
That’s a great question. Understanding the technical capabilities behind these deepfakes is crucial to developing effective countermeasures. Increased transparency and collaboration between platforms, researchers, and fact-checkers will be key.
As someone with a background in medical science, I find this development deeply troubling. The potential impact on public health from these AI-generated deepfakes is alarming and demands urgent attention.
Thank you for sharing your perspective as an expert in this field. Your insight is valuable in understanding the gravity of this issue and the need for robust solutions to protect the public.