Canadian researchers are sounding the alarm about a dangerous new form of medical misinformation: AI-generated videos that target seniors with fake health advice. A recent investigation by McGill University has uncovered a network of YouTube channels using artificial intelligence to create convincing but entirely fabricated medical guidance.
Science communicator Jonathan Jarry, who led the research, initially focused on a channel called “Senior Secrets” that had amassed over 300,000 subscribers and 17 million views before YouTube removed it following his investigation. The channel regularly posted videos with sensational health claims, including tips on “how to live another 40 years” and exercises that supposedly “double leg strength.”
Jarry’s analysis revealed numerous red flags in these productions. The channel’s most popular video, viewed by 3.5 million people, claimed to feature recommendations from a “top heart surgeon” who advised viewers to skip walking in favor of five alternative exercises. The content relied heavily on stock footage and simplistic cartoons, with narration that sounded robotic and unnatural.
More concerning was the outright fabrication of scientific evidence. The video referenced a “groundbreaking” 2024 study from Copenhagen that doesn’t exist. And while the description included a citation to a legitimate article from the Scandinavian Journal of Medicine & Science in Sports, the video itself never cited that research.
“These videos are deliberately designed to appear authoritative while spreading completely unfounded medical advice,” Jarry wrote in his report, which was originally published by McGill in December and recently highlighted in a condensed version by TVO.
The problem extends far beyond a single channel. Jarry documented dozens of similar operations, including “Senior Book,” “Senior Wellness,” “Dr. Reeves,” “Ageless Vitality,” “DR. NERITA,” and “WISE ADVICE.” A deeper analysis of top videos from four of these channels revealed that out of 65 references to supposed medical studies or institutions, only five were real – and even those were improperly attributed.
When attempting to contact these channels, Jarry received responses in Vietnamese. His geolocation research suggests many are operated by content farms likely based in Vietnam, despite claiming to be located in various U.S. cities – some of which no longer exist.
What makes this trend particularly dangerous is its targeting of seniors, who may be less familiar with identifying AI-generated content. Age-related declines in vision, hearing, and cognitive abilities can make it harder to notice the telltale signs of synthetic media. Unlike obviously fake AI movie trailers, these videos address serious health matters, potentially leading viewers to make harmful medical decisions.
Jarry criticized YouTube’s response to the problem, noting that the platform made only “minor” policy updates last year targeting “mass-produced and repetitious content.” According to his research, AI-generated content remains permitted so long as it is not judged mass-produced and repetitious — a threshold these channels evidently avoid. Meanwhile, the deceptive videos continue generating ad revenue based on their viewership.
The research highlights a growing concern in the medical community about how AI tools are being weaponized to spread health misinformation at scale. Unlike traditional text-based misinformation, these videos can appear more credible by incorporating visual elements and seemingly authoritative narration.
For vulnerable populations seeking health advice online, Jarry recommends heightened vigilance. “Do not trust random videos for health information,” he advises. “Make sure the host is human and credentialed. Look up their medical license on the website of their medical college to see if they exist.”
He suggests putting more trust in face-to-face interactions with healthcare providers than in online content. Developing the reflex to question whether a video might be AI-generated is also crucial, especially for older adults who may be less familiar with these technologies.
As AI tools become more sophisticated and accessible, the challenge of distinguishing legitimate health information from dangerous fabrications will likely intensify, requiring greater awareness and potentially stronger regulatory approaches from platforms and governments alike.
9 Comments
While AI and automation can be powerful tools, it’s clear they can also be misused to spread dangerous falsehoods. I hope this research leads to meaningful action to address these emerging threats.
This is a really troubling development. Medical misinformation can have serious consequences, especially for vulnerable populations. We need stronger regulations and enforcement to protect people online.
This is really concerning. Using AI to spread medical misinformation is downright dangerous, especially when targeting vulnerable seniors. Fact-checking and media literacy are so important in the digital age.
Absolutely. It’s crucial that we stay vigilant and call out these kinds of manipulative tactics. Quality healthcare information should come from trusted, reputable sources.
This is a sobering reminder of the potential downsides of advanced AI technology. While the benefits are clear, bad actors will always try to misuse it. Vigorous fact-checking and content moderation are essential safeguards.
It’s disheartening to see AI exploited in this way. Seniors deserve access to accurate, trustworthy health information, not fabricated advice that could put their wellbeing at risk. More oversight is clearly needed.
I’m not surprised that AI-generated content is being used to spread misinformation. The technology can be easily abused, and it’s up to platforms and researchers to stay ahead of these threats.
Agreed. More robust safeguards and content moderation are needed to prevent these kinds of harmful videos from gaining traction, especially on platforms popular with seniors.
Kudos to the researchers for uncovering this issue. Increasing public awareness and media literacy around AI-generated content is crucial to combat the spread of misinformation. We can’t let these tactics continue unchecked.