Rise in AI Deepfakes Fuels Health Misinformation Crisis

False and misleading health information is proliferating online at an alarming rate, powered by rapid advancements in deepfake technology and generative artificial intelligence. These sophisticated tools allow scammers to manipulate videos, photos, and audio of respected health professionals, making it appear as though they are endorsing fake products or requesting sensitive health information from unsuspecting Australians.

The trend comes at a time when Australians increasingly rely on digital health resources. In 2021, three-quarters of Australian adults reported accessing health services online, including telehealth consultations. More concerning still, a 2023 study found that 82% of Australian parents consulted social media on health-related issues alongside traditional doctor visits.

“The worldwide growth in health-related misinformation and disinformation is exponential,” explains Dr. Samantha Richards, digital health researcher at the University of Melbourne. “From Medicare phishing scams to counterfeit pharmaceuticals, Australians risk both financial loss and potential health damage by following false advice.”

Deepfakes represent a particularly insidious threat in this landscape. Unlike traditional photo or video manipulation, modern AI tools can create hyper-realistic content at unprecedented speed and quality. When shared through social media platforms, which significantly amplify misinformation, the potential for harm increases dramatically.

A prominent example emerged in December 2024 when Diabetes Victoria issued an urgent warning about deepfake videos showing experts from The Baker Heart and Diabetes Institute in Melbourne falsely promoting diabetes supplements. Neither organization endorsed these products, and the physician portrayed in the video had to personally alert his patients about the scam.

This incident follows similar cases, including one in April 2024 where scammers deployed deepfake images of popular science communicator Dr. Karl Kruszelnicki to sell questionable health products to Australians via Facebook. Despite user reports, the platform initially claimed the ads didn’t violate its standards.

Social media marketplace TikTok Shop also faced scrutiny in 2023 when sellers manipulated legitimate doctor videos to falsely endorse products, garnering over 10 million views.

“The sophistication of these deepfakes makes them increasingly difficult for the average consumer to identify,” notes cybersecurity expert Michael Thompson. “What makes them particularly dangerous is how they exploit the trust people place in medical professionals.”

Australia’s eSafety Commissioner recommends several strategies for identifying potential deepfakes. Consumers should first consider context, asking whether the content aligns with what they would expect from that person in that setting. Careful examination may also reveal technical flaws such as blurring, skin inconsistencies, video glitches, poorly synchronized audio, unnatural movements, or content gaps.

For those who find their own images or voices manipulated, the eSafety Commissioner provides direct assistance for content removal. The British Medical Journal also advises contacting the purported endorser to verify legitimacy, leaving public comments questioning dubious claims, using platform reporting tools, and encouraging critical thinking among others.

“As with all health information, consulting qualified healthcare professionals remains essential,” emphasizes Dr. Helen Wong, medical ethicist at the University of Sydney. “No matter how convincing an online endorsement might seem, decisions about medications or treatments should always involve your doctor, pharmacist, or other healthcare provider.”

The Australian government acknowledges the growing threat. In February 2025, the long-awaited Online Safety Review recommended adopting duty of care legislation to address “harms to mental and physical wellbeing” and the “instruction or promotion of harmful practices.” Given the potentially devastating consequences of following deepfake health advice, such legislation appears increasingly necessary to protect Australians and support informed healthcare decisions.

As AI technologies continue to advance, the line between authentic and manipulated content will likely blur further, making digital literacy and healthy skepticism essential skills for navigating the online health information landscape.



© 2025 Disinformation Commission LLC. All rights reserved.