In a bold move addressing the growing concern of AI-generated medical misinformation, global healthcare communications agency Havas Lynx has unveiled “CHAT GP,” a reactive out-of-home campaign that cleverly redirects patients from artificial intelligence to general practitioners for trusted health advice.
The initiative, launched across Manchester, strategically hijacks advertisements from a prominent AI company by transforming their question “When you can ask AI anything, do you trust it with your health?” into a straightforward message urging the public to consult their GP for reliable medical information.
This campaign emerges directly from Havas Lynx’s recently published “Doctored Truths” white paper, which investigates the dangerous proliferation of health misinformation in the digital age. The timing appears particularly relevant as healthcare systems worldwide grapple with the rapid integration of AI technologies and their potential impacts on patient care and information access.
Recent data from Point.1 reveals a striking disconnect: while nearly half of the public now turns to AI for medical advice, 44% harbor doubts about its reliability. This tension between technological convenience and healthcare trust is precisely what Havas Lynx aims to address.
“The rise of AI tools presents a new challenge in the fight against health misinformation,” explained Claire Knapp, CEO at Havas Lynx. “Whilst AI offers huge opportunities to modernise our healthcare service, our ‘CHAT GP’ activation is a vital reminder that AI is not a substitute for human medical expertise.”
Knapp emphasized the fundamental problem with using generative AI for medical advice: “These tools are built for engagement, not patient safety, and relying on them for our health advice poses severe risks.”
The campaign arrives at a critical moment for healthcare communications. The World Economic Forum has identified health misinformation as the world’s top short-term risk, with tangible consequences already evident in vaccine hesitancy, treatment refusals, and delayed diagnoses. The pharmaceutical industry faces mounting pressure to combat this threat through clearer, more accessible patient information.
Healthcare marketing experts note that Havas Lynx’s approach represents a growing trend of agencies taking public-facing stands on healthcare issues rather than focusing exclusively on behind-the-scenes pharmaceutical marketing. This shift reflects the industry’s recognition that maintaining public trust in medical expertise has become a central challenge.
Dr. Emma Richardson, a healthcare communication researcher at Manchester University, commented on the initiative: “What’s interesting about this campaign is how it acknowledges the public’s curiosity about AI while firmly redirecting them to qualified medical professionals. It’s not anti-technology, but rather pro-responsible healthcare advice.”
The pharmaceutical industry has particular reason for concern regarding AI-generated health information. With complex medications and treatments frequently misrepresented online, companies face the dual challenge of combating misinformation while establishing themselves as trustworthy voices in an increasingly noisy digital landscape.
Havas Lynx has called for broader industry participation, urging pharmaceutical companies and the wider healthcare ecosystem to leverage their scientific expertise in countering misinformation. The agency suggests that coordinated efforts between healthcare providers, communications professionals, and drug manufacturers will be essential to reestablish a foundation of trust in medical information.
As AI capabilities continue to advance, the “CHAT GP” campaign stands as a reminder that the ultimate responsibility for health decisions remains with qualified healthcare professionals – a message that resonates beyond Manchester to healthcare systems worldwide navigating the complex intersection of technology and patient care.