Researcher’s Fake Eye Disease Exposes Dangerous AI Medical Misinformation Spread

A sophisticated experiment by a University of Gothenburg researcher has revealed alarming vulnerabilities in artificial intelligence systems that could have serious implications for public health information.

Almira Osmanovic Thunström deliberately created a fictitious medical condition called “bixonimania” to test how easily AI platforms might spread medical misinformation. The results proved concerning, as multiple major AI systems quickly began presenting the fabricated disease as medical fact to users.

The experiment began on March 15, 2024, when Osmanovic Thunström published two blog posts on Medium describing a non-existent condition characterized by sore, tired, and pink eyes supposedly resulting from excessive blue light exposure. She followed this with two papers published on the academic network SciProfiles in April and May 2024 under the pseudonym Lazljiv Izgubljenovic, accompanied by an AI-generated photo.

Although Osmanovic Thunström deliberately included numerous red flags that should have alerted AI systems to the deception, major platforms failed to identify the fabrication. She had carefully chosen the name “bixonimania” to signal the hoax to medical professionals, as she later explained to Nature magazine, which first reported the story.

“I wanted to be really clear to any physician or any medical staff that this is a made-up condition, because no eye condition would be called mania – that’s a psychiatric term,” she told Nature.

The researcher went even further, referencing fictional institutions like “Asteria Horizon University,” “The Starfleet Academy,” and the “University of Fellowship of the Ring” as academic affiliations. She even directly stated within the papers that “this entire paper is made up” and referenced “fifty made-up individuals” in her methodology.

Nevertheless, by mid-April 2024, Microsoft’s Bing Copilot was describing bixonimania as a “rare condition,” while Google’s Gemini advised users with itchy eyes to consult ophthalmologists about potential bixonimania. Perplexity AI claimed the fictional disease affected one in 90,000 individuals, and OpenAI’s ChatGPT began offering diagnoses to users reporting eye symptoms.

The spread of misinformation reached a critical point when the fake condition slipped through the peer-review process, appearing in a scientific journal called Cureus, published by the respected academic publisher Springer Nature. The journal published research citing the fabricated papers as legitimate sources, a significant failure of scientific gatekeeping. After being alerted by Nature magazine, Cureus retracted the paper on March 30, 2026.

Alex Ruani, a doctoral researcher specializing in health misinformation at University College London, expressed grave concerns about the implications. “If the scientific process itself and the systems that support that process are skewed, and they aren’t capturing and filtering out junk like this, we’re doomed,” Ruani stated. “This is a masterclass on how mis- and disinformation operates.”

When contacted by Nature and Breitbart, the AI companies involved either did not respond or claimed their technology had significantly improved since the experiment took place. However, the incident raises profound questions about AI’s role in medical information dissemination and the potential dangers of algorithmic amplification of false health claims.

The experiment highlights growing concerns in the medical and tech communities about AI’s ability to evaluate the credibility of health information sources. As AI systems increasingly serve as information gatekeepers for millions of users, their inability to detect even obvious scientific fabrications poses serious risks to public health and scientific integrity.

For the general public, the lesson is clear: treat AI-generated health information, even from leading platforms, with caution, and verify it through established medical sources before making any health decisions based on such advice.


12 Comments

  1. Emma T. Smith

    The fact that multiple major AI platforms failed to identify this fabricated disease is deeply concerning. Clearly more work is needed to improve the ability of these systems to detect misinformation and ensure they don’t inadvertently promote false medical claims.

  2. This is a sobering example of how AI can be exploited to spread dangerous falsehoods. Developers must prioritize accuracy, transparency, and trustworthiness to ensure these systems don’t do more harm than good.

    • I agree completely. AI systems need to be held to the highest standards when it comes to medical information. Oversight and accountability are key to preventing public harm.

  3. I’m curious to know more about the specific red flags the researcher included that the AI systems failed to catch. What can be done to improve the ability of these systems to identify fabricated medical claims?

  4. Elijah Taylor

    While it’s positive that the researcher exposed these flaws, the implications are troubling. AI-powered platforms must be held to rigorous standards when it comes to verifying medical information to protect public health.

  5. Liam X. Jackson

    Kudos to the researcher for shedding light on this critical issue. Regulators and platform owners need to take this seriously and develop robust safeguards to prevent the amplification of medical misinformation by AI systems.

  6. Robert Thompson

    This is a concerning case that underscores the urgent need for stronger safeguards and oversight of AI systems, especially when it comes to sensitive medical information. Developers must prioritize accuracy, transparency, and public safety above all else.

  7. Patricia A. Brown

    This experiment highlights serious flaws in AI systems that need to be addressed. Spreading misinformation, even unintentionally, can have dangerous public health implications. Developers must prioritize accuracy and integrity over speed.

  8. Jennifer Z. Smith

    While it’s good the researcher exposed these vulnerabilities, it’s concerning that real-world AI platforms were unable to detect such an obvious hoax. This underscores the importance of rigorous testing and oversight of AI medical information.

  9. James Miller

    Kudos to the researcher for shedding light on this critical issue. It’s alarming that major AI systems were unable to identify the fabricated ‘bixonimania’ condition. Robust validation measures are clearly needed to prevent the amplification of medical misinformation.

  10. Lucas Rodriguez

    This is a worrying example of the real-world consequences that can arise from vulnerabilities in AI systems. Developers must take proactive steps to shore up these weaknesses and prevent the spread of dangerous medical misinformation.

  11. Michael Thomas

    Wow, this is a concerning case of AI medical misinformation. It’s alarming that major platforms failed to detect the fabricated ‘bixonimania’ condition. Rigorous validation is clearly needed to prevent the spread of false health claims.
