
 The “Bixonimania” Experiment: How AI Chatbots Were Tricked by a Fake Medical Condition



Artificial intelligence has transformed how we access information. But alongside its impressive capabilities lies a well-known issue: AI hallucinations, where models generate confident yet incorrect answers.

A recent experiment has taken this concern to a new level, showing how easily AI systems can be misled—and how quickly misinformation can spread.

A Researcher’s Unusual Experiment

Almira Osmanovic Thunstrom, a researcher at the University of Gothenburg, set out to test a simple but powerful question:

What happens if AI models are deliberately fed false information?

To find out, she invented a completely fake eye condition called “Bixonimania.”

She then published two research papers about this condition on a preprint server, a platform where studies are shared before formal peer review.

What Happened Next Was Surprising

Within weeks, major AI chatbots began treating “Bixonimania” as a real medical condition.

Several well-known platforms responded as follows:

  • Microsoft Copilot described it as an “intriguing and relatively rare condition.”
  • Gemini linked it to excessive blue light exposure.
  • Perplexity even claimed that it affected 1 in 90,000 people.
  • ChatGPT listed symptoms and explained it like a genuine disease.

In short, multiple AI systems confidently repeated something that never existed.

The Intent Behind the Experiment

Thunstrom explained that her goal was to test whether large language models (LLMs) would:

  • Accept false data
  • Treat it as credible
  • Reproduce it as factual information

Interestingly, she intentionally chose the name “Bixonimania” because it sounded unrealistic.

She even pointed out that the word “mania” is typically associated with psychiatric conditions, not eye diseases, which makes the name itself a clear red flag for medical professionals.

Obvious Clues—Still Ignored

To make it clear that the research was fake, Thunstrom planted several obvious hints:

  • A fictional author: Lazljiv Izgubljenovic
  • A non-existent institution: Asteria Horizon University
  • A fake location: Nova City, California
  • Statements like:
    • “This entire paper is made up”
    • “Fifty made-up individuals were recruited”

Despite these clues, AI models still picked up the information and presented it as real.

A More Concerning Twist

The experiment didn’t just fool AI; it also affected human researchers.

A study from Maharishi Markandeshwar Institute of Medical Sciences and Research cited the fake preprints in a paper published in the journal Cureus.

This suggests that some researchers may be relying on AI-generated references without proper verification.

Why Did This Happen?

This incident highlights a core limitation of AI systems:

  • They don’t “understand” truth the way humans do
  • They rely on patterns and available data
  • If misinformation enters the system, it can be repeated confidently (see the sketch below)

Even more importantly, AI models often struggle to:

  • Distinguish verified research from unverified sources
  • Identify satire, fake content, or intentional misinformation
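
To make that first point concrete, here is a deliberately simplified, self-contained sketch. It is not how any of the chatbots above actually work, and the two corpus entries are invented for illustration. It shows a retrieval step that ranks documents purely by word overlap with a question: nothing in the pipeline asks whether a source is verified, so a fabricated preprint is surfaced with the same confidence as a genuine one.

```python
from collections import Counter

# Toy corpus for illustration: one plausible real abstract and one fabricated
# preprint. In a real system these would be crawled or indexed documents.
CORPUS = {
    "real_review": "Dry eye disease is a common condition caused by reduced tear production.",
    "fake_preprint": "Bixonimania is a rare eye condition linked to excessive blue light exposure.",
}

def overlap_score(question: str, document: str) -> int:
    """Count how many question words also appear in the document."""
    question_words = Counter(question.lower().split())
    document_words = set(document.lower().split())
    return sum(n for word, n in question_words.items() if word in document_words)

def answer(question: str) -> str:
    # Pick the best-matching document. Nothing here checks provenance, peer
    # review, or whether the text literally says "this entire paper is made up".
    best_id, best_doc = max(CORPUS.items(), key=lambda item: overlap_score(question, item[1]))
    return f"Based on {best_id}: {best_doc}"

if __name__ == "__main__":
    # The fabricated preprint wins on word overlap and gets repeated verbatim.
    print(answer("What is bixonimania and what causes it?"))
```

Real chatbots use far more sophisticated retrieval and ranking, but the underlying gap is the same: relevance is measured statistically, and “is this true?” never appears in the loop.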

After the Experiment

Once Thunstrom publicly revealed her findings, AI platforms began adjusting their responses.

When questioned about the issue, a spokesperson from Google stated that the results reflected the behavior of an older model version, implying improvements have since been made.

What This Means for Users

The “Bixonimania” experiment serves as a powerful reminder:

AI is a tool—not a source of absolute truth.

For users, especially in sensitive areas like health, it’s important to:

  • Cross-check information with trusted sources (see the sketch below)
  • Avoid relying solely on AI for medical advice
  • Be cautious of overly confident answers
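
As one rough example of cross-checking, the sketch below queries PubMed through NCBI’s public E-utilities “esearch” endpoint and reports how many indexed records mention a term. The endpoint and JSON shape are documented by NCBI; the helper name and the sample terms are illustrative, and error handling is kept minimal.

```python
import json
import urllib.parse
import urllib.request

# NCBI E-utilities search endpoint (public, documented by NCBI).
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hit_count(term: str) -> int:
    """Return how many PubMed records match the given search term."""
    params = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
    with urllib.request.urlopen(f"{EUTILS_ESEARCH}?{params}", timeout=10) as response:
        data = json.load(response)
    # esearch returns the match count as a string inside "esearchresult".
    return int(data["esearchresult"]["count"])

if __name__ == "__main__":
    for term in ("bixonimania", "dry eye disease"):
        print(f"{term}: {pubmed_hit_count(term)} PubMed records")
```

A zero count does not prove a term is fake, and a nonzero count does not prove it is legitimate, but checking whether any peer-reviewed literature exists at all is a reasonable first filter before trusting a confident chatbot answer, and for anything health-related a qualified professional should still have the final word.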

Final Thoughts

The rise of AI has made information more accessible than ever—but also more complex to navigate.

This experiment shows that even advanced systems like ChatGPT, Gemini, and others can be misled under certain conditions.

The real takeaway isn’t that AI is unreliable—it’s that critical thinking is more important than ever.

As AI continues to evolve, both developers and users must work together to ensure that accuracy, verification, and responsibility remain at the core of this technology.
