
“Two-way communication breakdown”: Study reveals AI chatbots shouldn’t be relied on for health advice

Who gives better health advice: an AI chatbot or a doctor?

The internet has become a go-to source for just about everything, including healthcare. It’s hard to find someone who hasn’t Googled their symptoms at some point to figure out what might be wrong. But the results you get from search engines are often confusing at best and flat-out terrifying at worst. In fact, a quick search can easily convince you that a minor headache means you’ve only got two weeks to live.

As an alternative to regular search engines, many people are now turning to AI-powered chatbots like ChatGPT for self-diagnosis, as these tools are designed to provide clear, data-driven answers based on massive amounts of information. However, putting too much trust in their answers can be just as risky as relying on Google. A recent study led by the Oxford Internet Institute has revealed that AI chatbots struggle to provide people with useful health advice.

The AI isn’t the only one at fault

For the study, the researchers recruited around 1,300 people in the U.K. and presented them with medical scenarios. Participants were asked to identify potential health conditions and determine a course of action, such as whether to visit a doctor or go to the hospital, using both chatbots and their usual methods. The AI models used in the study included GPT-4o, Cohere's Command R+, and Meta's Llama 3.

“The study revealed a two-way communication breakdown. The responses they received frequently combined good and poor recommendations.”

Adam Mahdi, co-author of the study, via TechCrunch

The results revealed flaws on both sides. According to the authors, the chatbots not only made participants less likely to identify the correct health condition, but also led them to underestimate the severity of the conditions they did recognize. For their part, participants often failed to provide key details in their queries, or received responses that were unclear and difficult to interpret.

Is AI ready for healthcare?

These findings make it reasonable to question whether AI is truly ready for higher-risk health applications and to assist with clinical decisions. Major AI companies warn against making diagnoses based solely on their chatbots’ outputs and stress the importance of relying on trusted sources for healthcare decisions, and we agree. Still, there’s another side to the story: a recent Reddit post went viral in which a user claimed that ChatGPT helped resolve a chronic medical issue in under a minute.

The user had been dealing with persistent jaw clicking for five years that remained unresolved despite visits to an ENT specialist, two MRIs, and a referral to a maxillofacial expert. Frustrated, they decided to try ChatGPT, which suggested that the issue could be a slightly displaced but movable disc in the jaw and recommended a mouth-opening technique. “After following the instructions for maybe a minute max, suddenly… no click,” the user said.

About the Author

Hassam boasts over seven years of professional experience as a dedicated PC hardware reviewer and writer.