AI chatbots miss more than half of medical diagnoses, study finds


Although chatbots and large language models can answer a wide range of everyday questions, they should not be the first place you turn for medical advice, according to a new study published in the scientific journal Nature Medicine.

In the study, 1,298 UK participants were asked to use a large language model, such as ChatGPT or Meta's Llama 3, for medical advice. When used this way, the LLMs correctly identified medical conditions in fewer than 34.5% of cases.

How did the LLMs perform in the study?


The study acknowledged that LLMs now score on medical knowledge benchmarks at levels comparable to passing the US Medical Licensing Examination, and that clinical documentation written by LLMs is “rated as equivalent or better than those written by doctors.”

However, a problem emerged when study participants tried to get those same results by questioning an LLM themselves: they were often unsuccessful. The study found that this is largely because users do not provide enough information. In 16 of the 30 interactions sampled, the initial messages contained only partial information.

“In two cases, the LLMs initially provided correct responses but added new, incorrect responses after users added additional details,” the study said, suggesting that talking more with chatbots did not improve the likelihood of receiving a correct medical diagnosis.

After the initial diagnosis, the LLMs gave the person the correct follow-up steps only 44.2% of the time.


Meta’s Llama 3 was one of the large language models used in the study.

Suba Images/Getty Images

How often do people use chatbots to get medical advice?

According to a survey conducted by OpenAI, the maker of ChatGPT, 3 in 5 American adults report using AI for health. “They use AI to get information when they first feel unwell, they consult it to prepare for their visits with their doctors, and they use it to better understand their doctors’ instructions and recommendations,” OpenAI said.

Read more: ChatGPT for self-diagnosis: AI is changing the way we answer our health questions

Although there is a small disclaimer on the ChatGPT website that states: “ChatGPT can make errors. Check important information,” many people take the chatbot’s word for it.

The study serves as a reminder that ChatGPT and similar chatbots should not be relied upon for medical guidance, especially in serious situations.
