Google’s AI chatbot passed the U.S. medical licensing exam.

Med-PaLM, the specialized artificial intelligence (AI) chatbot for the medical field developed by Google, has passed the U.S. medical licensing exam. However, its responses are still considered not on par with those of human doctors. This conclusion comes from a peer-reviewed study published in the journal Nature on July 12.

Google first revealed information about the new chatbot in a research paper published in December 2022, but it has not yet been widely deployed. The chatbot, named Med-PaLM, is built on PaLM, Google’s latest and most advanced Large Language Model (LLM).

With its medical specialization, Med-PaLM is believed to provide higher-quality medical responses than general-purpose chatbots. Some experts believe Med-PaLM could be highly useful in countries with “limited access to healthcare professionals and doctors.”

According to the research published in Nature, Med-PaLM achieved a score of 67.6% on the U.S. Medical Licensing Examination (USMLE) multiple-choice test, surpassing the minimum passing score of 60%. The study highlights that Med-PaLM’s medical expertise is commendable but still not equivalent to that of human doctors.

Google asserts that Med-PaLM is the first Large Language Model (LLM) tool to pass the USMLE. A study published in May reported that Med-PaLM 2 scored 86.5% on the USMLE, higher than the original version of the chatbot, although that research has not yet been independently verified by other experts. Apart from Med-PaLM, OpenAI’s chatbot ChatGPT is also believed to have come close to passing the exam.

James Davenport, a computer science expert at the University of Bath in the United Kingdom, emphasizes that there is a significant difference between answering medical questions and handling real-world situations, including making diagnoses and treatment decisions.

Meanwhile, Anthony Cohn, an AI expert at the University of Leeds, believes there is still a considerable risk of chatbots providing inaccurate information because of the statistical nature of LLM-based systems. He therefore suggests that these chatbots should be used as assistants rather than as doctors with the authority to make treatment decisions for patients.
