
ChatGPT can help physicians choose the right imaging test to order

ChatGPT 4 selected the appropriate imaging test for breast pain 98.4% of the time, a study found.

Navigate the healthcare industry

Healthcare Brew covers pharmaceutical developments, health startups, the latest tech, and how it impacts hospitals and providers to keep administrators and providers informed.

Artificial intelligence (AI) tools such as ChatGPT may not be able to replace physicians, but they may be able to help them make decisions about breast cancer screening and imaging for breast pain, according to a new study from researchers at Harvard and Mass General Brigham.

The study, which is one of the first to show that the large language model ChatGPT can support the clinical decision-making process, analyzed the AI tool’s ability to choose the appropriate imaging test for patients with breast pain. Both ChatGPT 3.5 and the more advanced ChatGPT 4 selected the appropriate imaging test the majority of the time, offering the potential to optimize workflows and reduce administrative time, the study found.

“I see [ChatGPT] acting like a bridge between the referring healthcare professional and the expert radiologist—stepping in as a trained consultant to recommend the right imaging test at the point of care, without delay,” study coauthor Marc Succi said in a statement.

The researchers asked both ChatGPT 3.5 and 4 to determine what kind of imaging a patient with breast pain would need—a mammogram, ultrasound, MRI, or other type of imaging test—based on the Appropriateness Criteria from the American College of Radiology (ACR).

These criteria help radiologists choose the best course of action based on a patient’s age and symptoms. For example, a mammogram for a patient in their 30s with clinically significant breast pain would be “usually appropriate” for initial imaging, according to the guidelines.

In 21 fictitious patient scenarios, ChatGPT 3.5 chose the appropriate imaging test an average of 88.9% of the time, while ChatGPT 4 answered 98.4% of the scenarios correctly, on average, the study found.

“This is purely an additive study, so we are not arguing that the AI is better than your doctor at choosing an imaging test but can be an excellent adjunct to optimize a doctor’s time on non-interpretive tasks,” Succi said.

While radiologists are familiar with ACR’s appropriateness guidelines, primary care physicians may not be and can have trouble identifying the right test for a patient. That lack of familiarity can “cause confusion on the patient’s side and can lead to patients getting tests they don’t need or getting the wrong tests,” the researchers found.

Instead, the AI technology could be integrated into the electronic health record (EHR). When a physician enters the patient’s symptoms into the EHR, the AI tool could suggest an appropriate imaging option for the physician to order.

But more research is needed before physicians can use AI more widely in clinical decision-making, the researchers noted. The World Health Organization raised concerns last month about potential biases and patient privacy risks when using AI in clinical settings. Earlier this month, the American Medical Association adopted a proposal to help protect patients from false or misleading medical information from AI tools.

