
Doctors using AI for clinical decision-making viewed negatively by peers, study finds

But clinicians still view AI positively overall.


If you’re a clinician, your peers may be judging you for using generative AI.

A study published in August in the medical journal npj Digital Medicine found clinicians negatively viewed their peers who use generative AI in clinical decision-making. The study, conducted by Johns Hopkins researchers, involved 276 practicing clinicians at an unnamed health system.

The clinicians viewed those who use AI to help them make patient care decisions as having a “lack of clinical skill and overall competence, resulting in a diminished perceived quality of patient care,” according to a press release from Johns Hopkins.

Tinglong Dai, Bernard T. Ferrari Professor of Business at Johns Hopkins and co-corresponding author of the study, said in a statement that the stigma around AI “may be an obstacle to better care.”

Still, while study participants viewed peers who use the technology for clinical decision-making negatively, the majority said they believe AI is a beneficial tool, particularly when it’s tailored to a specific health system’s needs.

As of the end of 2024, roughly 85% of healthcare leaders surveyed by consulting firm McKinsey said they were either already using generative AI or were exploring the technology’s use.

Additionally, clinicians who framed their use of the technology as a verification tool rather than one for primary decision-making were viewed more positively, the study found. But those who didn’t use generative AI at all were viewed most positively.

An outside view. Quinn Waeiss, a postdoctoral fellow at Stanford’s Center for Biomedical Ethics and a research associate with the university’s McCoy Family Center for Ethics and Society, told Healthcare Brew the study’s findings align with their research concerning the “uncritical use” of AI.

“I’ve heard both from patients and providers, this concern that providers go to medical school—we have standards for what their host of years and years of training looks like in their capacity of providing care to patients. We expect that they’re drawing on that expertise,” they said.

However, Waeiss added that it shouldn’t be up to clinicians alone to decide how to ethically use AI in patient care.

“Without incorporating the healthcare system perspective, we place a lot of that responsibility on individual providers, and I have concerns that we’re going to end up placing more and more expectations on [providers] as we shift the responsibilities,” they said.
