Artificial intelligence (AI) has become increasingly integrated into healthcare settings, but the shortcomings of the technology mean the potential for errors is more consequential than ever.
One study suggests that the AI healthcare market hit $6.6 billion in 2021. Soon, the University of California, San Francisco (UCSF) Division of Clinical Informatics and Digital Transformation and UCSF Health’s new, real-time and continuous AI monitoring tool—dubbed the Impact Monitoring Platform for AI in Clinical Care (IMPACC)—could help clinicians understand the efficacy, safety, and equity of this new technology in relation to their patients.
IMPACC is built to report whether AI tools are performing as intended and to flag any AI-powered tech that could be unsafe or widen health disparities. With the report results, healthcare leaders can then decide whether to keep using a given tool or phase it out altogether, Julia Adler-Milstein, professor of medicine and chief of the UCSF Division of Clinical Informatics and Digital Transformation, told Healthcare Brew.
“It’s just something that you would do with any new technology, right? Which is make sure that it’s doing what you want it to do and expect that it would do,” Adler-Milstein said. “With AI, I think there’s just such a critical need to do that in a more granular, real-time way, because we know that the models themselves, their performance can change over time. We also know that our frontline clinicians are working with AI for the first time.”
—CM