
Healthcare execs talk AI errors and how to address them

Everybody makes mistakes; everybody has those days.

Illustration: a healthcare cross made of glowing binary code with a warning sign over the corner (Brittany Holloway-Brown)


Nobody’s perfect, not even AI.

Healthcare professionals are increasingly using artificial intelligence (AI) in their workflows, and while the technology can boost efficiency, it can also raise the risk of errors.

In July, CNN reported that the FDA’s AI tool, Elsa, which was built to accelerate drug and medical device approvals, generated fake research studies in its citations. And in early August, The Verge reported that Google’s healthcare AI model, Med-Gemini, mentioned a nonexistent body part in a 2024 research paper.

Plus, a study from New York’s Mount Sinai Health System released on Aug. 2 found AI chatbots are “highly vulnerable” to attacks promoting false medical information.

Given the rising trend of AI use in healthcare, we asked executives from across the industry how leaders should respond when errors occur and what safeguards could be put in place to prevent harm.

Hospitals

Matthew DeCamp, an internal medicine physician and health services researcher at the University of Colorado’s Center for Bioethics and Humanities, told us there’s still “a lot of uncertainty” about who holds responsibility when AI errors occur in healthcare, since the question hasn’t been thoroughly tested in a legal setting.

However, the industry can look to existing protocols for sharing responsibility between different stakeholders, such as the AI developer and the end user, he said.

“Even before AI was involved, we found ways to divvy responsibility between, say, the maker of a CT scanner, the health system who purchased it, the radiologists who use it, and the primary care physicians like myself who might read the final report,” DeCamp said. “I think we can be reassured by this, and perhaps take a similar approach.”

Pharma

Thomas Fuchs, SVP and chief AI officer at pharmaceutical company Eli Lilly, told us the priority for healthcare organizations using AI “should always be patient safety and maintaining trust.”


He echoed DeCamp’s sentiment that it’ll take a group effort to mitigate AI errors, adding that the technology developers should “design systems with rigorous validation, transparency, and continuous monitoring.”

Meanwhile, the organizations that use the technology should set up processes to “detect, assess, and respond” to errors quickly, Fuchs said, and adhere to guidelines like the National Institute of Standards and Technology’s AI Risk Management Framework, which the agency designed to help organizations manage the risks of using AI.

Health tech

Ethan Berke, chief medical officer and SVP of integrated care at virtual care company Teladoc Health, said that in the same way healthcare organizations have systems in place to address other kinds of medical errors, they should create systems to “track, classify, and investigate” any safety threats that may arise from using AI. Then they can use that data to prevent potential future errors.

Teladoc has more than 60 proprietary AI models, Berke said, and the company has created a “rigorous” process to test and evaluate them for “accuracy, bias, and safety” before they’re deployed.

“We all have a responsibility to ensure that AI solutions are reliable, ethical, and secure,” he said. “By investing in patient safety and quality, we’ve been able to build a program that can analyze potential safety events more quickly, address root causes, and catch errors before they even reach the patient.”
