Nobody’s perfect, not even AI. Healthcare professionals are increasingly using artificial intelligence (AI) in their workflows, and while the technology can boost efficiency, it can also raise the risk of errors.

In July, CNN reported that the FDA’s AI tool, Elsa, built to accelerate drug and medical device approvals, generated fake research studies in its citations. In early August, The Verge reported that Google’s healthcare AI model, Med-Gemini, referred to a nonexistent body part in a 2024 research paper. And a study from Mount Sinai Health System in New York, released Aug. 2, found AI chatbots are “highly vulnerable” to attacks promoting false medical information.

Given the rising use of AI in healthcare, we asked executives from across the industry how leaders should respond when errors occur and what safeguards could be put in place to prevent harm. Here’s what they had to say.—MA