The number of FDA-cleared artificial intelligence-enabled medical devices has skyrocketed in recent years, nearly doubling from fewer than 700 in 2023 to 1,247 as of Aug. 28, according to the federal agency’s website.
And healthcare systems are getting on board. In a recent survey, about 88% of 233 health system executives said their system uses AI.
Though only a small percentage have been recalled, 60 AI-enabled devices racked up 182 recorded recall events as of November 2024, according to an Aug. 22 JAMA research letter that examined FDA data.
Throughout these recalls, one consistent thread emerged: Devices from publicly traded companies (about 92%) were recalled more frequently than devices from private companies (53%).
“That was really something we were surprised by because we were thinking public companies probably have better risk management,” Tinglong Dai, Bernard T. Ferrari Professor of Business at Johns Hopkins University and one of the paper’s authors, told Healthcare Brew.
Digging deeper. The paper theorizes that public companies may feel pressure from investors to bring devices to market as quickly as possible, leaving less time for more thorough research.
“Investors, they want to see year after year you have new AI devices cleared by the FDA, you have AI devices being rolled out among healthcare providers, so the pressure itself might have contributed to the recalls,” Dai said.
The recalls, a designation that ranges from a quick software update to full removal from the market, stemmed from several main causes: 109 events involved diagnostic or measurement errors, 44 involved functionality delays, 14 involved physical hazards, and 13 involved biochemical hazards.
“If you have a misdiagnosis, obviously the entire healthcare delivery could be subject to risk because wrong diagnosis means that you could also have the wrong treatment,” Dai said.
The real world. The research letter identified another pattern, too: Most of the recalled devices weren’t clinically validated before they were cleared, including 40% of those from private companies, 78% from established public companies, and 97% from smaller public companies.
Most AI devices get the green light through the FDA’s 510(k) pathway, which clears a device if it is shown to be “substantially equivalent” to a previously cleared device. Unlike other premarket review pathways, a 510(k) does not normally require clinical data.
This is something health systems and providers should keep in mind when deciding whether to use AI-enabled medical devices, Dai said.
“From the user’s perspective, I think it’s extremely important to understand…the difference between reported performance and real-world performance,” he said.
Even AI programs with strong premarket clinical data “frequently” perform worse in diverse real-world clinical settings, according to a March 2025 review in the journal Healthcare.
This is one of the reasons the American Medical Association uses the term “augmented intelligence” rather than “artificial intelligence,” according to the organization’s website. AMA guidance says that AI is not supposed to call the shots or operate without oversight by a human healthcare professional.
Regulation grows. A Department of Health and Human Services spokesperson told Healthcare Brew the agency “is not able to confirm” the numbers cited in the JAMA study, but “remains committed to ensuring patients have access to safe and effective medical devices, including those enabled with AI.”
In addition to general device software validation and safety requirements, the FDA released draft guidance in January specifically for AI-enabled devices, covering what data and documentation are needed to demonstrate safety and effectiveness.