Health systems are adopting AI faster than they’re adopting safeguards

While 88% of health systems are using AI, only 18% have mature governance.


Illustration: Brittany Holloway-Brown


Artificial intelligence (AI) is now commonplace in healthcare. Robust rules for the tech? Not so much, according to a new report.

About 88% of 233 health system executives said their system uses AI in some form, but only 18% have a mature governance structure and full AI strategy, according to an Aug. 12 report from the professional membership organization Healthcare Financial Management Association (HFMA) and Eliciting Insights, a healthcare strategy and market research company.

“I think a lot of organizations need to step back and say, ‘Oh, wait, we have 10 departments doing work with AI,’” Richard Gundling, HFMA’s SVP of professional practice, told Healthcare Brew. “We need an overall governance structure now, so each department is using it the same way, the same oversights are done.”

AI’s use grows. Among the 115 surveyed health system executives who answered questions about use cases, the most common application of AI is ambient listening and clinical notetaking, cited by 42%. Coding is second at 28%, followed by clinical documentation improvement, denial prediction, and chatbots for patient inquiries.

The good news: Though most surveyed healthcare orgs don’t have a comprehensive strategy, the report shows nearly three-quarters of CFO respondents said they are at least in the early stages of developing an AI governance structure—a set of rules and regulations around how departments should use AI.

But how should they go about creating one?

Good governance. Without guardrails in place, AI could hurt rather than help the organization. There have been instances of AI fueling racial bias or hallucinating, for instance. A June study in the journal npj Digital Medicine suggests large language models may recommend “inferior” psychiatric treatments when told a patient’s race.

One important step in creating these guardrails is to make sure all departments are represented in crafting AI strategy, including financial executives, clinical leaders, and human resources professionals, Gundling said.

The most common duty of governance groups is to determine a data policy for AI, according to the report. The groups are also often used to identify and vet AI vendors.

There’s a growing number of resources to help groups create a governance structure.

The American Medical Association released an AI governance toolkit on April 29 with an eight-step module covering tasks such as designating an executive accountable for AI’s outcomes and establishing a vendor evaluation policy.

Other groups, like the public-private Coalition for Health AI—a collaboration between providers, tech companies, and other health organizations—have outlined a robust AI implementation and assessment process to ensure AI is developed safely and companies get their money’s worth.

“Even if you feel like you’re behind the curve, you can still catch up,” Gundling said.
