
What’s the state of healthcare AI regulation?

States have taken the lead in establishing guidelines for using health AI.


AI is rapidly becoming a standard part of the healthcare industry, from ambient scribes that help healthcare workers chart patient data to algorithms that can predict disease.

Yet the regulatory environment surrounding health AI is still the Wild West.

“The vast majority of medical AI is never reviewed by a federal regulator—and probably no state regulator,” I. Glenn Cohen, a law professor and faculty director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard University, told the Harvard Gazette in January.

Regulating health AI is particularly challenging because the technology evolves far faster than the pace at which the healthcare industry typically develops regulations, experts told Healthcare Brew. Still, 83% of 277 polled healthcare workers say AI needs more regulation, according to a 2025 Morning Brew Inc. survey.

States take the lead

Health AI regulation has primarily come from states, as the federal government has largely taken an antiregulatory approach.

Congress hasn’t passed any legislation that regulates health AI directly, according to consulting firm Manatt Health. However, the Trump administration has released several executive orders involving AI.

In 2025, 47 states introduced more than 250 bills including health AI regulation, according to policy and advisory firm Manatt Health’s AI policy tracker. Of those, 33 became law in 21 states. For example, California passed a law that took effect Jan. 1, 2026, which requires chatbots to make it clear if they are powered by AI and bans chatbots that don’t have a protocol to prevent content around suicide or suicidal ideation, per Manatt.

Ohio introduced a bill in November that would prohibit AI from making diagnoses or therapeutic decisions and prevent it from being used to determine a patient’s mental or emotional state.

Despite the federal government’s antiregulatory approach, so far in 2026 Manatt has tracked roughly 200 state AI bills, Jared Augenstein, a senior managing director at Manatt, told Healthcare Brew.

There are four main themes to state-level legislation being proposed in 2026, according to Augenstein: mental health chatbots, patient disclosure and consent, preventing AI tools from presenting as clinical providers, and payer use of AI.

The agencies’ involvement

Though they don’t carry the full weight of a law, federal healthcare agencies have issued a number of recommendations for healthcare organizations to follow when it comes to AI.

The FDA has taken steps including requesting public comment on how AI medical devices should be evaluated, as well as forming a Digital Health Advisory Committee, which met in November to discuss AI mental health chatbots (FDA commissioner Marty Makary recently told Healthcare Brew the organization is eagerly exploring AI’s use).

However, there are “still big questions about how the FDA broadly is going to approach regulating AI-enabled software medical devices,” Augenstein said. Approving medical devices is one of the FDA’s primary functions, but the agency’s “traditional paradigm of medical device regulation was not designed for adaptive artificial intelligence and machine learning technologies,” according to the agency’s website.

Navigate the healthcare industry

Healthcare Brew covers pharmaceutical developments, health startups, the latest tech, and how it impacts hospitals and providers to keep administrators and providers informed.

By subscribing, you accept our Terms & Privacy Policy.

The Centers for Medicare and Medicaid Services also created an AI playbook, which details the agency’s approach to integrating AI and recommendations for overseeing the technology.

And in September 2025, the accreditation organization the Joint Commission, along with the Coalition for Health AI (CHAI), a group of healthcare stakeholders working to develop AI guidelines, issued recommendations that the groups said will “serve as internal governance to help US health systems safely and effectively implement [AI] at scale.”

However, healthcare leaders told Fierce Healthcare that CHAI’s guidelines have only added confusion for the industry.

A health tech’s viewpoint

Despite fears among some health tech leaders that regulation could slow innovation, Tim Hwang, general counsel at AI scribe company Abridge, told Healthcare Brew he isn’t antiregulation.

“I really don’t see it necessarily as more regulation/less progress, less regulation/more progress,” Hwang said. “I do really think we happen to be in a moment where the clarity that regulation can bring is really going to accelerate a lot of the exciting trends that we see in health.”

However, because AI is advancing so rapidly, there are some regulations in place that don’t align with the reality of what’s happening on the ground, Hwang said.

For example, when the Department of Health and Human Services (HHS) introduced information blocking rules in 2021, there was a lot of confusion around whether those rules applied to automated systems and AI agents, according to Hwang. HHS didn’t clarify until December 2025, when a proposed rule said automated systems and AI agents are, in fact, included.

Another concern, Hwang added, is that if the federal government doesn’t create a unified national technical standard for interoperability, there will be a “patchwork of state-level regulations that will create lots of contradictory standards,” similar to patient health data, which lacks unifying federal legislation and leaves companies to navigate various state regulations.

“How do we move fast enough to build a nationwide standard around some of this tech, versus allowing for a lot of fragmentation?” he asked.

A health system’s viewpoint

Tommy Ibrahim, EVP and chief transformation officer at Sioux Falls, South Dakota-based Sanford Health, told Healthcare Brew the health system is taking a “middle-of-the-road” approach to AI regulation.

There is a need to balance testing and validation with ensuring AI tools can be adopted in a timely manner, Ibrahim said. It’s hard to take a tailored approach to regulation, he added, “because there’s so many different technologies and AI tools that are emerging that have different applicability.”

Having uniformity in regulation is also important for health systems like Sanford that operate across multiple states, Ibrahim added. If states each take a different approach to regulation, “it would make it exceedingly difficult for us to actually be able to appropriately invent these tools and deliver on the advantages that [AI] can offer our patients,” he said.

About the author

Maia Anderson

Maia Anderson is a senior reporter at Healthcare Brew, where she focuses on pharma developments like GLP-1s and psychedelic medicine, pharmacies, and women's health.
