
State of the health tech industry

From AI to telehealth to data privacy, we dove into how healthcare’s digital landscape is growing and transforming.

What’s Inside

Table of Contents

Chapter 1

AI use across healthcare

Chapter 2

Health systems leveraging AI and telehealth

Chapter 3

Concerns over loss of skills or eroded trust

Chapter 4

Providing better training for AI

Chapter 5

Addressing cybersecurity concerns

Chapter 6

Data privacy and tech regulation

Introduction

At the end of last year, we polled 277 of our readers—a mix of professionals in clinical or patient-facing roles as well as those who work in executive and administrative roles—about their thoughts on all things healthcare tech. Of course we touched on the industry’s favorite, AI use and development, but we also asked about common concerns, like cybersecurity and patient privacy risks, and other developing areas such as telehealth and federal regulations.

The following is what respondents identified as their biggest pain points, areas where the industry still has room to grow, and the most common use cases for various technologies so far.

Chapter 1

AI use across healthcare

AI use is quickly growing across industries, and healthcare is no exception. A recent Morning Brew Inc. survey found 75% of polled healthcare professionals said their organizations have already mostly or fully embraced the technology.

The most common use case for new AI tools, according to the survey, was documentation or note-taking assistance (55%). Companies like Ambience, Abridge, and Suki have popped up in recent years to provide ambient listening devices that can build clinical notes while patients and providers meet.

Survey Results

At your organization, how is AI being used right now?

- Documentation or note-taking assistance: 55%
- Administrative tasks: 45%
- Voice agents or chatbots: 30%
- Patient communication: 24%
- Population health management or predictive analytics: 19%
- Clinical decision support: 16%
- Research or clinical trials: 12%
- Medical imaging or diagnostics: 10%
- Not currently using AI: 5%

While startup tools are designed to be integrated easily into an electronic health record (EHR), the EHR companies themselves also have an opportunity to build their own AI and take advantage of their existing customer bases. EHR industry leaders like Epic and Athenahealth have created scribe tools to implement directly into their systems. There are already some partnerships (like Athena’s with Microsoft) that provide users with multiple solutions within the EHR.

These tools have helped produce a more than 15,700-hour reduction in work that often extends into “pajama time”—when providers find themselves working on documentation after they get home—according to an analysis from the journal NEJM Catalyst.

“Eventually, we’re going to get to the point where people are much more comfortable talking to machines than they are talking to people.”

—Aimee Cardwell, CIO at Transcend

Otherwise, administrative tasks (45%) and employing AI as voice agents and chatbots (30%) took the second and third slots for most common uses. These kinds of administrative tasks, like prior authorization and scheduling inefficiencies, can slow down a patient’s ability to access healthcare.

“Eventually, we’re going to get to the point where people are much more comfortable talking to machines than they are talking to people, and they feel like they can multitask better than trying to call somebody in a call center,” Aimee Cardwell, chief information officer and chief information security officer in residence at data privacy company Transcend, told us.

Patient communication (24%), population health management or predictive analytics (19%), clinical decision support like diagnostics and treatment recommendations (16%), and research or clinical trials (12%) were other reported use cases.

For clinical trials, researchers at institutions like Mayo Clinic are using AI to predict drug efficacy by checking internal predictions against previous clinical trial results and patient data. (A Morgan Stanley analysis found AI tools could save healthcare between $100 billion and $600 billion by 2050 through drug development alone.)

And when it comes to patient communication, there are already AI tools available, including products from startup No Barrier, which can help translate languages in real time.

“Anything that can help give patients more information and help them better utilize their time with medical professionals—and on the flip side, for physicians, anything that can help them be better armed to make the use of their limited time with the patients—is going to be something that’s very valuable,” Kavi Goel, director of product management for Health AI at Google Research, told us.

Chapter 2

Health systems leveraging AI and telehealth

Technologies like telehealth and AI have become ubiquitous in healthcare as the industry adapts to challenges like care deserts and burnout.

Most healthcare workers say their organization utilizes virtual care and AI, according to Morning Brew Inc. research. Only 5% of respondents said their organization wasn’t using AI, and just 11% said their organization wasn’t using telehealth.

Health systems have found a variety of use cases. Most respondents (63% of 117) said their organization used telehealth for messaging or portal communication, while 26% said they used it for remote monitoring. Additionally, 43% said they use it for behavioral or mental health services. The Covid-19 pandemic spurred increased demand for virtual behavioral health visits, with 39% of outpatient telehealth visits being for mental health services between March and August 2021, according to KFF.

Survey Results

What types of telehealth services are offered by you / your organization? (of 117 respondents)

- Messaging or portal communication: 63%
- Video visits: 59%
- Phone visits: 48%
- Behavioral or mental health telehealth: 43%
- Specialty telemedicine (e.g., dermatology, cardiology, radiology): 31%
- Remote monitoring: 26%

When it comes to AI, respondents said they used the technology for documentation or note-taking, voice agent or chat capabilities, and medical imaging or diagnostics, all of which are most pervasive in hospital settings.

Matthew Huddle, managing director and partner in consulting firm BCG’s healthcare practice, told Healthcare Brew other common AI use cases he sees in the health systems he works with include appointment scheduling as well as supply and demand matching.

For example, by using AI to automate a hospital’s call center, a facility can potentially attract more patients. If a patient has to wait on hold for too long or they leave a message and never get a response, they’re likely to seek out care elsewhere, Huddle said. But an AI agent can schedule appointments on the first call.

When it comes to supply and demand matching, if a patient needs an MRI, for example, but the closest facility to them doesn’t have any availability for two weeks, an AI tool can see if there are other facilities that can get the patient in sooner, Huddle added.

Results are a bit mixed on AI’s ROI, however.

Typically, AI tools that focus on revenue cycle management prove to have the highest ROI, according to Huddle. The technology has “vastly increased the effectiveness of [revenue cycle management] teams and has led to increased collections,” he said.

“I think, as more of that automation and AI front door is added, you may see an increased utilization of virtual care.”

—Matthew Huddle, Managing Director at BCG

Ambient scribes also provide valuable ROI, though more in terms of provider and patient satisfaction versus a financial boost, Huddle said.

“It has been a huge satisfier for staff. It’s decreasing burnout; I think it can help decrease turnover rates,” he said. “It’s also been a satisfier for the patients, too, because suddenly you’re actually…having a conversation with the physician who is looking at you and giving you their full attention versus mostly spending the time typing on a computer.”

So far, AI hasn’t had much crossover with telehealth technology, according to Huddle. But some hospitals are utilizing an AI triage or AI symptom checker tool, which then may recommend a patient schedule a telehealth visit, he said.

“I think, as more of that automation and AI front door is added, you may see an increased utilization of virtual care,” Huddle said.


Chapter 3

Concerns over loss of skills or eroded trust

While AI is designed to build new efficiencies into the complicated world of healthcare, the tools can also pose some risks.

In Morning Brew Inc.’s survey, 38% reported they were worried about a loss of the “human touch.” That’s in addition to nearly half who reported concerns about cybersecurity and nearly one-third worried about patient privacy.

“We’re not prepared as individuals for the skill set that we need in order to interoperate effectively with machines,” Transcend’s Cardwell told us.

The skills that 127 industry professionals are most afraid of losing are diagnostic reasoning and critical thinking (61%), communication and 1:1 time with patients (50%), empathy and bedside manner (46%), and clinical intuition (46%). Ultimately, Cardwell says most patients still want a human for certain aspects of medical care.

Survey Results

What sort of skills are you most worried about providers losing as technology use increases?

- Diagnostic reasoning and critical thinking: 61%
- Communication and 1:1 time with patients: 50%
- Empathy and bedside manner: 46%
- Clinical intuition: 46%
- Physical exam and procedural skills: 29%
- Charting accuracy: 20%
- Research skills: 17%

Other concerns included loss of physical exam and procedural skills (29%), charting accuracy (20%), and research skills (17%).

Despite these concerns, healthcare professionals appear to support their coworkers, with 58% of 262 respondents reporting they strongly or mostly trust colleagues to use AI ethically. Previous studies found, however, that doctors using AI for clinical decision-making may be viewed negatively by their peers (though they still found the tech to be largely beneficial).

When it comes to trust in tech companies, 30% said they mostly trust Big Tech like Apple, Google, and Microsoft. In fact, there appears to be more industry trust in major medical technology companies like Medtronic, Abbott, and Intuitive Surgical, with 84% of respondents reporting they mostly or somewhat trust them.

Google Research’s Goel told us tech companies have a responsibility to build trust with users. “Every company has got to decide their own cost benefit,” he said.

Part of the concerns stem from fears that AI could take jobs in healthcare completely, with 48% of respondents reporting they strongly or somewhat agree with this worry. While the American College of Physicians hopes AI will augment, not replace, providers, KFF reported new tech is already coming for jobs like call center roles.

“We’re not prepared as individuals for the skill set that we need in order to interoperate effectively with machines.”

—Aimee Cardwell, chief information officer and chief information security officer in residence at Transcend

“There’s a ton of innovation opportunity left—whether it’s in screening tools, whether it’s in where to route care in different parts of the world, whether it’s improving the efficiency of the medical delivery with physicians and their extended care teams, or whether it’s for patients and their loved ones acting as their own advocates,” Goel said.


Chapter 4

Providing better training for AI

Hospitals are integrating AI tools into their workflows at breakneck speed. Roughly 66% of physicians said they used AI in 2024, up from just 38% in 2023, according to data from the American Medical Association, the country’s largest lobbying group for physicians and medical students.

But many healthcare workers say they aren’t getting an adequate amount of training when it comes to AI, according to Morning Brew Inc. research: 50% of respondents said they felt their training was lacking, and 64% said they received minimal or no training on AI tools.

Survey Results

Do healthcare professionals feel they are receiving an adequate amount of training on AI?

- Strongly or somewhat agree: 22%
- Neutral: 28%
- Somewhat or strongly disagree: 50%

Brandon Robinson, managing associate general counsel and chief legal counsel for the University of Arkansas for Medical Sciences, told Healthcare Brew it’s “absolutely a priority” for the Little Rock-based health system to train its workers on AI. But with more than 11,000 employees, “it’s a daunting task.”

So, the system created an AI governance committee—something that, according to research from the Healthcare Financial Management Association, just 18% of health systems have done. It meets monthly to oversee all AI tools and trainings within the health system. The committee’s workload has gotten so large it’s had to hire a dedicated staffer, Robinson said.

Over the past few months, the health system has rolled out AI scribes—a functionality more than half of survey respondents said they use AI for—which record and summarize patient–provider conversations, saving clinicians time otherwise spent charting. Before providers can use a scribe, they must take training to get a license. Once they receive their license, they get access to additional training on how to use the scribe and the legal requirements that go along with it, Robinson said.

In order to keep everyone properly trained, the system is rolling out a “limited number” of AI tools, according to Robinson.

“We have to be very diligent about monitoring what’s being used,” he said. “We not only have to ensure that the officially approved tools are being used properly, we also have to ensure that someone is not improperly going out and using their personal accounts.”

Since not all health systems have the resources for a large governance committee, it can be easier for some to focus on utilizing AI tools that are already integrated within EHRs, Robinson said. For instance, EHR giant Epic launched its own AI scribe tool called Art in February—a move that’s expected to shake up the healthcare ambient scribe market.

“That’s not just the most efficient from a regulatory standpoint because we have this sandbox, which is our EHR that we know is protected, but [because] we’re also already paying for a large EHR system,” he said. “It’s more economically advantageous for us to use it when it’s built into our system.”


Chapter 5

Addressing cybersecurity concerns

As more healthcare companies turn to AI, cybersecurity risks have become a top worry.

Morning Brew Inc. data found 48% of polled healthcare industry professionals ranked cybersecurity as one of their three biggest concerns within health tech—a higher share than for any other issue.

Agentic AI brings particular risks, Vijay Balasubramaniyan, CEO and co-founder of cybersecurity solutions company Pindrop Security, told Healthcare Brew. Hackers may be able to convince eager-to-please voice or chat AI agents to divulge private patient information like social security numbers. This info could be used to access health savings accounts or even bank accounts, he said.

Bad actors are even creating their own AI voice agents with a speaking voice that’s realistic enough to trick humans, Balasubramaniyan added.

“When we first started, what we were telling these healthcare organizations is: ‘Your AI bots are getting tricked,’” he said. “Now I think it is pretty much even Steven.”

There are steps patients, policymakers, providers, and technology vendors can take to protect patient data. But when an attack does occur, Morning Brew Inc. data suggests that, like it or not, hospitals may take most of the blame.

Of 160 respondents, 79% said they think hospitals and health systems are “very responsible” if a cyberattack occurs.

This makes it extra important for healthcare systems to ensure they aren’t vulnerable.

The “most critical piece” for healthcare organizations to prevent cyberattacks is to have detailed governance and oversight of how and where AI is used, Scott Gee, deputy national advisor for cybersecurity and risk at trade group the American Hospital Association, told Healthcare Brew.

“Understand where you’re using AI now, understand where your data is going, and understand going forward what each AI tool that you implement does,” Gee said. “It’s a matter of making sure you’re using reputable AI that is producing good-quality answers and that data is being used appropriately.”

Beyond hospitals, survey respondents believe regulators and policymakers (50% of 160) are “very responsible” for a cybersecurity lapse. 

Survey Results

Who is responsible for preventing cybersecurity incidents? (share saying each group is “very responsible”)

- Hospitals and health systems: 79%
- Technology vendors: 74%
- Individual providers and staff: 59%
- Regulators or policymakers: 50%
- Patients: 24%

While no federal AI regulations are currently in place, there are tools available to help. Organizations such as nonprofit healthcare accreditor URAC offer accreditation for safe, ethical AI programs.

Pindrop also creates technology that helps healthcare organizations detect whether the voice on the other end of the line is a hacker bot, using information like the voice’s sound signature, the call’s originating location, and what type of device it’s coming from.

“We’re heading into a brave new world where there’s a lot of blurring of the lines of human identity. And I think this is where you need great technology to bring back trust.”

—Vijay Balasubramaniyan, CEO and co-founder of cybersecurity solutions company Pindrop Security


Chapter 6

Data privacy and tech regulation

Many in the healthcare industry want the government to protect patients from AI’s potential harms.

Morning Brew Inc. survey data found 83% believe additional regulation is needed when it comes to the use of AI in healthcare, and 71% say regulatory efforts “are not moving fast enough.”

The majority of states have enacted laws or adopted resolutions around AI both within healthcare and beyond, per the National Conference of State Legislatures—something 50% of 160 respondents said they supported. However, President Trump has made moves to reduce states’ ability to regulate AI, including a Dec. 11 executive order that told federal agencies to identify “onerous” state AI laws and propose a federal AI standard that would overrule state restrictions.

So far, none of that has happened.

Though the federal government hasn’t established an AI standard specifically, it does have a strict standard for protecting patient health information: the Health Insurance Portability and Accountability Act (HIPAA). The problem is HIPAA rules were created in the 1990s and 2000s, when technology was vastly different, Vasiliki Rahimzadeh, assistant professor of medical ethics and health policy at Baylor College of Medicine, told Healthcare Brew.

“The explosion in health AI development has further exposed fault lines in current US privacy regulations for adequately protecting personal data.”

—Vasiliki Rahimzadeh, assistant professor of medical ethics and health policy at Baylor College of Medicine

“The explosion in health AI development has further exposed fault lines in current US privacy regulations for adequately protecting personal data,” Rahimzadeh said.

For instance, consumer data from wearables, social media, and geolocation isn’t protected under HIPAA, Rahimzadeh explained. HIPAA is also limited to certain types of organizations. Though it covers healthcare providers, many AI developers are private companies that don’t fall under HIPAA’s purview, she added.

Survey Results

Do healthcare professionals support federal regulation, in addition to HIPAA, on protecting patient data? (of 160 respondents)

- Strongly support: 37%
- Somewhat support: 33%
- Neutral: 8%
- Somewhat oppose: 4%
- Strongly oppose: 2%
- Not familiar enough to have an opinion: 16%

The majority of those surveyed by Morning Brew Inc. (70% of 160) supported the idea of the federal government creating patient privacy rules that go beyond HIPAA.

“I think the overall goal should be to elevate protections for Americans everywhere, while enabling AI research and development that advances national interests at the same time,” Rahimzadeh said.


Navigate the healthcare industry

Healthcare Brew covers pharmaceutical developments, health startups, the latest tech, and how it impacts hospitals and providers to keep administrators and providers informed.

By subscribing, you accept our Terms & Privacy Policy.
