
‘Shadow AI’ continues to lurk in healthcare settings

A recent survey found close to a fifth of workers admitted to using unapproved tools.


At a time when tech companies want to make AI tools as standard-issue as stethoscopes, the technology is seemingly everywhere in the healthcare industry. But some of its use still remains in the shadows, so to speak—ungoverned by workplaces and rife with security and patient safety risks, experts said.

This so-called “shadow AI” remains problematic, according to a recent survey from professional software provider Wolters Kluwer: Nearly a fifth (17%) of more than 500 healthcare workers admitted to tapping unauthorized AI in the workplace. And two in five said they’d encountered such a tool but didn’t use it.

Alex Tyrrell, SVP and CTO of Wolters Kluwer’s health division, told us healthcare workers aren’t necessarily breaking the rules intentionally; they may not have a clear idea of which tools are allowed, or of how tech companies use data entered into AI systems for training purposes.

“As these tools become more ubiquitous, as we become familiar with them and use them in our daily lives, there’s the potential to kind of blur the line when you’re in a workplace setting, particularly in a regulated environment,” Tyrrell told Morning Brew.

It’s an extension of a problem that existed well before the AI boom, that of “shadow IT” more broadly, Tyrrell said. Online tools like browser extensions or seemingly innocuous software can create unforeseen data and privacy risks when not overseen by proper channels. It doesn’t help that every workplace platform has now introduced myriad new AI features that may or may not be clearly labeled as such, Tyrrell said.

“Suddenly there’s a new button, and that new button may be AI-driven, and it may not have gone through the same vetting process, it may not be monitored as closely,” he said. “And that’s almost another new avenue or new vector for the shadow AI.”

Privacy risk

Andy Fanning, co-founder and CEO of healthcare company Optura, said data privacy is the chief concern when it comes to an industry as sensitive as healthcare. While companies like OpenAI say they have processes for removing personal information before using interactions in any kind of training, the actual details of training techniques and processes tend to be tightly held trade secrets.

“If you upload 100 claim files into ChatGPT base—just your normal ChatGPT—they’re training on that data,” Fanning told us.

While false information from AI can also pose a risk to patient safety, both Fanning and Tyrrell said this was less of a problem. Healthcare workers using these tools are likely aware that AI is prone to hallucinations and vet output accordingly, they said.

Shadow AI most commonly comes into play when workers see potential for an AI tool, but their employer doesn’t yet have an option available, according to the Wolters Kluwer survey. Nearly half the respondents said they turned to AI to hasten workflows, and a third said their workplace lacked desired approved tools.

Navigate the healthcare industry

Healthcare Brew covers pharmaceutical developments, health startups, the latest tech, and how it impacts hospitals and providers to keep administrators and providers informed.

“Shadow technology at its core is an unmet need,” Fanning said. “The cause really is that we’ve been limited on technology budgets for years. There’s a lot of technical debt underneath. It’s pretty complicated to implement new things. They’re really just trying to keep the lights on in most of these organizations.”

Policies in place

Jessica Lamb, a partner at McKinsey focused on healthcare, said as sanctioned adoption has increased, shadow AI usage has become less of a problem, and many client companies now have policies in place to channel workers into proper avenues for tapping AI.

“At this point, I would say most organizations have some sort of enterprise large language model that they are comfortable with, and have been a bit more clear about the guardrails of what you can and can’t do,” Lamb said. “So to me, it’s much less of a problem now, especially because both those guardrails have been established, they’ve been generally pretty widely articulated, and there is access.”

Tyrrell recommends that IT teams perform regular audits for browser extensions, integrations, and applications, “especially anything that’s capable of performing data processing, where [protected health information] may be exposed.” Employees should also understand that IT usually approves software and tools for one particular use case rather than granting blanket permissions, he said.
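As a rough sketch of what such an audit could look like in practice, an IT team might scan installed browser extensions and flag any whose manifests request permissions that can read or transmit page data. The permission list, directory layout, and manifest fields below are illustrative assumptions in the style of Chromium-based extension manifests, not a procedure described in the article:

```python
import json
from pathlib import Path

# Illustrative set of permissions that suggest an extension can read or
# send page content -- where PHI might be exposed. Tune to your policy.
RISKY_PERMISSIONS = {"<all_urls>", "webRequest", "tabs", "clipboardRead"}

def audit_extensions(extensions_dir):
    """Walk a Chrome-style extensions directory and flag manifests that
    request permissions capable of exposing page data."""
    findings = []
    for manifest_path in Path(extensions_dir).rglob("manifest.json"):
        try:
            manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # unreadable manifest: skip here, or queue for manual review
        requested = set(manifest.get("permissions", []))
        requested |= set(manifest.get("host_permissions", []))
        risky = requested & RISKY_PERMISSIONS
        if risky:
            findings.append({
                "name": manifest.get("name", "(unnamed)"),
                "path": str(manifest_path.parent),
                "risky_permissions": sorted(risky),
            })
    return findings
```

A real deployment would run something like this fleet-wide via endpoint management and cross-check the flagged extension IDs against the organization’s approved-software list, consistent with Tyrrell’s point that approvals are scoped to specific use cases.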

The landscape is also changing as more tech companies set their sights on the healthcare industry. Anthropic, OpenAI, and others have rolled out tools aimed at both providers and potential patients in recent months.

While these tools are tailored to healthcare regulations and privacy concerns, Tyrrell said they have the potential to introduce “dramatic confusion in the landscape.”

“I always emphasize that it’s really a mapping process. There can be tools that are really good for certain efficiency, certain repetitive tasks, certain administrative burdens in really any industry or any organization,” Tyrrell said.

“And then there’s the other side, where you really have to be very specific about how you’re going to deploy this tool and what its specific purpose and intent is, and make sure that that is well-governed as well. It’s not just, ‘We got a license to an approved tool; it went through sourcing and procurement, we looked at the licensing terms, it seems safe.’ You also have to think about, ‘What is the end user? What is the role? Who is the person that’s going to be using this tool?’”

Tyrrell noted that as more AI tools emerge in healthcare settings, it “becomes almost like a combinatorics problem.”

“There are just so many things to keep track of,” he said.
