At a time when tech companies want to make AI tools as standard-issue as stethoscopes, the technology is seemingly everywhere in the healthcare industry. But some of its use still remains in the shadows, so to speak: ungoverned by workplaces and rife with security and patient safety risks, experts said.

This so-called “shadow AI” remains problematic, according to a recent survey from professional software provider Wolters Kluwer: Nearly a fifth (17%) of the more than 500 healthcare workers surveyed admitted to using unauthorized AI in the workplace, and two in five said they’d encountered such a tool but didn’t use it.

Alex Tyrrell, SVP and CTO of Wolters Kluwer’s health division, told us healthcare workers aren’t necessarily breaking the rules intentionally; they may not have a clear idea of which tools are allowed or how tech companies use data entered into AI systems for training purposes.

“As these tools become more ubiquitous, as we become familiar with them and use them in our daily lives, there’s the potential to kind of blur the line when you’re in a workplace setting, particularly in a regulated environment,” Tyrrell told Morning Brew.

Read more on this risky AI use here.—PK