First-of-its-kind study uses AI to diagnose mental distress in healthcare workers

The study, led by NYU Grossman School of Medicine researchers, indicates that similar technology can be used to help medical workers self-diagnose anxiety and depression symptoms.

Hospital ICUs. Lack of sleep. Mood swings.

Researchers found that these keywords helped artificial intelligence (AI) effectively detect distress during healthcare workers’ therapy sessions in the earliest months of the Covid-19 pandemic, according to a study published October 17 in the journal JMIR AI.

The findings, from researchers at New York University (NYU) Grossman School of Medicine, could help healthcare workers address burnout before it reaches critical levels.

“Finally we have a thermometer for anxiety and depression,” Matteo Malgaroli, the study’s lead author and a research assistant professor at NYU Langone Health’s psychiatry department, told Healthcare Brew.

How it works

In the study, a natural language processing (NLP) algorithm scanned psychotherapy session transcripts from virtual behavioral health services provider Talkspace to highlight key themes and common phrases. The study is the first to use NLP—an AI-powered technology that helps computers process written and spoken language like humans—to identify psychological distress markers in healthcare workers’ speech, according to Malgaroli.
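
The paper’s actual pipeline isn’t reproduced in this article, but a minimal sketch of the general idea—surfacing common words and phrases across transcripts—might use TF-IDF scoring. The toy transcripts, the library choice (scikit-learn), and the output below are all illustrative assumptions, not the study’s method:

```python
# Hypothetical sketch of keyword extraction from therapy transcripts.
# NOT the study's actual pipeline; assumes scikit-learn and made-up text
# to illustrate surfacing common phrases with TF-IDF scoring.
from sklearn.feature_extraction.text import TfidfVectorizer

transcripts = [
    "I barely sleep after shifts in the hospital ICU",
    "The hospital ICU has been overwhelming and I get mood swings",
    "Lack of sleep and mood swings are making work harder",
]

# Score single words and two-word phrases, ignoring common stop words.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
tfidf = vectorizer.fit_transform(transcripts)

# Average each term's TF-IDF weight across transcripts and rank the top terms.
scores = tfidf.mean(axis=0).A1
terms = vectorizer.get_feature_names_out()
for term, score in sorted(zip(terms, scores), key=lambda t: t[1], reverse=True)[:5]:
    print(f"{term}: {score:.3f}")
```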

The researchers focused on each participant’s first three weeks of therapy, which took place between March and July 2020, for 820 physicians, nurses, and emergency medical staff, along with sessions for a matched group of 820 nonmedical workers.

They then tied the AI-detected keywords to specific mental health diagnoses and found that healthcare workers who spoke to their therapist about working in a hospital unit, lacking sleep, or experiencing mood issues were more likely to receive anxiety and depression diagnoses than healthcare workers who didn’t discuss those topics. Notably, nonmedical professionals who experienced work stress were not found to have the same acute mental health issues.
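
The article doesn’t spell out the study’s statistical model, but as a hypothetical illustration of the general approach—testing whether keyword mentions predict a diagnosis—a logistic regression over invented keyword-presence features might look like this:

```python
# Hypothetical sketch of linking detected keywords to a diagnosis label.
# The study's actual model isn't reproduced here; this shows one
# conventional approach (logistic regression on binary keyword-presence
# features) using entirely made-up data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [mentions_icu, mentions_sleep, mentions_mood]; rows: patients.
X = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
    [0, 1, 1],
    [1, 1, 1],
    [0, 0, 1],
])
# 1 = received an anxiety/depression diagnosis (invented labels).
y = np.array([1, 1, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)
# A positive coefficient suggests mentioning the keyword raises the
# odds of a diagnosis in this toy data.
for name, coef in zip(["ICU", "sleep", "mood"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```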

“We kind of have the intuition, right, that people who were working as healthcare workers during Covid are probably going to have burnout, depression, but this is a way to objectively show that,” Malgaroli said. “The reason why we included this match[ed] sample of non-healthcare workers was to show that these people were stressed about it, they had some work disruptions, they had some general life disruptions, but it didn’t affect them in the same way as the healthcare workers.”

The therapy sessions took place during the height of New York City’s first wave of the Covid-19 pandemic, when upwards of 700 people died a day. Researchers analyzed transcripts from individuals across 49 states, with the largest concentrations of healthcare participants coming from New York (13.1%) and California (13%).

Privacy breakdown

Talkspace, which provided the data for the analysis but wasn’t involved in the study, collected information about participants—like gender, age, and location—during the recruitment process, Mary Potter, Talkspace’s chief privacy officer, told Healthcare Brew. Privacy and security personnel reviewed the transcripts to “ensure successful deidentification,” which is required for organizations to share data, she said.

“This study was performed under the direct oversight of the Institutional Review Board (IRB) at NYU. The NYU IRB reviewed how Talkspace deidentifies data using the ‘safe harbor’ method defined by Health and Human Services and HIPAA,” Potter said. “After that review, NYU determined that the individual that is the subject of the transcript could not be identified and additional consent was not needed to protect the participants.”
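
The safe harbor standard calls for stripping 18 categories of identifiers—names, dates, geographic detail, contact information, and so on—before data can be shared. As a rough, hypothetical illustration of one small piece of that process (not Talkspace’s actual pipeline, which would cover far more identifier types), a regex-based redaction pass might look like:

```python
# Hypothetical sketch of one step of "safe harbor" deidentification:
# redacting direct identifiers from transcript text with regexes.
# Real HIPAA safe harbor removes 18 identifier categories; this toy
# example covers only a few for illustration.
import re

def redact(text: str) -> str:
    # Dates such as 03/15/2020 or 3-15-20.
    text = re.sub(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b", "[DATE]", text)
    # Phone numbers such as 212-555-0199.
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", text)
    # Email addresses.
    text = re.sub(r"\b\S+@\S+\.\w+\b", "[EMAIL]", text)
    return text

print(redact("Reachable at 212-555-0199 or jane@example.com since 03/15/2020"))
```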

Potter added that the technology has the potential to help providers identify “‘blind spots’ they may miss over the course of care.”

Practical implications

Malgaroli hopes to get the technology into the hands of medical professionals soon.

“The general idea is to have possibly a passive measurement, right? Because when we think about burnout, the time that somebody comes to a doctor and says, ‘I think [I have] burnout,’ it’s a bit too late,” he said.

Not only would a digital tool help individuals manage their mental health, but it could also help the professionals treating them.

“Everyone’s excited about the opportunities to use AI. The potential is great,” he said. “A lot of doctors [are worried] this technology will be used instead. Instead of replacement, what this technology can do is assist […] some of the most routine and standardized parts of our job.”

He added that “this will not only be a way to remove a lot of clinical burden [and] allow clinicians to truly engage in what they care about—which is the patient-to-doctor relationship—but also it brings standardization.”

Still, more research is needed to ensure a digital screening tool is culturally competent. If the AI is trained mostly on data from a certain demographic—say, patients comfortable with virtual therapy, rather than those who lack the digital savvy to know it exists at all—that group will be overrepresented in what the model learns. Other efforts led by Malgaroli, who hails from Italy, include expanding the work “to be culturally inclusive,” such as testing how well AI can suggest diagnoses when the speaker has an accent.

“What is the efficacy of this [application] across languages, across certain characteristics?” he asked, citing potential questions for future research. “There needs to be equity; there needs to be representation.”
