To save suicidal teens, listen to their voice

High-tech acoustic software can help clinicians identify young people at risk and step in before lives are lost

April 28, 2016 Andrew Good

Suicide rates among Americans are on the rise, underscoring the need for early intervention. USC researchers found that high-tech acoustic software can identify at-risk teens, helping clinicians intervene before it’s too late.

The difference between suicidal and non-suicidal patients can be as subtle as the breathiness of their speech and the tension or pitch of their voices.

Through speech analysis of teen patients, researchers at USC and Cincinnati Children’s Hospital Medical Center (CCHMC) have identified specific vocal cues as indicators of suicide risk. Researchers were surprised to realize that some of the cues they had identified were non-verbal.

“If you want to assess a person’s risk to attempt suicide, it’s important to look at what they say, as well as how they say it,” said lead author Stefan Scherer of the USC Institute for Creative Technologies.

The study was published in IEEE Transactions on Affective Computing in January.

The third-highest cause of death

Suicide is the third-highest cause of death for American teens; identifying risk is especially critical in this group because its members are often hesitant to let others know that they need help. It’s also a key problem among military veterans, who worry that seeking out therapy or admitting to mental health problems might stigmatize them.

“We want to develop software and algorithms to help clinicians objectively measure these changes or have a ‘warning light’ as to suicide risk,” Scherer said.

The researchers used software to analyze the voices of 60 patients at the Cincinnati hospital — 30 of whom were suicidal. The patients, ages 13 to 18, had been interviewed in 2011 for a study by John Pestian, a professor of pediatrics, psychiatry and biomedical informatics at the University of Cincinnati and CCHMC. A trained social worker interviewed them for their background and family history, asking the patients open-ended questions about their fears, secrets and emotional struggles.

Scherer’s team analyzed the interviews using computer software that identified both verbal and non-verbal cues. Verbal cues such as mentions of death, repeated references to the past and heavy use of first-person pronouns (I, me, myself) were all common in the speech of suicidal patients. But it was the non-verbal cues that surprised the researchers.
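One of the verbal cues above, heavy first-person pronoun use, can be illustrated with a simple word-ratio feature over an interview transcript. This is only a toy sketch for illustration; it is not the software the researchers actually used:

```python
import re

# First-person singular pronouns the sketch looks for (illustrative list)
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_ratio(transcript: str) -> float:
    """Fraction of words in a transcript that are first-person singular pronouns."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FIRST_PERSON)
    return hits / len(words)

print(round(first_person_ratio("I feel like nobody understands me"), 2))  # → 0.33
```

A real system would compute many such lexical features alongside acoustic ones and feed them to a trained classifier, rather than relying on any single ratio.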

The acoustic differences between suicidal and non-suicidal subjects were especially significant: suicidal subjects had breathier speech, differences in pitch and other subtle changes in the tenseness or harshness of their voices.
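To give a sense of how one of these acoustic features, pitch, can be measured from raw audio, here is a crude autocorrelation-based pitch estimator. It is a simplified sketch on a synthetic tone, not the study’s actual analysis pipeline:

```python
import math

def estimate_pitch(samples, sample_rate, fmin=75.0, fmax=400.0):
    """Crude pitch estimate: find the lag with maximum autocorrelation
    within the plausible range of human fundamental frequencies."""
    n = len(samples)
    lag_min = int(sample_rate / fmax)   # shortest period to consider
    lag_max = int(sample_rate / fmin)   # longest period to consider
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, min(lag_max, n - 1)):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# 100 ms of a 200 Hz sine tone sampled at 8 kHz
sr = 8000
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(sr // 10)]
print(round(estimate_pitch(tone, sr)))  # → 200
```

Production speech-analysis tools use far more robust estimators and track pitch, breathiness and voice quality frame by frame, but the underlying idea of turning a waveform into numeric features is the same.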

All of these cues are significant precisely because they are non-verbal, Scherer said.

Open-ended questions

Non-verbal cues are much easier to elicit because they don’t depend on what the patient is asked. In the case of the clinical interviews, some of the open-ended questions weren’t even specifically about emotion: patients were asked about their sleep habits and Internet usage, as well as more direct questions about past traumas.

Scherer characterized the study as part of a wave of innovative mental health research that uses technology in new ways.

“Technology brings a different set of eyes and ears to the field so doctors can focus on their actual work with an individual,” Scherer said. “Doctors don’t have a lot of time, and they’re trying to assess an individual’s risk. It enables them to go back into the nitty-gritty details of what the behavior was really like with an individual and maybe make better assessments.”

It also marks the latest study by USC researchers to use voice analysis to detect signs of psychological trouble.

Last year, researchers at the USC Viterbi School of Engineering developed algorithms that analyzed conversations between couples, predicting whether a couple would stay together more accurately than relationship experts could.

The study was sponsored by the U.S. Army Research Laboratory and the CCHMC’s Innovation Fund.