
How Can Artificial Intelligence Help With Suicidal Ideation?

October 18, 2022 by Patricia Tomasi

A new study published in the Journal of Psychiatric Research looked at the performance of machine learning models in predicting suicidal ideation, attempts, and deaths.

“My study sought to quantify the ability of existing machine learning models to predict future suicide-related events,” study author Karen Kusuma told us. “While there are other research studies examining a similar question, my study is the first to use clinically relevant and statistically appropriate performance measures for the machine learning studies.”

The utility of artificial intelligence has been a controversial topic in psychiatry and in medicine more broadly. Some studies have demonstrated better performance with machine learning methods than with traditional approaches, while others have not. Kusuma began the study expecting that machine learning models would perform well.

“Suicide is a leading cause of years of life lost across most of Europe, central Asia, southern Latin America, and Australia (Naghavi, 2019; Australian Bureau of Statistics, 2020),” Kusuma told us. “Standard clinical practice dictates that people seeking help for suicide-related issues need to be first administered with a suicide risk assessment. However, research has found that suicide risk predictions tend to be inaccurate.”

Only five per cent of people ordinarily classified as high risk died by suicide, while around half of those who died by suicide would normally be categorised as low risk (Large, Ryan, Carter, & Kapur, 2017). Unfortunately, there has been no improvement in suicide prediction research in the last fifty years (Franklin et al., 2017).

“Some researchers have claimed that machine learning will become an efficient and effective alternative to current suicide risk assessments (e.g. Fonseka et al., 2019),” Kusuma told us, “so I wanted to examine the potential of machine learning quantitatively, while evaluating the methodology currently used in the literature.”

Researchers searched four research databases and identified 56 relevant studies. Of these, 54 models from 35 studies provided sufficient data and were included in the quantitative analyses.

“We found that machine learning models achieved a very good overall performance according to clinical diagnostic standards,” Kusuma told us. “The models correctly predicted 66% of the people who would experience a suicide-related event (i.e. ideation, attempt, or death), and correctly predicted 87% of the people who would not experience a suicide-related event.”
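In clinical diagnostic terms, those two figures are the models' pooled sensitivity and specificity. As a minimal sketch (the counts below are invented to reproduce the quoted percentages, and are not data from the study), here is how the two measures are computed from a confusion matrix:

```python
# Sensitivity and specificity from a confusion matrix.
# Counts are illustrative only, chosen to match the quoted percentages.
true_positives = 66    # people with a suicide-related event the model flagged
false_negatives = 34   # people with an event the model missed
true_negatives = 87    # people without an event correctly ruled out
false_positives = 13   # people without an event incorrectly flagged

sensitivity = true_positives / (true_positives + false_negatives)  # 0.66
specificity = true_negatives / (true_negatives + false_positives)  # 0.87

print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```

Sensitivity rewards catching true cases; specificity rewards correctly clearing non-cases. In a screening context, the trade-off between the two matters as much as either number alone.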

However, there was a high prevalence of risk of bias in the research, with many studies processing or analysing the data inappropriately. This isn’t a finding specific to machine learning research, but a systemic issue caused largely by a publish-or-perish culture in academia.

“I did expect machine learning models to do well, so I think this review establishes a good benchmark for future research,” Kusuma told us. “I do believe that this review shows the potential of machine learning to transform the future of suicide risk prediction. Automated suicide risk screening would be quicker and more consistent than current methods.”

This could identify many people at risk of suicide without requiring them to reach out proactively. However, researchers need to be careful to minimise data leakage, which would inflate performance measures. Furthermore, many iterations of development and validation are needed to ensure that the machine learning models can predict suicide risk in previously unseen populations.
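Data leakage occurs when information from the evaluation data seeps into model training, for instance when preprocessing statistics are computed on the full dataset before it is split. Below is a minimal sketch of the leak-free pattern, using scikit-learn and synthetic placeholder data (the study does not prescribe any particular toolchain):

```python
# Guarding against a common form of data leakage: fit all preprocessing
# on the training split only, inside a pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a clinical feature matrix and binary outcome.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Split before any preprocessing, so test-set statistics cannot
# influence training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# The pipeline fits the scaler on X_train alone and reuses those
# parameters on X_test at evaluation time.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```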

“Prior to deployment, researchers also need to ascertain if artificial intelligence would work in an equitable manner across people from different backgrounds,” Kusuma told us. “For example, a study has found their machine learning models performed better in predicting deaths by suicide in White patients, as opposed to Black and American Indian/Alaskan Native patients (Coley et al., 2022).”

That isn’t to say that artificial intelligence is inherently discriminatory, Kusuma explained, but there is less data available for minorities, which often means lower performance in those populations. It’s possible that models need to be developed and validated separately for people of different demographic characteristics.
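One practical way to act on this is to report performance separately for each demographic group rather than only in aggregate, so that disparities surface before deployment. A brief sketch with hypothetical labels and predictions:

```python
# Per-group sensitivity check: does the model catch true cases equally
# well across demographic groups? All values here are illustrative.
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 0])  # actual outcomes
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 0])  # model predictions
group  = np.array(["A", "A", "A", "A", "A",
                   "B", "B", "B", "B", "B"])        # demographic label

for g in np.unique(group):
    positives = (y_true == 1) & (group == g)        # true cases in group g
    sensitivity = (y_pred[positives] == 1).mean()
    print(f"group {g}: sensitivity {sensitivity:.0%}")
```

A large gap between groups on a check like this would argue for the separate development and validation Kusuma describes.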

“Machine learning is an exciting innovation in suicide research,” Kusuma told us. “An improvement in suicide prediction abilities would mean that resources could be allocated to those who need them the most.”

About the Author

Patricia Tomasi

Patricia Tomasi is a mom, maternal mental health advocate, journalist, and speaker. She writes regularly for the Huffington Post Canada, focusing primarily on maternal mental health after suffering from severe postpartum anxiety twice. Patricia is also a Patient Expert Advisor for the North American-based Maternal Mental Health Research Collective and is the founder of the online peer support group, Facebook Postpartum Depression & Anxiety Support Group, with over 1,500 members worldwide. Blog: www.patriciatomasiblog.wordpress.com
Email: tomasi.patricia@gmail.com

