February 15, 2019
by Tina Arnoldi
Facebook uses software to predict whether a user is likely to die by suicide based on their activity. If a user creates content indicating that individual could be at high risk for suicide, Facebook notifies local police about the risk, and the police may conduct a wellness check, with Facebook's assistance in locating that person. Facebook speaks positively about these efforts. At the end of 2018, Mark Zuckerberg stated that Facebook was able to help first responders reach 3,500 people around the world who needed help.
Issues around Facebook and privacy are already a concern for people, but its suicide prediction program has flown somewhat under the radar. Although the goal of attempting to save lives is a good one, the program carries potential risks, including false positives that lead to unnecessary hospitalizations, as well as privacy concerns.
But if Facebook is going to screen for suicide anyway, should it make its algorithm public, so that outside mental health care professionals and people without a vested interest in Facebook could help identify potentially suicidal people? Caleb Backe with Maple Holistics believes "transparency in Facebook's suicide prevention algorithm could help other experts to identify potentially suicidal people. There's reason to believe this kind of transparency is a crucial precursor to determining if someone is at risk. Nearly all communication now takes place on social media, which means that finding a way to make the connection between what's written and human experience is becoming increasingly necessary."
It is true that much communication happens on social media, and people often share there without a filter. Many do not differentiate between personal thoughts shared with a close friend or mental health provider and thoughts posted publicly. People may post on Facebook because it is the only way they know how to reach out. In a day-to-day, in-person situation, they may simply not know how to ask for help.
The flip side of this, as Backe noted, is that "there are those who do it for attention." Unfortunately, this generates false positives and diverts care away from people who really need it. Backe feels this "might lead both types [genuine cries for help vs those seeking attention] to hesitate before expressing their mental health concerns. The first because they don't want to be stopped, and the second because they want attention, but not that much attention. There's a fine line between your friends knowing where your head is at and seeking real medical attention."
Holly Zink, a social media expert who helps people struggling with digital addiction, acknowledges that although people post freely and often without a filter on Facebook, their posts are not a true reflection of their lives. "A person's Facebook profile is a representation of what they want the public to see, not their reality," says Zink. "All too often do you see someone post about how happy they are when actually, they are suffering from a mental health issue or having problems at home. You might get a glimpse into who someone is by looking at their profile, but usually, it doesn't tell the whole truth."
What Facebook does with user data became especially high profile with the Facebook-Cambridge Analytica scandal of 2018. As new information comes to light, mental health providers and advocates should stay aware of Facebook and its privacy changes. Facebook is now using data for more than political ends; identifying potential suicide risk is a task traditionally left to mental health professionals. While Zink maintains that "Facebook should absolutely not be collecting someone's data without their permission, no matter the situation," she acknowledges that "the collection of data regarding mental health issues could help save someone's life and prevent them from possibly dying from suicide." There is no easy answer on how to approach mental health and social media use. The hope is that research and conversation will continue, with independent experts weighing in.