November 5, 2018
by Tina Arnoldi
In a recent op-ed for The Washington Post, Dr. Adam Hofmann addressed the use of artificial intelligence (AI) to diagnose mental health conditions. The picture that AI captures may reflect how people feel at a point in time, but not their general state. And tools that use data to diagnose may be developed by individuals without mental health backgrounds. Additional complications stem from how we communicate: electronic conversations with emojis, words, and actions are very much in the moment and not a true reflection of a person. But since so many people spend their time interacting with the digital world, I asked him whether AI is a tool mental health professionals can use in some capacity.
“It has been well-documented in fact that the socionome is very much a curated depiction of reality,” said Hofmann. “The argument that an algorithm might be able to pick out a subtle pattern not visible to a human observer is a tempting one, but as usual, these diagnostic tools are limited by the signal-to-noise ratio.” I agree that the amount of data available from technology is overwhelming, with plenty of noise, but I wondered whether there was still potential for diagnosing mental illness.
Hofmann said, “Algorithms looking for a crude, almost brute-force description of a clinical term such as depression or anxiety might miss subtler representations of such which might be circumstantial or context- or culture-specific.” There are many cues a person gives through their demeanor that can be identified only by another person. Caleb Backe with Maple Holistics agreed, asking how an app could “detect if a person was tired due to a prolonged physical sickness or due to depression?” And although an algorithm may recognize a word, it will not necessarily know what it means. If someone says “I can’t do this anymore” to a counselor or physician, it can carry a different meaning than the same words spoken out of frustration with workplace bureaucracy.
It is also much easier for people to choose how to present themselves online than in person. Facebook is an excellent example, where people create cyber-airbrushed versions of themselves. Hofmann sees this as an additional concern. “How much of one's output on the internet and social media is virtue-signaling, and how much represents virtue? How much is true to self and how much is code-switching? I suspect that by its very nature, the social media output of people is likely to be far less useful a diagnostic tool than traditional, personal and confidential interviews in a patient-caregiver setting.”
For the sake of argument, I wondered whether regulating algorithm-based tools could open the door to confidently using them for early intervention. Since computers can comb through data much more quickly than people, perhaps they can at least help mental health professionals screen patients.
Hofmann does not believe we are there yet. “The best examples of AI and algorithm-based tools are those in which the patient or user self-identifies as needing help. That in and of itself is profoundly important. A successful example of this is the Crisis Text Line, which was deployed as a first-line triage service on the backbone of a traditional counseling service. It uses the strengths of both tools, AI and clinical care.”
Conversations on Twitter around this topic indicate that individuals agree with this assessment: tools are beneficial when users recognize the need for help. As one user said, “if you aren't officially diagnosed, the app still guides you through and gives you a (preliminary) diagnosis anyway by letting you self-check for symptoms.” Backe concedes some value in this if an “app will alert both the user as well as the user’s doctor and provide a way for them to communicate.” Perhaps the role of mental health apps is to provide early warnings rather than formal diagnoses?
Hofmann is still hesitant about apps playing a role in identifying mental illness. “Unless there are significant changes in diagnostic accuracy, which for aforementioned reasons I doubt in the near future, the most successful application of this type of technology will occur when new technologies are wedded to traditional pathways of clinical care and are thus subject to the societal pact and governance rules of the latter. This will ensure minimal risk and maximal reward in the short- to medium-term.” Until that time, we hope anyone looking to algorithms for assistance around mental health does so with care.
Tina Arnoldi, MA is a marketing consultant and freelance writer in Charleston SC. Learn more about her and connect at TinaArnoldi.com