Artificial Intelligence Bias and Mental Health Implications

December 14, 2018 09:35 by Tina Arnoldi

Machines do what we tell them to do, so are we not responsible for bias? Headlines such as “Amazon scraps secret AI recruiting tool that showed bias against women” imply that the demographics of the team building a tool are very relevant to the people who use it. If machines can discriminate, what are the considerations around bias and AI fairness when it comes to mental health?

When asked this question, Cody Swann of Gunner Technology first points out that Amazon didn’t scrap its program "because the programmers - or Amazon - were biased, per se, but rather due to the data from that period. Basically, the team fed tons of data on current and former employees into a system along with the employee ratings."

Once the data was entered, the system compared new applicants and potential recruits against the existing rating system. As Swann noted, the data was skewed because in the past “men were rated more highly than women.” The machine was doing exactly what it was told to do; the bias came from the data, not from deliberate programming.

Swann continues, "There weren’t as many women working for Amazon over this period, so the data set had more highly rated men than women. This is a problem of historical data creating a self-fulfilling prophecy AI."
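
A minimal sketch, using made-up data rather than anything from Amazon's system, shows how this plays out: train a model on historical ratings that favored men, and it will score an otherwise identical woman lower without any programmer telling it to.

```python
# Illustrative sketch only -- synthetic data, not Amazon's system.
# Shows how a model trained on historically skewed ratings
# learns to penalize a protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic historical employees: skill is what *should* matter,
# gender (0 = man, 1 = woman) should not.
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)

# Historical ratings were skewed: equally skilled women were
# rated "highly" less often than men.
rated_highly = (skill + 0.8 * (gender == 0) + rng.normal(scale=0.5, size=n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, gender]), rated_highly)

# Two candidates with identical skill who differ only in gender:
candidates = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(candidates)[:, 1])  # the woman scores lower
```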

Claire Whittaker, a tech blogger, data scientist and product manager at Amazon, agrees with Swann’s viewpoint. "An algorithm can only learn from the data you give it. With so many of us being unaware of our unconscious bias, combined with a known challenge in bringing diverse voices into tech, it is inevitable that issues [such as those seen in the media] occur. With AI able to process data at a significantly faster rate than a person, these biases can rapidly present and scale in ways unintended by the developers."

But even self-learning AI needs examples to learn from. People are still the ones supplying the data, stated Alan Majer of Good Robot. “It's a human that does the labeling, so in a resume screening tool you might say this candidate is desirable or this candidate is not desirable. So your bias may exist in how you label them. Many times you'll try to get around these biases by looking at outcomes instead. You'll say, look, this employee stuck around and was very successful, but this one wasn't. But in that case you might discover that your organization has a systematic gender bias. Maybe women tend not to stick around because of that bias, and of course the AI will simply learn that organizational bias and reinforce it in your hiring practices.”
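
Majer's point about outcome-based labels can be made concrete with a small hypothetical example: if women leave an organization earlier because of workplace bias, then even a neutral-sounding label such as "stayed at least two years" quietly inherits that bias, and any model trained on it will too.

```python
# Hypothetical sketch of "outcome-based" labels: even without a human
# rater, a label like "successful hire" inherits organizational bias
# if, say, women leave earlier because of that bias.
import pandas as pd

employees = pd.DataFrame({
    "gender":        ["m", "m", "f", "f"],
    "performance":   [4.5, 3.0, 4.5, 4.6],   # comparable or better
    "months_tenure": [36, 30, 11, 9],        # left early amid workplace bias
})

# A "neutral" outcome label: stayed at least two years.
employees["successful_hire"] = employees["months_tenure"] >= 24

print(employees)
# The high-performing women end up labeled unsuccessful, and a model
# trained on this label will learn to screen out similar candidates.
```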

This background on how bias originates is especially important when it comes to mental health, an area that is already stigmatized. And people do not experience life in silos. What are the cultural, organizational, and familial factors that affect one’s mental health, and how do they influence treatment?

"Mental health is a far trickier domain for AI because of our own incomplete understanding of the human brain - the most complex known structure in the universe," says Majer, who studied psychology as well as software development. “Our incomplete understanding makes mental health a much more subjective discipline than many others. People are complex, we often fail to understand our own motives, and even family and experts may struggle to help. So that means that labeling and defining clear-cut examples for AI to learn from becomes even more difficult, and fraught with bias. The only good news is that because of our own build-in biases, AI's unusual belief system that does not share all of our assumptions might be able to learn and identify new and useful insights which humans do not. Perhaps such insights will prove helpful in improving our understanding of mental health too.”

Perhaps AI can get beyond individual biases if it is developed with the rigorous standards already applied to mental health research and protocols. Harry Glaser, CEO and Co-Founder of Periscope Data, places responsibility on people to “understand the potential for harm when analyzing the data.” Knowing how to manipulate data is not enough of a skill set. His company works with Crisis Text Line, a service that applies AI in the mental health space. Based on machine learning and natural language processing, the platform analyzes “conversations based on common keywords to help forecast crisis trends and train counselors to have impactful conversations with texters.” When help is offered without the benefit of social and visual cues, any indicator that serves as an early warning sign can be a matter of life and death in a crisis. And although an algorithm will never perfectly identify someone as having a mental health problem, neither will people.
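
As a rough illustration only, and not a description of Crisis Text Line's actual model, keyword-based triage might look something like the sketch below: weight high-risk terms and surface the most urgent conversations for counselors first. A production system would pair clinically curated language models with human judgment rather than a fixed keyword list.

```python
# Minimal illustration (not Crisis Text Line's actual system) of
# keyword-based triage: rank incoming texts by the presence of
# high-risk terms so counselors can respond to the most urgent first.
import re

# Hypothetical keyword weights; a real system would be clinically
# curated and combined with a trained language model.
HIGH_RISK_TERMS = {"suicide": 5, "kill myself": 5, "pills": 4, "hopeless": 2, "alone": 1}

def risk_score(message: str) -> int:
    """Sum the weights of high-risk terms found in a message."""
    text = message.lower()
    return sum(weight for term, weight in HIGH_RISK_TERMS.items()
               if re.search(r"\b" + re.escape(term) + r"\b", text))

incoming = [
    "I feel so alone and hopeless tonight",
    "Can you help me find a therapist near me?",
    "I have pills and I'm thinking about ending it",
]

# Surface the highest-scoring conversations first.
for msg in sorted(incoming, key=risk_score, reverse=True):
    print(risk_score(msg), msg)
```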

With Amazon, or any other service provider, Whittaker points out the need to increase diversity among the people responsible for building these technologies. “By creating a diverse pool of talent, we minimize the impact of bias we cannot see. The risk to our mental wellbeing, if we do not do this, is that the challenges we face for inclusion are amplified exponentially.” This implication is valid for mental health professionals as well as for those who develop mental health applications. For developers, technical skills are definitely not enough. Preeti Adhikary, VP of Marketing for Fusemachines, stresses the value of including “mental health leaders in the design of such tools so that industry-specific, non-technical nuances are covered too.” Perhaps developers with a psychology degree will be the new standard?

About the Author

Tina Arnoldi

Tina Arnoldi, MA is a marketing consultant and freelance writer in Charleston SC. Learn more about her and connect at TinaArnoldi.com

