
Ready for Amazon to Read Human Emotions?

July 19, 2019 08:04 by Tina Arnoldi

Photo by Bianca Castillo on Unsplash

As reported by Bloomberg, Amazon is developing a wearable device, designed to work with a smartphone app, that reads human emotions. Microphones paired with software discern the wearer’s emotional state from the sound of his or her voice. According to internal documents, the goal is to improve the user’s interactions with others.

It is not the first time Amazon has been in the news over concerns about its use of AI (artificial intelligence). The question is whether AI can provide actionable insight into the expression of emotion, and whether it is appropriate to rely on a device for guidance on how to act in a social situation.

Javid Muhammedali, VP of Artificial Intelligence at Bullhorn, believes it is possible to discern emotion and tone in a conversation and take action based on that evaluation. He notes, “the key to responsibly using this type of information in human resources could be to coach people on how to respond and provide helpful suggestions. This could help interviewers discern nervousness in a job candidate and perhaps offer tips for how people with disabilities can respond to a recruiter’s question.” While AI can help guide these interactions, humans should still be the ones “making the final decisions from multiple options and indications, not a machine judging a person’s emotion.”

Josh Wardini, Co-Founder of Serpwatch, anticipates AI will eventually be able to recognize major human emotions with a high level of accuracy. “Human voice changes significantly based on a person’s emotional state which gives the device a chance to distinguish between joy, anger, sadness, happiness,” said Wardini.
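To make Wardini’s claim concrete, here is a minimal sketch in Python of the general idea, not Amazon’s actual system. The acoustic features (average pitch, energy, speaking rate) and the per-emotion feature centers are invented for illustration; a real system would use far richer features and real recordings.

```python
# Toy sketch: classifying emotion from hand-built acoustic features.
# All feature values and emotion "centers" below are invented for
# illustration; they are not drawn from any real speech corpus.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

EMOTIONS = ["joy", "anger", "sadness", "happiness"]
rng = np.random.default_rng(0)

def synthetic_features(emotion, n):
    """Fake per-utterance features: [mean pitch (Hz), energy, speech rate]."""
    centers = {
        "joy":       [220.0, 0.8, 5.0],
        "anger":     [180.0, 0.9, 6.0],
        "sadness":   [150.0, 0.3, 3.0],
        "happiness": [230.0, 0.7, 5.5],
    }
    # Sample noisy utterances around each invented emotion center.
    return rng.normal(centers[emotion], [15.0, 0.1, 0.5], size=(n, 3))

X = np.vstack([synthetic_features(e, 200) for e in EMOTIONS])
y = np.repeat(EMOTIONS, 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"toy accuracy: {clf.score(X_test, y_test):.2f}")
```

Note that the centers chosen here for “joy” and “happiness” overlap heavily, so even this toy model will tend to confuse them, a small preview of the misclassification risk raised later in the article.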

Rana Gujral, CEO of Behavioral Signals, is optimistic about the potential of emotion recognition but notes that we need to make sure the technology we create is emotionally intelligent. “As robotics and AI evolve, we can safely bet on two things,” said Gujral. “Machines will become more and more intelligent and we will rely on these machines increasingly so. With these two truths, we need to make sure that we’re preparing these intelligent machines to be emotionally intelligent and morally sound. We do not want to create a machine that is the definition of a psychopath, in a human context. Emotionally intelligent entities make morally and ethically sound decisions that will make our lives safer and easier.” Done correctly, this could help professionals, law enforcement, medical practitioners and educators in their daily interactions.

Dr. Stylianos Kampakis, a data scientist and CEO of The Tesseract Academy, believes affective computing has the potential to change the world. “We’ve seen many companies in this area, and the ubiquity of smart devices (like smart watches) means that the consumers are already open to the idea of tracking variables like heart rate,” notes Kampakis.

If people have already demonstrated openness to tracking physical data, why not emotional data? Could this have negative implications? Kampakis is not concerned. “Those products will be used by users in order to increase mindfulness, a bit like a digital psychotherapist. I think any product that helps us do that can have a positive effect on our lives.”

Keiland Cooper, a researcher at the University of California, Irvine, and co-founding director of ContinualAI, says the issue is how the technology is used more than the technology itself. “The algorithms are only as good as the data given to them, and while a huge company such as Amazon will have more data of this kind than any other, there is still potential for incorrect classifications to be made. Anonymity is also a concern, especially with something as personal as someone’s emotions.”

Cooper believes it is important that the limitations of any device, what it can and cannot do, are laid out for the consumer. He agrees with Kampakis that a Fitbit-style approach “to give the consumer insight on their habits but keeping suggestions to a minimum, may be the best approach in these early days. Allowing users to know precisely how their data is being used and what privacy options they have is also important.”
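One way to read that advice in engineering terms, as a hedged sketch rather than anything Amazon or Cooper has specified: only surface an emotion prediction when the model’s confidence clears a threshold, and otherwise say nothing. The probabilities and the threshold value below are placeholders.

```python
# Sketch of "keeping suggestions to a minimum": withhold the prediction
# when the model is not confident enough. The probability dictionaries
# and the 0.75 threshold are placeholders, not real model output.
def suggest(probabilities, threshold=0.75):
    """Return the top emotion only when its probability clears the
    threshold; otherwise return None rather than risk a wrong suggestion."""
    label, p = max(probabilities.items(), key=lambda kv: kv[1])
    return label if p >= threshold else None

print(suggest({"joy": 0.82, "anger": 0.10, "sadness": 0.08}))  # joy
print(suggest({"joy": 0.40, "anger": 0.35, "sadness": 0.25}))  # None
```

The design choice is the point: staying silent on low-confidence readings trades coverage for trustworthiness, which fits the early-days caution both Cooper and Kampakis describe.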

Wardini expresses some caution about how data is used beyond self-monitoring. He said, “I don’t see how AI could properly advise people to interact in a certain way and take into account all of the specific circumstances. Social situations vary greatly, and there is rarely a formula which always leads to a successful outcome. AI would need much more than just emotional recognition in order to be able to give the best advice possible.”

Interacting with a device through voice is not new, as we see with the proliferation of home devices. People use them on a daily basis, and emotional state is one of many signals that may shape responses aligned with what the user needs. The question is whether this information is best used informationally, for self-monitoring, or can provide behavioral guidance when interacting with others.

About the Author

Tina Arnoldi

Tina Arnoldi, MA, is a marketing consultant and freelance writer in Charleston, SC. Learn more about her and connect at TinaArnoldi.com.
