Meena, featured on Google’s AI blog, is a chatbot built on an end-to-end neural conversational model. The goal is for it to conduct conversations that are more sensible and specific than those of existing chatbots. Trained with 2.6 billion parameters, Meena is designed to offer more humanlike conversations.
Connecting with chatbots for personal help isn’t a new concept; some have already explored the possibilities of chatbots as therapists. But as chatbot conversations become more human-like, the question is whether people are comfortable with that.
While John Stevenson, CEO of Top VPN Canada, looks forward to new developments that help push his business further, he has concerns about privacy. “Chatbots have become a feature incorporated in many online businesses - conveniently programmed to address very basic questions and sparing the extra expense for hiring additional customer service representatives,” said Stevenson.
The benefits are clear. Automating routine customer service requests saves companies significant time: if a chatbot can walk a customer through a solution, it reduces the load on human representatives. But that automation can come at the cost of privacy.
Stevenson says, “Since my business focuses on internet privacy, I am someone who is very particular with online security. As much as Meena promises its flexibility to hold a conversation, chatbots like it also make very good targets for hackers. With this technology, the chances for scamming, hacking and phishing information increase as more users interact with the chatbot. With Meena’s almost-humanlike interactions, it will make it hard to identify red flags. One example of my concern for this type of technology in the real world is phishing phone calls. With this sort of conversational technology Meena is able to produce, hackers and scammers can use this to scam people over the phone into sending them money by pretending they are a government official. This can cause a huge spike in frauds and a big problem for law enforcement agencies.”
In addition to privacy, there is the clear complexity of programming chatbots to hold these conversations. Ketan Pande, founder of Goodvitae, worked on chatbots for his platform and finds them helpful, but acknowledges the challenge of feeding all possible questions and replies into the program. He said, “The chatbot satisfies repetitive queries with the same informative response which is difficult for humans. How we reply to an inquiry as people depends on factors like mood, time, etc.”
Creators of Meena might argue that it can go beyond these standard responses because of how it was developed. Conversations pulled from social media were organized into message trees, where the first message was the parent and the replies were child nodes. This structure let the model learn from real exchanges and give the impression that it ‘understood’ the topic of discussion.
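To make the idea concrete, here is a minimal sketch of that parent/child organization. This is not Google's actual data pipeline; the `Message` class and `training_pairs` function are hypothetical illustrations of how a threaded conversation can become a tree, with each root-to-reply path yielding a (context, response) training example.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    """One message in a thread; replies are stored as child nodes."""
    text: str
    children: list["Message"] = field(default_factory=list)

def training_pairs(root: Message):
    """Walk the tree. Each path from the root to a message forms a
    conversational context, and each reply to that message becomes
    the target response for that context."""
    pairs = []
    stack = [(root, [root.text])]
    while stack:
        node, context = stack.pop()
        for child in node.children:
            pairs.append((list(context), child.text))
            stack.append((child, context + [child.text]))
    return pairs

# Example thread: one parent message with two reply branches.
root = Message("Anyone tried the new cafe downtown?",
               children=[
                   Message("Yes, the coffee is great.",
                           children=[Message("Agreed, I go every week.")]),
                   Message("Not yet, is it good?"),
               ])

for context, response in training_pairs(root):
    print(" | ".join(context), "->", response)
```

The tree shape matters because a reply only makes sense relative to the messages above it; flattening the thread would mix unrelated branches into one conversation.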
Prairie Conlon, LMHP and Clinical Director of CertaPet, thinks using an app or a “chatbot” for your mental health can give you a temporary fix when you’re feeling down, but it is no substitute for a therapist or a conversation in real life. Even though Meena is trained on real human exchanges, Conlon notes that “AI is programmed to use standard responses, but we know that when it comes to human beings and personal problems, it’s no one size fits all. Using an app is better than nothing when in a pinch, but should really be used as a supplement to actual treatment.”
While there is some value in chatbots as a way to lift mood, it is more meaningful with a real human on the other side who can respond to nuances that AI cannot. Eventually, most chatbots show their true selves. At the start of a conversation, they are more likely to pass as human, but later on, responses may stop making sense due to the rigidity of their response parameters. And that helps no one feel better.