Which option is better: call centers staffed by humans or chatbots? On the one hand, large enterprises complain that it costs too much money to answer the hundreds of thousands of calls they receive each month. On the other hand, customers are frustrated when chatbots don’t know how to solve their problems.
The answer isn’t as simple as choosing between humans and chatbots. Behavioral science shows us that the optimal solution is a carefully selected combination of both, with a few clever human-centric design hacks along the way.
Creating a Great Chatbot Customer Experience Is Difficult
Customer experience is the new marketing battleground. One survey on the role of marketing found that 89 percent of companies expect to compete mostly on the basis of customer experience, up significantly from 36 percent just four years ago. Less than half of those surveyed believed their customer experience capabilities were superior to those of their peers.
Perhaps the biggest challenge to moving customer experience to chatbots is that 86 percent of consumers prefer humans. And businesses might be underestimating the challenge. Research by NICE inContact shows that businesses tend to underestimate customers’ satisfaction with human-assisted methods of customer service and overestimate satisfaction when it comes to self-service methods like chatbots.
Even big tech companies, with massive resources at their disposal, can fail the chatbot challenge. At its Build conference in late March 2016, Microsoft CEO Satya Nadella described chatbots as “conversations as a platform” and the “third runtime,” suggesting that chatbots would become as fundamental as the operating system or the web browser. Just weeks later, at its April 2016 developer conference, Facebook unveiled a bot API for its Messenger app. But after only ten months, when its AI bots hit a 70 percent failure rate, Facebook scaled back its ambitions and focused its efforts elsewhere. Despite the hype, Facebook’s chatbots could only fulfill about 30 percent of requests without the intervention and assistance of human agents.
Beware the hype. The current generation of AI has narrow intelligence. It can be trained to perform single tasks within well-defined boundaries, but it doesn’t have common sense, general knowledge, or the out-of-the-box thinking ability to solve problems.
Chatbot Success in Sales
Don’t despair because it’s challenging to get chatbots right. When used strategically, chatbots can perform as well as, or even better than, humans.
In the recently published paper “Machines vs. Humans: The Impact of Artificial Intelligence Chatbot Disclosure on Customer Purchases,” researchers present the results of a field experiment in which more than 6,200 customers received highly structured outbound sales calls from either chatbots or human workers. When customers were unaware that they were interacting with a chatbot, the chatbots were just as effective as experienced human workers and four times more effective than inexperienced salespeople.
It wasn’t all good news, however. When customers were informed before the conversation that the chatbot wasn’t human, the sales effectiveness of the chatbot dropped by almost 80 percent. The researchers also observed that those customers became curt and purchased less, because they perceived the bot as less knowledgeable and less empathetic.
Having identified the blocker to chatbot sales success, the researchers searched for a remedy. They found that delaying the disclosure that the caller was a chatbot mitigated the negative effects on sales effectiveness, especially when customers had had positive experiences with AI before the encounter.
The way you design your customers’ chatbot experiences will significantly affect the outcome.
When Customers Want a Human
Sometimes your customers really need a human. Sometimes they just prefer a human. And sometimes they prefer automation. Studies show that customers prefer automated solutions when a product is a commodity, such as electricity or books, or when the process is transactional or creates friction in the customer experience, such as filling out paperwork. On the other hand, customers seek human service when they are emotional, especially when they are confused or anxious.
An anxious customer is less engaged, less satisfied, and has less trust in the decisions being made by a business. In “Mitigating the Negative Effects of Customer Anxiety through Access to Human Contact,” the researchers report the results of experiments about how customers using self-service technologies responded to scenarios that caused anxiety, with and without access to human advice.
Consistent with well-established results in social psychology, the researchers found that when people had the option to connect with another person, either an expert or a peer, the negative effects of anxiety were offset. However, researchers were surprised to observe that very few participants took advantage of the opportunity to chat with someone. The most anxious customers were the most likely to choose to chat with a human, but merely having the option seemed to be all most people needed to feel supported.
Choosing only chatbots or only humans isn’t a best practice. Removing all access to human customer service staff is not the solution. A clever combination of both humans and chatbots, along with offering reassurance that a human is available when required, will yield the best results.
You can further enhance the human/chatbot hybrid solution by training your AI to recognize which situations it can solve itself and which should be handled by a human. Train your AI to recognize the following (a rough sketch of this triage logic appears after the list):
- Anomalous scenarios that are outside the bounds of its training.
- When a customer is feeling abnormally distressed.
- When the chat is unlikely to be resolved without human intervention.
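As a rough illustration, here is a minimal sketch of that triage logic in Python. It assumes your chatbot platform already exposes an intent-confidence score and a distress (sentiment) score for each message; the thresholds, the `TurnSignals` fields, and the `should_escalate` function are illustrative, not taken from any particular product.

```python
from dataclasses import dataclass

# Illustrative thresholds -- tune these against your own conversation logs.
INTENT_CONFIDENCE_FLOOR = 0.6   # below this, treat the request as out of scope
DISTRESS_CEILING = 0.7          # above this, the customer sounds unusually upset
MAX_BOT_TURNS = 4               # give up after this many unresolved bot replies


@dataclass
class TurnSignals:
    """Signals the NLU layer is assumed to produce for each customer message."""
    intent_confidence: float  # 0..1, how sure the model is about the request
    distress_score: float     # 0..1, e.g. from a sentiment/emotion classifier
    unresolved_turns: int     # bot replies so far without resolving the issue


def should_escalate(signals: TurnSignals) -> tuple[bool, str]:
    """Return (escalate?, reason) for routing the chat to a human agent."""
    if signals.intent_confidence < INTENT_CONFIDENCE_FLOOR:
        return True, "request looks outside the bot's training"
    if signals.distress_score > DISTRESS_CEILING:
        return True, "customer appears abnormally distressed"
    if signals.unresolved_turns >= MAX_BOT_TURNS:
        return True, "conversation unlikely to be resolved without a human"
    return False, "bot can continue handling this chat"


if __name__ == "__main__":
    escalate, reason = should_escalate(
        TurnSignals(intent_confidence=0.45, distress_score=0.2, unresolved_turns=1)
    )
    print(escalate, reason)  # True: request looks outside the bot's training
```

Even when the triage check says the bot can continue, the research above suggests it is still worth reminding the customer that a human is available on request.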
Human, but Not Too Human
There is a growing body of research showing that chatbots are more effective when they are trained to show emotional responses to consumers. For example, in “Achieving affective human–virtual agent communication by enabling virtual agents to imitate positive expressions,” the researchers test the hypothesis that a virtual agent imitating a positive emotional expression induces a positive emotion in the consumer, regardless of whether the consumer believes they are chatting with a human or an AI. The study used functional magnetic resonance imaging to track participants’ brain activity, along with participants’ self-reports of when they felt an emotion. The chatbot was trained to recognize when a participant smiled, and then to imitate that smile.
The participants reported a positive emotion only when their smile was imitated by the agent’s positive expression. This emotional response occurred regardless of whether they believed they were chatting with a human or a machine. The researchers reported that the imitation of a smile “activated the participants’ medial prefrontal cortex and precuneus, which are involved in anthropomorphism and contingency, respectively.” In lay terms, when a machine responds to a human smile with a positive expression, the human begins to treat the machine as more human, with resulting changes in their behavior towards that machine.
The results of this research suggest that teaching an AI to recognize emotions and respond accordingly can overcome the negative effect of believing that the chatbot is a computer program.
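To make that idea concrete, here is a toy sketch of how a text chatbot might recognize a customer’s emotional tone and adapt its reply. The keyword lists stand in for a real emotion classifier, and `detect_emotion` and `emotionally_adapted_reply` are hypothetical names for illustration only; this is not the setup used in the study above.

```python
# Toy stand-in for a real emotion classifier; in practice you would plug in
# a trained sentiment/emotion model here.
POSITIVE_CUES = {"thanks", "great", "awesome", "love"}
NEGATIVE_CUES = {"angry", "frustrated", "useless", "worst"}


def detect_emotion(message: str) -> str:
    """Classify a message as positive, negative, or neutral (keyword-based toy)."""
    words = set(message.lower().split())
    if words & NEGATIVE_CUES:
        return "negative"
    if words & POSITIVE_CUES:
        return "positive"
    return "neutral"


def emotionally_adapted_reply(message: str, answer: str) -> str:
    """Wrap the factual answer in a tone that mirrors or soothes the customer."""
    emotion = detect_emotion(message)
    if emotion == "positive":
        # Mirror the positive expression, analogous to the smile-imitation study.
        return f"Glad to hear it! {answer}"
    if emotion == "negative":
        # Acknowledge the frustration before answering, and offer a human.
        return (f"I'm sorry this has been frustrating. {answer} "
                "A human agent can step in at any time.")
    return answer


print(emotionally_adapted_reply("This is useless", "Your refund was issued today."))
```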
Consumers want your chatbots to feel more human, but not to look human. A study by CapGemini found that consumers are ready to embrace AI-powered customer support and that AI interactions, if properly designed, can enhance the personal connection they feel to brands. But don’t make your AIs look too human. One focus group participant said, “They are machines and they were made to help, but I would find it scary if they looked like real humans.”
So beware the uncanny valley: the hypothesized relationship between how closely an object resembles a human being and the emotional response it evokes. Humanoid objects that imperfectly resemble actual humans tend to provoke eeriness and even revulsion rather than warmth; people find androids creepy when they are human-like in appearance but not entirely realistic. As one focus group participant explained, “Having a human-like AI-based robot would be too spooky, like those dolls that look like real babies.”
Humans and AI Best Practices
Research shows that the choice isn’t simply AI or humans. The best practice is to use a combination of the two, leveraging the strengths of each. Beware the hype: successful AI-driven customer experiences require careful planning and a deep understanding of human behavior. You can be one of the winners in customer experience if you follow these tips:
- Use AI to make your chatbots more intelligent and less narrowly rules-based. Use it to handle more edge cases so that customers don’t get frustrated.
- Don’t completely remove humans from the customer experience.
- Tell your customers that humans are available and will step in if necessary.
- Teach your AI to recognize when customers are emotional and train it to adapt its responses.
- Ensure that your AI is humble and can recognize when it doesn’t know enough and needs to triage a decision to a human.
About the author
VP, AI Strategy, DataRobot
Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.