Chatbots and AI assistants need further development to reach the point where they can fully understand the nuances of language and engage in real conversations with people. But good design can help overcome the limitations of existing conversational AI bots, according to developers at companies on the leading edge of the trend.
"I think it's a myth that we'll have to wait until the best AI materializes to improve assistants," said Cathy Pearl, head of conversation design outreach at Google. "Actually, we can do a lot now with good design principles and a little technology that we have now."
Conversational AI was one of the big discussion topics at the 2019 Re•Work Deep Learning Summit in San Francisco. Pearl and speakers from Uber and software vendor Autodesk elaborated on some of their experiences in improving conversational AI bots and interfaces so the technologies can better simulate human conversation and help users resolve problems faster.
Plan for language variability
One of the biggest conversational AI challenges is building a robust understanding of language variability, said Gökhan Tur, director of conversational AI at Uber's AI Labs unit. "There are so many ways of saying the same thing," Tur said.
In addition, users can take conversations in unexpected directions. For example, he noted that, if an online chatbot or AI assistant for a pizza chain asked a customer what toppings he wanted, it would expect him to respond with choices from the list of available toppings. But the customer may instead ask, "What kind of olives do you have?"
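The pizza example above boils down to routing each user turn either to the expected slot (a topping choice) or to an off-script question that should be answered before re-prompting. A deliberately minimal sketch of that routing, with made-up keyword rules standing in for what would be a trained intent classifier in a real system:

```python
# Hypothetical sketch: mapping varied user phrasings to a small set of
# intents, with a fallback when the user goes off-script (e.g. asking a
# question instead of picking a topping). Keyword rules are illustrative
# only; a production bot would use a trained classifier here.

TOPPINGS = {"mushrooms", "olives", "pepperoni"}

def classify_turn(utterance: str) -> str:
    text = utterance.lower()
    # Check for questions first, so "What kind of olives do you have?"
    # isn't misread as a topping choice just because it mentions olives.
    if "?" in text or text.startswith(("what", "which", "do you")):
        return "user_question"   # answer it, then re-prompt for toppings
    if any(topping in text for topping in TOPPINGS):
        return "topping_choice"
    return "fallback"            # didn't understand -> re-prompt

print(classify_turn("Pepperoni and olives, please"))      # topping_choice
print(classify_turn("What kind of olives do you have?"))  # user_question
```

The ordering of the checks is the design point: the off-script case has to be tested before the expected case, or the bot silently misinterprets the question.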
The problem, Tur said, is that existing conversational AI bots don't really have natural language understanding capabilities -- instead, they mimic understanding of language. To improve on that, developers need "to go beyond superficial learning to use reinforcement learning," he added.
One strategy Tur pointed to involves adapting the technique used to build game-playing AI systems, in which agents are trained against game simulators.
Such simulators are relatively trivial since games have defined rules of play; simulating human speech is more complex. But Tur said doing so would make it possible to develop conversational AI programs that jointly reinforce each other, much like generative adversarial networks do. "User simulators are still a research area, but I believe it is the future of conversational AI," he said.
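To make the simulator idea concrete, here is a deliberately toy sketch of the setup Tur describes: a scripted "user" with a hidden goal talks to the agent, and each dialogue yields a reward signal the agent could learn from. Every name and rule here is invented, and the actual reinforcement-learning update is omitted.

```python
# Toy sketch of a user simulator: the simulated user has a goal (the
# topping they want), the agent issues a prompt, and the dialogue is
# scored with a reward. A real system would use this reward to update
# the agent's policy; that learning step is not shown here.

class UserSimulator:
    def __init__(self, goal: str):
        self.goal = goal  # e.g. the topping the simulated user wants

    def respond(self, prompt: str) -> str:
        # The simulated user only reveals its goal if asked the right way.
        if prompt == "What toppings would you like?":
            return self.goal
        return "I just want a pizza."

def run_dialogue(agent_prompt: str, goal: str) -> int:
    user = UserSimulator(goal)
    reply = user.respond(agent_prompt)
    # Reward 1 if the agent's prompt elicited the user's goal, else 0.
    return 1 if reply == goal else 0

print(run_dialogue("What toppings would you like?", "olives"))  # 1
print(run_dialogue("Hello!", "olives"))                         # 0
```

Even this trivial version shows why the approach is appealing: once user behavior can be simulated, the agent can run thousands of practice dialogues without a human in the loop.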
Carry on a normal conversation
Ultimately, it's important to design conversational AI bots that enable people to talk like they normally do. For example, Pearl said she recently played a conversational AI game that asked her if she wanted to build a cathedral. When she responded, "Sure," the chatbot replied, "I don't understand."
In the early days of interactive voice response systems, developers worked with prebuilt grammar and dialogue tools, Pearl recounted. She recommended that conversational AI developers adopt similar tools to reduce the work they have to do to account for the different ways people might respond to simple questions. "You shouldn't expect writers or developers to reinvent the wheel when they have a yes-or-no question," she said.
According to Pearl, it's also a good idea to look at what the prospective users of a conversational AI system really say, as opposed to basing development plans on your own preconceptions. "You are not going to predict the variety of ways that people will talk to your bot," she said. "You need to invest in that."
Things can get particularly complicated in fields like healthcare. Pearl once worked on a medical app that was designed to help individuals look up basic information about common symptoms. "You would not believe how many ways there are to say abdominal pain," she quipped.
Focus on what really matters
Another design strategy Pearl suggested is to restrict the conversations that AI tools can have to a particular domain. That makes it easier to create the correct context for chatbots and AI assistants, she said. As an example of how the context can be broken if a system isn't focused, Pearl said a bot reminded her about a dental appointment on her calendar one day. When she asked how long it would take to get there, the bot responded, "It depends on the orbit between Earth and Mars."
Humans often use negative modifiers in the course of normal conversation -- something else that should be taken into account when designing conversational AI bots. For example, Pearl said that, when one chatbot asked her how she was feeling, she responded, "Not great." The bot then asked, "Is anything in particular making you feel so positive?" That might have been a disastrous response for a bot handling a customer support problem, she noted.
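The "not great" mix-up above is what happens when sentiment is inferred from keywords alone: the utterance contains "great," so a bag-of-words check calls it positive. A small sketch of the failure and one simple fix, with made-up word lists:

```python
# Illustrative sketch of why naive keyword sentiment misreads negative
# modifiers: "not great" contains "great", so a bag-of-words check calls
# it positive unless negation is handled explicitly. Word lists invented.

POSITIVE = {"great", "good", "fine", "wonderful"}
NEGATORS = {"not", "never", "hardly"}

def naive_sentiment(utterance: str) -> str:
    words = utterance.lower().split()
    return "positive" if any(w in POSITIVE for w in words) else "neutral"

def negation_aware_sentiment(utterance: str) -> str:
    words = utterance.lower().split()
    for i, word in enumerate(words):
        if word in POSITIVE:
            # Flip polarity if a negator appears just before the word.
            if i > 0 and words[i - 1] in NEGATORS:
                return "negative"
            return "positive"
    return "neutral"

print(naive_sentiment("not great"))           # positive (wrong)
print(negation_aware_sentiment("not great"))  # negative
```

Real systems handle negation scope with far more sophistication, but even this one-token lookback would have spared the bot from asking what was making the user feel "so positive."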
Adding humor to chatbots can help build rapport with users, but it can also lead to problems. Nikhil Mane, a conversational AI engineer at Autodesk, was reviewing the logs for its customer service bot and noticed that the AI engine threw in a joke when one user was trying to activate the company's design software. "If you put this in front of a frustrated user, it just exacerbates the problem," Mane said.
Since then, Mane's team has done a lot of work to improve the bot, which is called AVA -- short for Autodesk Virtual Agent. A key strategy was to make back-end processes more accessible to users, Mane said. In the beginning, a natural language processing service did most of the heavy lifting for AVA. But, after mining the bot's logs, the developers identified the most popular queries and then created a welcome message that lists them so users can click on the questions instead of having to type them into AVA's interface.
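The log-mining step Mane describes can be sketched as a simple frequency count over past queries, with the top few surfaced as clickable options. The log format and example queries below are made up:

```python
# Rough sketch of the log-mining step described above: count the most
# frequent user queries and surface the top few as clickable options in
# a welcome message. Log contents here are invented for illustration.

from collections import Counter

chat_logs = [
    "how do i activate my license",
    "reset my password",
    "how do i activate my license",
    "download older versions",
    "reset my password",
    "how do i activate my license",
]

top_queries = [q for q, _ in Counter(chat_logs).most_common(3)]

welcome = "Hi, here are some common questions:\n" + "\n".join(
    f"  - {q}" for q in top_queries
)
print(welcome)
```

In practice, near-duplicate queries would first be normalized or clustered so that "activate my license" and "license activation" count as one question, but the core idea is the same: let real usage data, not guesswork, decide what the welcome message offers.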
The data Autodesk collects on the use of AVA isn't just quantitative, Mane said. To help boost the bot's ability to understand what people are looking for, the conversational AI team also asks users for qualitative info, particularly when they get frustrated trying to resolve problems. In addition, he said, the team is working to use machine learning to improve the quality of interactions.