
AI chatbots don't need to be sentient, they just need to work

Chatbot users, vendors and ex-Google engineer Blake Lemoine discuss what is needed to make current chatbot tech more effective for customer service organizations.

One may forgive customer service technology users for laughing out loud at the suggestion of AI gaining sentience. They have a difficult time simply bringing their chatbots online to solve basic customer problems, let alone assigning them metaphysical characteristics.

This comes as rogue ex-Google engineer Blake Lemoine said he believes that Google's LaMDA, a large language model in development, is sentient.

While chatbot technology is evolving, the bots available in today's market are nowhere near sentient, according to a panel of contact center technology users and other experts. Much more work needs to be done to make customer service chatbots integrate with existing enterprise IT to automate even the simplest tasks.

"If you have some experience chatting with these things and see the results of these new models, it does change something in terms of your perception," said Dan Miller, Opus Research founder. "You know it's not sentient, but it's perfectly capable of sounding very human -- and other times just being bats--t crazy."

Pegasystems founder and CEO Alan Trefler said he doesn't believe it's possible that Google LaMDA is sentient, and thinking about it in those terms "confuses the conversation" about AI. As constituted now, he added, AI tech can be used for good, such as making people more productive in their jobs. It can also be used for more controversial applications such as facial recognition for law enforcement.

People have been identifying chatbots as sentient beings for more than 50 years, dating back to Eliza, the original chatbot, which was released in 1966 and set up to talk like a therapist. Trefler remembers fellow Dartmouth students developing relationships with Eliza in the mid-1970s. Lemoine fell into the same trap, Trefler believes.

"This is, frankly, just a classic case of anthropomorphism," Trefler said. "It is so natural for humans to attribute human values, human motivations, human cognitive skills to dogs, cats and machinery … it's a natural vulnerability that we have."

IT gets in the way

Sentient or not, customer service chatbots need a lot of care and feeding to perform their jobs. Present-day bot tech suffers -- and customers get frustrated -- because chatbots can't easily integrate with legacy systems to access the data they need and automate tasks.

There is real AI tech that isn't sentient but can be deployed at enterprise scale, said CX consultant Phillip Jackson, co-founder of Future Commerce, a retail media research startup and podcast. The problem for the creators of these technologies is that they must deal with "the slowest moving organisms on earth -- the enterprise."

"You can't integrate anything because you have 65 middle managers not contributing anything, and they're all obstructing the actual boots on the ground," Jackson said. "The engineers don't have actual access to the real data, or they themselves are obstructions because they are encumbered with legacy tech debt that are the vestiges of digital transformation efforts 15 to 20 years ago that stalled."

Airline Virgin Atlantic deployed chatbots for customer self-service in 2018 through Microsoft Virtual Agent and Genesys, on channels including WhatsApp, SMS and Apple Business Chat. During the pandemic year of 2020, customer contacts spiked 500%, and the company supersized its chatbot self-service operations because it had no other choice, said Ceri Davies, who manages a Virgin customer service center in Swansea, Wales.

"We are hugely invested because we were in a position where we were able to contain a lot of our customer contact within our chatbot," Davies said. "We couldn't just increase the amount of people that we had when the company was under such tough measures."

But the bot can't think for itself. To do that, Virgin Atlantic now dedicates a full-time analyst role to monitor chatbot performance and figure out how to solve even more customer problems, when possible. The analyst looks at the situations where customers get stuck in loops, and tests how new suggestion buttons or free-text fields may help get them unstuck.

Recent projects include giving the bot the ability to resolve loyalty-program questions about accrued miles. Others involve capturing information so that when the bot does shunt a customer to a human agent, that context is passed along to save both the agent and the customer time.

There are more integrations to come, Davies said. On her wish list are feedback-collection mechanisms to measure customer sentiment, analytics on chatbot conversation length and quality, and the ability to measure common contact-center metrics such as average case handling time -- which the company already tracks for its human agents.
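In practice, the kind of transcript analysis Davies describes can be automated. The Python sketch below shows one way to flag conversations where customers get stuck in loops and to compute handle time and containment figures; the record structure, field names and thresholds are illustrative assumptions, not Virgin Atlantic's or Genesys' actual schema.

```python
# Hypothetical transcript records -- not a real vendor schema.
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class Turn:
    timestamp: datetime
    speaker: str          # "customer" or "bot"
    intent: str | None    # intent label the bot assigned, if any

@dataclass
class Conversation:
    turns: list[Turn]
    resolved: bool        # did the bot contain the contact without an agent?

def is_stuck_in_loop(convo: Conversation, repeats: int = 3) -> bool:
    """Flag a conversation where the bot matched the same intent N times in a row."""
    streak, last = 0, None
    for turn in convo.turns:
        if turn.speaker == "bot" and turn.intent is not None:
            streak = streak + 1 if turn.intent == last else 1
            last = turn.intent
            if streak >= repeats:
                return True
    return False

def average_handle_time(convos: list[Conversation]) -> float:
    """Mean duration in seconds from first turn to last turn."""
    durations = [
        (c.turns[-1].timestamp - c.turns[0].timestamp).total_seconds()
        for c in convos if c.turns
    ]
    return mean(durations) if durations else 0.0

def containment_rate(convos: list[Conversation]) -> float:
    """Share of contacts the bot resolved without handing off to a human agent."""
    return sum(c.resolved for c in convos) / len(convos) if convos else 0.0
```

A containment-rate figure computed this way is the same "percentage of problems solved" yardstick Virgin uses to judge whether the bot is earning its keep.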

As for the chatbot being sentient? "I think that we are probably so far from having a bot that is alive that it doesn't resonate with us," Davies said. "I understand what they're saying. But for us, we are so far removed from that."

Human-sounding bots: Better CX?

Users of chatbots are divided as to whether their virtual agents should converse with personable, human-like traits and whether they should be given names, such as Eliza. Virgin Atlantic's is named Amelia. But Amelia's success lies in its effectiveness, not its human-like traits, and effectiveness is measured by the percentage of customer problems it solves. For Virgin, 60% is good, and 40% is not.

"Something that all companies need to be wary of is putting something out there that isn't very good, and how that impacts your customers," Davies said. "If I'm a customer and I've had a poor experience, I'm probably not going to try and use it again."

Rabobank, the largest Dutch bank, has 10 million retail customers and one million business customers. It does not assign names to its two Microsoft-Nuance chatbots running on Azure in conjunction with its Genesys cloud customer service and telephony system for its contact center and bank branches. 

Rabobank mapped numerous customer journeys to control what slivers of customer service the bots could handle, said Thom Kokhuis, head of conversational banking and CRM. Customers had done business mostly in person prior to 2020, and they prefer mostly audio and video channels for their banking now, he said.


Bots can perform uncomplicated tasks such as canceling a lost or stolen credit card or requesting an increased credit card limit. In contrast, complex processes like applying for a Rabobank mortgage can now be done digitally -- but they are handled by humans, not bots.

"You should only position your chatbots where the chatbots can help, and are of value in the customer journey," Kokhuis said. "Complex questions, the chatbot cannot handle. We don't want that chatbots to handle high-emotion journeys, because chatbots cannot give emotions back -- it's only text."

Bots and web experiences need to strike a balance between programming and fake humanity to help technology users create good experiences, said Brett Weigl, senior vice president and general manager of digital and AI at Genesys.

On one hand, customers can't be required to enter the equivalent of secret codes to solve simple problems with a bot. 

"If I effectively have to go to a command line to get something done as an end user, well, that's not where we should be," Weigl said. "The other extreme is some artificial personality. That I just don't see as a useful ingredient for most scenarios."

Bots can't clean up bad IT

Lemoine blames humans for the complex tangle of chatbot integration issues. Sentient or not, AI is good enough to easily solve most customer service problems it is presented with, he said.

The issue, he said, is the obstacles humans put between the AI and the solution: lack of access to data systems, an individual company's rules and policies, and laws and regulations all get in the way of software-assisted customer service.

AI, just like human agents in contact centers, must juggle many different applications to help a customer. For example, changing a password for a bank debit card requires identity verification, password selection and pushing the new password to various applications customers need access to. It quickly becomes difficult for both man and machine.
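To make that juggling concrete, here is a rough Python sketch of such a password-change flow. Every service name and check is a hypothetical placeholder, each standing in for a separate legacy application, policy and audit requirement a real bank would impose.

```python
class VerificationError(Exception):
    pass

def verify_identity(customer_id: str, one_time_code: str) -> bool:
    """Stand-in for an identity-verification service (e.g., a one-time code check)."""
    return one_time_code == "123456"  # placeholder logic only

def push_password(system: str, customer_id: str, new_password: str) -> None:
    """Stand-in for updating one downstream application (online banking, mobile app, IVR...)."""
    print(f"updated {system} for {customer_id}")

def change_password(customer_id: str, one_time_code: str, new_password: str) -> None:
    # Step 1: identity verification -- the step a chatbot cannot simply skip.
    if not verify_identity(customer_id, one_time_code):
        raise VerificationError("identity check failed; hand off to a human agent")

    # Step 2: password selection, subject to policy checks (length, reuse and so on).
    if len(new_password) < 12:
        raise ValueError("password does not meet policy")

    # Step 3: push the change to every application the customer needs access to.
    for system in ("online_banking", "mobile_app", "phone_ivr"):
        push_password(system, customer_id, new_password)
```

The conversational front end is the easy part, as Lemoine says; the compliance checks and the fan-out to downstream systems are where both people and bots get bogged down.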

"We could redesign the entire system with AI in mind, and then it would work fine," Lemoine said. "I guarantee you if you just wanted me to build you an AI with a new conversational interface that could change your password, I could do that tomorrow. However, if you wanted it to do it and be compliant with all regulations, and do it in such a way as to minimize litigation, now you're back in the same situation you were before."

Pegasystems' Trefler predicted that, in the next few years, rapidly advancing conversational models will make chatbots more effective. Moreover, the architecture of the tech stack will evolve. Enterprises that adopt a model where customer service AI is pushed out to all channels at once -- instead of dedicating bots to single, linear customer journeys -- will find more business value in AI and adapt to ever-changing customer communication channels.
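One way to picture that architecture is a single decision layer shared across every channel, with thin per-channel adapters on top. The Python sketch below is illustrative only; the class and channel names are assumptions, not Pegasystems' design.

```python
class ServiceBrain:
    """Single decision layer shared by all channels."""
    def answer(self, customer_id: str, utterance: str) -> str:
        # Intent detection, data lookups and business rules live here once,
        # rather than being re-implemented for each channel or journey.
        return f"(reply for {customer_id}: '{utterance}')"

class ChannelAdapter:
    """Per-channel wrapper that only handles formatting and delivery."""
    def __init__(self, channel: str, brain: ServiceBrain):
        self.channel = channel
        self.brain = brain

    def handle(self, customer_id: str, utterance: str) -> str:
        reply = self.brain.answer(customer_id, utterance)
        return f"[{self.channel}] {reply}"

brain = ServiceBrain()
adapters = [ChannelAdapter(c, brain) for c in ("whatsapp", "sms", "web_chat", "voice")]
for adapter in adapters:
    print(adapter.handle("cust-42", "Where is my refund?"))
```

Adding a new channel then means writing another thin adapter, not standing up another bot.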

Still, he doesn't expect that AI will be sentient.

"Will sentience in artificial life ever be achieved?" Trefler mused. "I'm not prepared to say, 'No.' Are we remotely close to it? Absolutely not."

Don Fluckinger covers enterprise content management, CRM, marketing automation, e-commerce, customer service and enabling technologies for TechTarget.
