
GenAI chatbots for customer care create new level of liability

Air Canada found liable for promises its chatbot made against the airline's policy; CX leaders discuss best practices for using generative AI in chatbots.

Companies can be held liable for the promises their chatbots make to customers, just as they can for promises made by human customer service agents. CX leaders must take this and other considerations into account when turning on generative AI features for customer service.

The liability chatbots bring became real last week, when Air Canada passenger Jake Moffatt successfully sued the airline before British Columbia's Civil Resolution Tribunal, a small-claims body. Air Canada's chatbot had promised a retroactive bereavement fare refund. Moffatt had spent more than $1,600 on a round-trip ticket between Vancouver and Toronto to visit family after their grandmother passed away in November 2022.

That, Air Canada said, went against its clearly posted policies. Still, the tribunal ordered the airline to refund more than $600 of the fare, plus interest and court costs.

"Mr. Moffatt says, and I accept, that they relied upon the chatbot to provide accurate information," wrote Christopher Rivers, the tribunal member who presided over the case. "I find that was reasonable in the circumstances. There is no reason why Mr. Moffatt should know that one section of Air Canada's webpage is accurate, and another is not."

While the incident is a public relations nightmare for Air Canada, it's a tiny judgment, said Matt Edic, chief experience officer at IntelePeer, a customer communications automation platform vendor whose offerings include generative AI bots. A human agent could have made the same mistake.

"Technology can't keep you from shooting yourself in the foot," Edic said. "The difference is, a robot makes a mistake once, and you fix it. You're not going to have that same mistake [again]."

One mistake that companies make is letting chatbots search the web for information instead of tightly training them on internal content. Another, Edic said, is trusting the bot to solve too many things at once, which can lead to bad or confusing outcomes.
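To make that first point concrete, the sketch below shows one way a bot could be restricted to vetted internal content. It is a minimal, hypothetical illustration (the article store, the answerFromKnowledgeBase helper and the sample policy text are assumptions for this example, not any vendor's actual system), but it captures the idea: answer only from approved articles and escalate everything else rather than improvising.

```typescript
// Hypothetical sketch: answer only from an approved internal knowledge base.
interface InternalArticle {
  id: string;
  title: string;
  body: string;
}

// Stand-in for a vetted document store; in production this would be the
// company's reviewed help-center or policy content.
const knowledgeBase: InternalArticle[] = [
  {
    id: "bereavement-fares",
    title: "Bereavement fares",
    body: "Bereavement fares must be requested before travel. Refunds are not applied retroactively.",
  },
];

// Return an answer only when a vetted article matches the question;
// otherwise escalate to a human agent instead of letting the bot improvise.
function answerFromKnowledgeBase(question: string): string {
  const terms = question.toLowerCase().split(/\W+/).filter((t) => t.length > 3);
  const match = knowledgeBase.find((article) =>
    terms.some(
      (term) =>
        article.title.toLowerCase().includes(term) ||
        article.body.toLowerCase().includes(term)
    )
  );
  if (!match) {
    return "I don't have a verified answer for that. Let me connect you with an agent.";
  }
  // Quote the approved policy text instead of generating a new promise.
  return `${match.title}: ${match.body}`;
}

console.log(answerFromKnowledgeBase("Can I get a bereavement refund after my trip?"));
```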

In an online panel, Zendesk customers discuss generative AI's impact on immersive CX.

Chatbots and the customer experience

Chatbots can cut customer service costs by handling simple tasks such as warranty claims or password resets. Generative AI chatbots can also summarize information for customers who prefer quick online interactions over phone calls.

"There are some people in the world who do not want to talk to someone. They just want to figure it out on their own," said Jane Griener, director of customer care operations at Purple, a mattress manufacturer. "[Our chatbot has] provided a great option for those consumers who have that mindset. … It helps them get targeted information that they need in a fast and easy format."

The downsides of chatbots are emerging as users get more experience with them. One example is customers exploiting bots that handle product returns: a customer sends back an item other than the one they bought and claims a refund to which they aren't entitled.

That, Griener said, has spurred Purple to put more emphasis on employee training and customer validation.

"Because of the price point of our items, we've had some unusual problems with fraud -- or loss, as we call it," Griener said. "It's constantly changing, the ingenious, crazy ways that people can come up with to commit fraud."

Chatbots also require internal planning that involves IT and CX leaders working together to protect customer data, said Sarah Vanden Broek, senior manager of CX and business operations at ClassPass, a consumer subscription service for booking fitness classes.

The first security consideration for those implementing new or GenAI-updated chatbots is to secure customer data in-house so it isn't exposed to employees who shouldn't see it. The second is to design workflows that protect data stored in the browser so it expires after a customer engages a chatbot in a public place, such as a library, preventing someone else from accessing protected data from the chat.

"Review any GDPR, CCPA, or privacy workflows that you already have and then see how a new vendor or a change in technology impacts them," she said.

Laying the groundwork for GenAI

Before adding generative AI to a CX workflow, companies should review customer feedback from surveys and agent conversations to figure out what customers actually want, then assess their own CX maturity to determine whether they can support generative AI chatbots. Before, during and after implementation, CX teams need a quality assurance process to continually refine and update chatbot interactions.

Finally, it's essential that the CX and IT leaders rolling out generative AI chatbots consult with front-line agents as well, said Maria Vargas, vice president of customer service at Rue Gilt Groupe, an online retailer that offers sales on luxury-brand and boutique items. Making agents part of the decision-making process helps them understand, appreciate and connect with the technology instead of fearing new software forced upon them.

"There doesn't have to be that disconnect," Vargas said. "To quote Jack Nicholson in A Few Good Men, if you believe they can't handle the truth, then you're keeping information from them and excluding them from the process."

Don Fluckinger covers digital experience management, end-user computing, CPUs and assorted other topics for TechTarget Editorial. Got a tip? Email him.

Next Steps

Salesforce TDX reveals how consumer and enterprise AI differ
