Generative AI chatbots can potentially replace or supplement several business tasks, including customer service.
Tools like ChatGPT, Google Bard, Jasper AI and ChatSonic use advanced machine learning technology to generate complex text. Common benefits of generative AI include ease of training and customization, reduced operational costs, and 24/7 service. Despite these benefits, however, tools like ChatGPT have risks like fabricated information and privacy concerns.
Customer service professionals must consider these risks before implementing generative AI in their contact centers.
What is ChatGPT, and how can it improve customer service?
ChatGPT is a natural language processing (NLP) tool built on generative AI. NLP lets the tool respond to user prompts or questions in a conversational, human-like way. Generative AI analyzes and "learns" from various types of data -- text, audio, imagery -- and generates human-like answers to inputs.
In the case of generative AI for customer service purposes, organizations commonly integrate the tool into text- or voice-based chatbots to accomplish the following:
- Answer customer questions about a product or service.
- Facilitate orders, exchanges and returns.
- Provide multilingual support.
- Direct users to FAQ information and service teams for assistance.
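As a minimal sketch of how such an integration might triage incoming messages, the hypothetical handler below maps a customer message to one of the tasks above and falls back to a live agent when no task matches. All names are illustrative -- a production system would call a generative AI vendor's API rather than match keywords:

```python
# Minimal sketch of a customer service chatbot router.
# Keyword matching stands in for a real generative AI call;
# intent names and keywords are illustrative only.

INTENT_KEYWORDS = {
    "product_question": ["price", "feature", "warranty", "spec"],
    "order_support": ["order", "return", "exchange", "refund"],
    "faq": ["hours", "shipping", "policy"],
}

def route_message(message: str) -> str:
    """Map a customer message to a task, escalating when unsure."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return intent
    # No recognized task: direct the user to a live service team.
    return "escalate_to_agent"

print(route_message("I want to return my order"))    # order_support
print(route_message("Can you explain this noise?"))  # escalate_to_agent
```

The explicit escalation path reflects the common design choice of letting the bot handle routine requests while routing anything it cannot classify to a human agent.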
Every business is different, so customer service use cases can vary dramatically. However, many contact centers use AI to replace or supplement web- or app-based chats they previously fielded with live customer service agents.
Risks of ChatGPT in customer service
Despite their benefits, generative AI systems like ChatGPT have several drawbacks. Customer service managers must understand the following risks before they hand over control to a bot.
1. Fabricated information
Generative AI bots are only as useful as the information they possess. In some cases, AI can interpret information incorrectly or use insufficient or outdated information. If the AI system learns inaccurate or fabricated data, it may generate incorrect responses to user questions.
Generative AI chatbots can also create "hallucinations," or coherent nonsense, in which chatbots confidently and eloquently make false statements. In this situation, the tool generates random and inaccurate outputs that may confuse users. Fabricated information can cause users to lose trust in the tool, diminishing its usefulness.
2. Biased information
AI models can learn to identify and describe objects, such as chairs and benches, as developers train them on images and textual descriptions of these objects. Although AI models have little opportunity to pick up bias from images of chairs, ChatGPT consumes and analyzes data from billions of webpages. Therefore, racial and political bias found on the internet can roll over to the tool's outputs.
If an organization's generative AI chatbot generates racist, sexist or politically biased responses -- which ChatGPT has done -- the organization may experience damage to its brand.
Organizations that want to invest in a generative AI tool should understand how different vendors train their products and whether they apply safeguards to reduce risks of bias. If organizations plan to train a tool themselves, they should also do their best to keep biased information out of their training data.
3. Question misinterpretation
Even if users carefully write their questions, AI systems like ChatGPT may incorrectly focus on specific keywords or phrases within complex questions that aren't critical to users' intents. This incorrect interpretation makes the AI tool generate misleading or inaccurate outputs. If this happens, customers may grow frustrated as they repeatedly rewrite their questions until the tool understands them.
4. Inconsistent answers
If developers train their generative AI chatbots on comprehensive data sets, these systems can respond consistently to customer questions. However, the chatbot may return inconsistent results if the training data set lacks completeness. Customers want clear answers to their problems, so chatbots that offer different answers to the same question can damage CX.
5. Lack of empathy
ChatGPT can simulate empathy in its responses, but it still lacks the compassion and empathy of a live agent. If an angry customer engages with an AI-backed bot that lacks true empathy, they can become increasingly frustrated.
6. Security concerns
As with any network-connected technology, bad actors can covertly insert false information into generative AI systems. For example, they may inject malicious links or phishing content into the data these systems learn from, which the AI can then pass along to users.
Platforms may also collect and store sensitive details that bad actors could access or leak, so organizations must take steps to minimize the risk of AI breaches.
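One mitigation organizations can apply, sketched below under the assumption that bot replies are plain text, is to strip any link that is not on a company-approved allowlist before the reply reaches the customer. The domain names are hypothetical:

```python
import re

# Hypothetical allowlist of domains the company controls.
ALLOWED_DOMAINS = {"support.example.com", "www.example.com"}

URL_PATTERN = re.compile(r"https?://([^/\s]+)\S*")

def scrub_links(reply: str) -> str:
    """Replace any URL outside the allowlist with a placeholder."""
    def check(match: re.Match) -> str:
        domain = match.group(1).lower()
        return match.group(0) if domain in ALLOWED_DOMAINS else "[link removed]"
    return URL_PATTERN.sub(check, reply)

print(scrub_links("See https://support.example.com/faq for details."))
print(scrub_links("Claim your prize at http://evil.example.net/win now!"))
```

Filtering outputs this way does not prevent data poisoning itself, but it limits the damage a compromised model can do in a customer-facing channel.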
Should organizations use ChatGPT for customer service?
IT professionals shouldn't jump in with both feet. Instead, they should take a more cautious approach to generative AI implementation within customer service processes. Sometimes AI tools cannot offer the functionality that customer service professionals need. In other situations, the tool may meet the organization's needs with additional training.
Organizations that want to use generative AI in customer service should treat the system like a brand-new employee that still needs to learn the company's processes. These systems usually need training before managers let them directly interact with customers or clients.
Customer service leaders must ensure the tool's outputs align with their organization's customer service best practices. However, as AI tools like ChatGPT evolve, developers may find ways to reduce their risks.