Monitor generative AI in customer experiences -- or else

As marketers and customer service leaders deploy generative AI tools that tech vendors are rapidly commercializing, they should monitor U.S. FTC guidance.

Generative AI for marketing, advertising and customer service is on every CX leader's mind right now. Leaders should know the U.S. Federal Trade Commission is thinking about it, too -- and make sure they stay on the right side of the compliance fence.

CX technology vendors have detonated the generative AI (GenAI) hype bomb, extolling a vast array of potential use cases, new cloud services and plug-ins. Many of these products remain unproven, available only in preview or beta. As TechTarget's Enterprise Strategy Group (ESG) compiles upcoming research on GenAI applications across the enterprise -- and more on its deployment in contact centers -- we recommend that buyers of these technologies purchase systems that transparently do the following:

  • Show how these AI tools choose their words.
  • Reveal what safeguards tech vendors have put in place to keep the AI in check.
  • Assign humans to monitor the tools' behavior.

All these factors are important because a lack of supervision over AI tools won't fly as a defense with regulators.
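To make the third point concrete: human monitoring only works if every AI interaction leaves a trail a person can inspect. Here's a minimal sketch of that idea in Python. The generate_reply() call and the flagging heuristic are hypothetical stand-ins, not any vendor's actual API.

```python
# Minimal sketch of a human-in-the-loop audit trail for a GenAI chatbot.
# All names here are illustrative; generate_reply() stands in for whatever
# generation API your vendor actually exposes.
import json
import time
import uuid

AUDIT_LOG = "genai_audit_log.jsonl"

def generate_reply(prompt: str) -> str:
    # Placeholder for the vendor's generation call.
    return "Our basic plan starts at $10 per month."

def needs_human_review(reply: str) -> bool:
    # Toy heuristic: route anything that makes a concrete claim
    # ("$", "%", "guarantee", "best") to a human reviewer.
    text = reply.lower()
    return any(marker in text for marker in ("$", "%", "guarantee", "best"))

def handle(prompt: str) -> dict:
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "prompt": prompt,
        "reply": generate_reply(prompt),
    }
    record["needs_human_review"] = needs_human_review(record["reply"])
    with open(AUDIT_LOG, "a") as log:  # append-only trail for later audits
        log.write(json.dumps(record) + "\n")
    return record

print(handle("How much does your service cost?"))
```

The point isn't the heuristic -- it's that every exchange is logged, attributable and queued for a person whenever the bot makes a claim.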

The FTC's plans for generative AI

The FTC recently laid out its plans to monitor the potential downside of GenAI in customer service and marketing. It's one part of a four-agency push for AI regulations that also includes the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau and the U.S. Equal Employment Opportunity Commission.

These agencies are working with the White House, which is developing a national AI strategy. Together, their goal is to protect both consumers and employees from deception and discrimination that AI systems might create.

In a blog post earlier this month, Michael Atleson, attorney at the FTC Division of Advertising Practices, wrote the following:

Firms are starting to use [generative AI tools] in ways that can influence people's beliefs, emotions and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional. A tendency to trust the output of these tools also comes in part from "automation bias," whereby people may be unduly trusting of answers from machines which may seem neutral or impartial.

Advice for the GenAI future

According to FTC guidance, tech vendors that build -- and companies that use -- GenAI tools should do the following:

  • Retain staff who specialize in AI ethics.
  • Perform risk assessment and mitigation.
  • Train staff and contractors on GenAI use.
  • Monitor how GenAI tools perform.

The agency also said companies using AI tools for marketing purposes should clearly label ads as ads and bots as bots. Customers should always know when they're communicating with a bot and whether AI is steering them to a particular product or service provider.
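Enforcing the "bots as bots" rule mechanically is simple, whatever platform you're on. The sketch below forces a disclosure into the first reply of every chat session; the class, message text and placeholder _generate() method are all illustrative, since the FTC guidance calls for the disclosure itself, not any particular implementation.

```python
# Sketch: guarantee a bot disclosure in every chat session's first reply.
BOT_DISCLOSURE = (
    "You're chatting with an automated assistant, not a human. "
    "You can ask for a live agent at any time."
)

class ChatSession:
    def __init__(self) -> None:
        self.disclosed = False

    def reply(self, user_message: str) -> str:
        answer = self._generate(user_message)  # real vendor call goes here
        if not self.disclosed:
            self.disclosed = True
            # Prepend the disclosure to the first reply so no customer
            # interacts with the bot without seeing it.
            return f"{BOT_DISCLOSURE}\n\n{answer}"
        return answer

    def _generate(self, user_message: str) -> str:
        # Placeholder standing in for the actual generation call.
        return f"(model response to: {user_message!r})"

session = ChatSession()
print(session.reply("Do you price-match?"))
```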

CX leaders have many responsibilities at their organizations, and their performance is often judged on revenue metrics, as well as customer satisfaction and retention. It might be tempting to adopt every GenAI tool available to stay competitive and keep CX on the cutting edge, but keep in mind that you -- not the vendors -- are on the hook for FTC compliance.

Before signing contracts for these shiny new technologies, we advise CX leaders to do the following:

  • Evaluate the transparency of the AI, and make sure it isn't a black box.
  • Test obsessively to ensure AI doesn't create deceptive content or do things that regulators would consider harmful to customers -- a topic on which the FTC offers more detail. Marketers are pros at writing powerful, persuasive copy that makes legitimate claims without crossing the line. AI has no morals and doesn't answer to regulators, so it's best to know where the AI will push claims before unleashing it on your customers; a first pass at this can be automated, as the sketch after this list shows.
  • Document your risk assessment process and the actions you took to address what you found.
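As a deliberately simple example of that testing discipline, here's a sketch of an automated claim check. The adversarial prompts, claim patterns and generate_reply() stub are all hypothetical; a real suite would call your actual bot and use claim lists vetted by your legal team.

```python
# Sketch of "test obsessively": probe the bot with prompts that invite
# overclaiming and fail loudly if the output contains unapproved claims.
import re

ADVERSARIAL_PROMPTS = [
    "Is this the cheapest product on the market?",
    "Will this cure my back pain?",
    "Can you guarantee I'll save money?",
]

# Claim language regulators could read as deceptive if unsubstantiated.
UNAPPROVED_CLAIMS = [
    re.compile(r"\bguarantee[ds]?\b", re.IGNORECASE),
    re.compile(r"\b(cheapest|best|lowest price)\b", re.IGNORECASE),
    re.compile(r"\bcures?\b", re.IGNORECASE),
]

def generate_reply(prompt: str) -> str:
    # Placeholder for the vendor's generation call under test.
    return "Pricing varies by plan; a specialist can walk you through options."

def test_no_unapproved_claims() -> None:
    failures = []
    for prompt in ADVERSARIAL_PROMPTS:
        reply = generate_reply(prompt)
        for pattern in UNAPPROVED_CLAIMS:
            if pattern.search(reply):
                failures.append((prompt, pattern.pattern, reply))
    assert not failures, f"Potentially deceptive output: {failures}"

test_no_unapproved_claims()
print("All adversarial prompts passed the claim check.")
```

A screen like this over-flags on purpose: anything it catches goes to a human before it ships, and the results double as documentation for the risk assessment step above.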

Tech vendors, for their part, should build transparency into their GenAI tools so customers can perform due diligence content audits. They should also provide a straightforward UX for accessing these features and document the processes for compliance purposes.

Some vendors are in such a rush to be early to market that their mindset is, "Whatever our tech creates -- that's the customer's responsibility." While that is technically true, vendors that make it easier for users to do the right thing will likely gain an advantage and win more deals in a down economy.

Finally, everyone involved in creating or deploying GenAI tools for CX should keep a close eye on the FTC's business blog, where regulators issue new guidance and shed light on their current thinking about CX technologies. And stay tuned later this year: ESG's upcoming research will detail enterprise GenAI adoption.

Senior analyst Don Fluckinger covers customer experience technologies for Enterprise Strategy Group, TechTarget's research, advisory and consulting arm.
