The Federal Trade Commission made it clear this week that the agency won't hesitate to pursue enforcement against AI systems that harm consumers. But pursuing emerging AI technologies such as OpenAI's ChatGPT could prove a separate, more daunting challenge.
The FTC and other consumer protection agencies issued a joint statement Tuesday affirming that existing consumer protection laws apply to the AI systems frequently used to approve or deny consumers for housing, lending and employment opportunities. The agencies' statement comes as leaders globally struggle to hash out new rules for rapidly evolving AI systems that can harm consumers through algorithmic bias and discrimination.
While the agencies' statement targeted AI systems used in decision-making, a complaint has already been filed with the FTC regarding ChatGPT, a generative AI model that's becoming a popular tool for consumers and businesses alike to generate content. In its complaint, the Center for AI and Digital Policy, a nonprofit research organization, alleges that ChatGPT is "biased, deceptive, and a risk to privacy and public safety" and urges the FTC to investigate OpenAI.
The problem is that generative AI functions differently from the automated decision-making tools used to make life-altering decisions about consumers. Still, it isn't surprising that there have been calls to pause or stop the use of ChatGPT, said Alan Pelz-Sharpe, founder of market research firm Deep Analysis. What would be surprising, he said, is if those calls had any actual impact.
"How do you stop it even if it is found to be in violation?" Pelz-Sharpe said. "With so much money invested and so many invested in these systems' success, I'm not sure there is a fine big enough to deter them. Even then, they could appeal, and these kinds of procedures usually drag on for years and will allow years more expansion of the platform and its capabilities."
Generative AI a new headache for regulators
The adoption of generative AI tools such as ChatGPT has escalated over the last several months, with companies including Microsoft, Google, Amazon, Samsung and others incorporating such tools into both their business and consumer products.
A letter calling for a pause on the development of generative AI such as ChatGPT failed to generate much response in Washington. Instead, Sen. Mike Rounds (R-S.D.) said it would be a detriment to the U.S. to pause development of such technology while countries such as China are investing heavily in AI and their own generative AI tools.
Amid a back-and-forth in Congress on whether emerging AI merits new rules for federal regulators to enforce, the U.S. released its Blueprint for an AI Bill of Rights last year, intending to help businesses navigate ethical implementations of AI systems. Most recently, the Department of Commerce issued a request for comment on AI accountability measures.
During a press briefing Tuesday, FTC Chair Lina Khan said the agency launched a new Office of Technology and plans to bring on experts to help it understand how technologies such as AI function, which will support the agency's enforcement actions. Khan also said the agency is paying attention to competition concerns in the generative AI space.
Indeed, Pelz-Sharpe said that from a commercial perspective, it will be crucial for agencies such as the FTC to watch whether large companies investing in generative AI could create monopolies.
"The sheer scale of the cost to build such systems, plus the expertise and access to computing power and data, means that no firms other than the most prominent tech firms can compete," he said.
Companies responsible for AI use
Given that it's unlikely the FTC will act anytime soon on tools such as ChatGPT, Forrester Research analyst Alla Valente said it's up to businesses to adopt policies and guidelines around the use of such tools.
Using any emerging technology, including ChatGPT, comes with risks, whether from exploitation or manipulation by outside bad actors or from unintentional internal misuse. Without proper business use policies in place, employees using the tool could accidentally input sensitive company data, Valente said.
"Every time you put data in, this is OpenAI, which means that your data will be used to then calibrate that AI," Valente said. "If you don't want data possibly used for that purpose, you need to be careful what information is being plugged in."
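One way a business might act on Valente's advice to "be careful what information is being plugged in" is to screen prompts before they leave the company's boundary. The sketch below is a minimal, hypothetical illustration; the `redact` helper and the patterns it checks are assumptions for demonstration, not part of any OpenAI tooling, and real deployments would need far more comprehensive detection:

```python
import re

# Hypothetical pre-submission filter. These patterns are illustrative,
# not an exhaustive definition of "sensitive data."
SENSITIVE_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-formatted numbers
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # card-number-like digit runs
]

def redact(prompt: str, placeholder: str = "[REDACTED]") -> str:
    """Replace anything matching a sensitive pattern before the
    prompt is sent to an external AI service."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Contact jane.doe@example.com about card 4111 1111 1111 1111"))
```

A filter like this is no substitute for the usage policies Valente describes, but it shows how a policy rule ("no customer identifiers in prompts") can be backed by an automated check.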
Developing policies around the use of tools such as ChatGPT will be critical for businesses going forward because it's too early for regulators to establish guidelines for the technology's use, Valente said.
"We can't wait for regulators or regulation to save us from risks," she said. "Regulations are not proactive -- they're not going to ensure that guidelines are being met until we know what the risks are or we see what can go wrong."
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.