Federal agencies promise action against 'AI-driven harm'
Federal enforcement agencies took aim at artificial intelligence systems Tuesday, noting that the same consumer protection laws that prevent bias and discrimination also apply to AI.
Federal enforcement agencies warned Tuesday that AI vendors and their customers will be held accountable for AI systems that result in bias or discrimination in housing, lending and employment opportunities.
Federal Trade Commission Chair Lina Khan said Tuesday that there is "no AI exemption" under the numerous consumer protection laws, such as the FTC Act, the Equal Credit Opportunity Act and the Fair Credit Reporting Act. The laws are designed to protect consumers from unfair bias and discrimination -- a growing concern for regulators globally, as AI systems widely used by agencies and companies in employment and lending decisions can exhibit such behaviors.
Khan said the FTC has a long record of adapting enforcement of existing laws to protect Americans from evolving technological risks, including from AI systems.
"Today's statement makes clear that AI technologies are covered by existing laws," Khan said at a press briefing. "These tools are not emerging in some type of legal vacuum. To the contrary, each agency here today has legal authorities to readily combat AI-driven harm."
The commitment to enforcing existing rules for AI systems was jointly announced by the FTC, the U.S. Department of Justice Civil Rights Division, the Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission. The action comes amid a growing conversation about the need for new rules targeting AI systems.
Federal agencies unite on enforcing consumer protection laws for AI systems
AI systems could face increased scrutiny as Tuesday's announcement by the federal enforcement agencies indicated an "important step forward to affirm existing law and rein in unlawful, discriminatory practices perpetrated by those who deploy these technologies," said Rohit Chopra, director of the Consumer Financial Protection Bureau.
Chopra said the CFPB is taking action to protect consumers from black box AI models by requiring financial institutions and other businesses to clearly explain how an AI system decides to deny credit or a loan. Companies, he said, must take responsibility for their use of AI systems.
"When consumers and regulators do not know how decisions are being made by artificial intelligence, we're unable to participate in a fair and competitive market free from bias," he said during the press briefing.
The FTC's Khan also indicated the agency will monitor competition among AI systems and the companies deploying them.
"A handful of powerful firms today control the necessary raw materials, not only the vast stores of data, but also the cloud services and computing power that startups and other businesses rely on to develop and deploy AI products," Khan said. "This control can create the opportunity for firms to engage in unfair methods of competition."
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.