While the European Union moves forward on building rules to govern artificial intelligence usage, the prospect of similar AI regulation in the U.S. remains far less likely -- despite calls from U.S. lawmakers and companies.
The EU took another step Wednesday toward finalizing the EU AI Act after its legislative arm, the European Parliament, approved the legislation. Finalizing the AI Act involves negotiating with EU countries in the European Council to determine the law's final shape, which is expected by the end of the year.
Once finalized, the AI Act would provide the world's first rules on AI as the U.S. continues to mull the technology's risks and benefits.
Overarching AI regulation is not the path the U.S. will take, said Michael Capps, former president of Epic Games and current CEO of Diveplane Corp., a responsible AI platform provider. Instead, he said businesses will rely on guidance from the White House with tools such as the Blueprint for an AI Bill of Rights. Capps spoke during MIT Technology Review's EmTech Next event this week.
"The chances of Congress actually regulating are super slim," he said.
But lawmakers and companies in the U.S. are pressing for more regulation. OpenAI CEO Sam Altman appeared before the Senate for a high-profile hearing in May asking for measures like licensing requirements for AI systems or creating an AI regulatory agency. Following Altman's testimony, Sen. Michael Bennet (D-Colo.) introduced a bill to create a federal agency to oversee AI.
Why AI regulation won't be the U.S. answer
Congress doesn't know enough about technologies like AI to regulate them properly, and any AI regulation proposed now would quickly become outdated, Capps said.
He added that he believes the answer lies instead in litigation: individuals bringing lawsuits that demand companies explain how opaque AI algorithms reach their recommendations, especially recommendations that negatively affect people's lives. Litigation will compel companies to pursue transparent AI systems and track training data to help decipher an algorithm's outputs.
"There will be people saying, 'I didn't get the credit score I should have because of your black box algorithm,'" he said. "That will be what prevents companies from doing things that are unethical with AI."
Capps said he also believes the responsibility is on companies to come together and create AI standards. Industry-led global standards-setting has occurred with other technologies, such as Wi-Fi and Bluetooth.
EU moves toward AI regulation
The AI Act looks to break AI systems down into different categories of risk and establish obligations for both AI providers and users depending on the level of risk.
The AI Act bans what lawmakers deem "unacceptable risks," such as real-time, remote biometric identification systems -- including facial recognition -- and the classification of people based on social characteristics such as behavior and socioeconomic status.
Capps said that although he doesn't necessarily agree with the EU's approach, he appreciates that the AI Act calls out risky behaviors, including that AI systems can affect lives.
The EU AI Act demonstrates the increasing intersection between data privacy and AI governance, said Caitlin Fennessy, vice president and chief knowledge officer at the International Association of Privacy Professionals, in a statement Wednesday.
"AI governance will demand attention from a multidisciplinary field, but the latest Parliamentary developments show the connections between privacy and AI governance are likely to grow rather than fade," Fennessy said.
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.