AI regulations are coming and will be a significant focus for lawmakers in the U.S. and globally in 2022.
That's according to Beena Ammanath, executive director of the Global Deloitte AI Institute, who sees a fast-moving worldwide push for AI regulation. As artificial intelligence technology use increases across enterprises, Ammanath said it will be important for governments, the private sector and consumer groups to develop regulations for AI and other emerging technologies.
Broadly, advocates for AI regulation seek transparency for black box algorithms and the means to protect consumers from bias and discrimination.
The U.S. has been slow to regulate AI compared to the U.K., Germany, China and Canada. The U.K. released its 10-year AI strategy in September, which includes building a regulatory and governance framework for AI. The U.K.'s Office for Artificial Intelligence is expected to propose regulations in early 2022.
The U.S. is leading in AI adoption, which means U.S. officials have an obligation to take a leadership role in AI regulations as well, Ammanath said. However, she expects regulations to vary based on geography, country and culture, similar to variations in privacy legislation.
"I believe that we need an independent, government-led effort on AI ethics to ensure that AI systems are fair, trustworthy and free of bias," Ammanath said.
U.S. officials work on AI bill of rights
Earlier this year, the White House Office of Science and Technology Policy, led by President Joe Biden's science adviser Eric Lander, began working on an "AI bill of rights" intended to protect consumers from potential harm caused by AI technology.
The proposed AI bill of rights would give consumers the right to transparent and explainable AI, particularly as AI systems are used to approve credit and home mortgages, as well as make other impactful decisions.
However, while the AI bill of rights is a good starting point, Ammanath said it should lead to more detailed policies. Specifically, Ammanath said she would like to see greater specificity around the definition of ethical AI, as well as regulations and policies that account for the nuances in how AI is applied across various industries.
"The challenge with any broad policy is that ethical and trustworthy AI can mean very different things based on AI applications and the industry in which it is applied," she said.
Indeed, when considering a broad range of principles to apply to AI, it's also crucial to consider challenges such as definitions of fairness, said Gartner analyst Frank Buytendijk.
As lawmakers and organizations look at principles for AI, Buytendijk said the top five most common principles often considered are:
- AI should be human-centric and socially beneficial.
- AI should be fair in its decision-making.
- AI should be transparent and explainable.
- AI should be safe and secure.
- AI should be accountable.
However, each of those principles faces issues, he said.
For example, Buytendijk said, do IRS fraud protection models need to be transparent? "And if you spend a lot of money building certain algorithms, do they represent intellectual property? The bill of rights would have to reflect that there are underlying dilemmas, gray areas."
Different approaches to AI regulation
Other nations often take different approaches to monitoring and regulating how technologies progress, Buytendijk said.
"The U.S. way is leave it more to the markets and if organizations do the wrong thing, the customers will go somewhere else," he said. The EU, in contrast, is more regulation-based.
Regions like the EU cannot outspend the U.S. or China in terms of AI development, but they can take a leadership position in crafting regulations for developing AI responsibly, much like they did with the General Data Protection Regulation (GDPR) privacy law, Buytendijk said.
Since GDPR took effect, other regulatory regimes have adopted its principles and applied them in their own ways -- the California Consumer Privacy Act being one example. Buytendijk said something similar could happen with AI regulations.
For CIOs and companies investing heavily in AI, Buytendijk said it will be important to plan for innovation in terms of bias detection and management, as well as explainability and transparency, heading into 2022.
"Prioritize those because the more you achieve there, the likelier you are to not run into too much trouble with AI regulations coming from various countries," he said.
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.