
AI systems need Energy Star-like rating, says policy expert

One expert says policymakers and data scientists need to work together to create a better framework for AI use, which could include establishing a rating system.

Getting AI policy right means policymakers and data scientists need to work together to create a better framework for companies accessing the technology.

That's according to Nicol Turner Lee, director of the Brookings Institution Center for Technology Innovation, who spoke during the recent MIT Technology Review EmTech Digital event.

Governments, including the European Union, are trying to find ways to regulate AI technology that has permeated society and, in some cases, negatively affected consumers in areas such as the criminal justice and financial systems. While the technology itself isn't to blame, Turner Lee said the lack of rules around AI use and accountability for AI creators is driving some of the current problems.


"How are we developing, not just inclusive and equitable AI, but legally compliant AI?" she asked during the EmTech Digital event.

Turner Lee said there are approaches both regulators and AI systems creators can take to begin creating a better framework for AI use.

Setting guardrails, incentivizing due diligence

Not all AI algorithms and systems are harmful, Turner Lee said, so policymakers should identify the most sensitive, high-risk areas AI could affect and focus regulatory efforts there.

For example, guardrails should be set for AI systems within financial products that make loan or mortgage determinations for consumers, and to ensure consumer privacy isn't violated by facial recognition technology.

"Policymakers should define the guardrails," she said.

It's also "incumbent upon developers to do their due diligence" as they create AI systems, Turner Lee said.

Data scientists should be responsible and accountable for their AI systems, which includes completing all testing on a product and embedding the right auditing tools.

"That is not the responsibility of government; that is the responsibility of the scientist or the agency or corporation licensing the autonomous system," she said.

Policymakers can support that due diligence by giving companies incentives, such as a rating system, to create transparent algorithms.

Indeed, Turner Lee suggested that a rating system comparable to Energy Star assessments for appliances should be created so consumers know if an AI system meets federal standards. Energy Star ratings show that an appliance meets federally mandated guidelines for energy efficiency.

"What we can do is create transparency and we can produce a culture of excellence that allows us to have these components in place," she said.

Turner Lee said policymakers and data scientists should work together to define the rules of the road for AI.

"You need people working together and collaborating on what those principles are," she said.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
