
Stakeholders want more than AI Bill of Rights guidance

While organizations like The Brookings Institution applaud the White House's Blueprint for an AI Bill of Rights, they also want to know when enforceable AI rules will be coming.

The White House's Blueprint for an AI Bill of Rights outlines ethical ways for businesses to use artificial intelligence, including technical resources for building ethical AI -- but experts still want to see actual AI regulation come to fruition.

When the White House Office of Science and Technology Policy (OSTP) released the blueprint in October, many saw it as a "welcome and important step," but it's not enough, said Alex Engler, a fellow in governance studies at The Brookings Institution, which hosted a webcast Monday to discuss the AI Bill of Rights. Engler was one of four panelists breaking down the blueprint's impact.

The AI Bill of Rights offers guidance and resources to businesses, but is not an enforceable law, meaning businesses and federal agencies don't need to comply with the ethical AI principles it lays out, Engler said.

Instead, the AI Bill of Rights catalyzes federal agencies to act on the guidance and points the way for policymakers to consider AI regulation, said Harlan Yu, executive director of technology and equity nonprofit Upturn, based in Washington, D.C.

"This document in the long term, will be judged not by what's on paper but all the concrete actions that are going to flow from this document, particularly from the federal agencies," Yu said during the panel discussion. "We're talking about prospective rule-making, enforcement actions, regulatory guidance and legislative actions that really need to put these principles into practice."

How companies can use the AI Bill of Rights

The AI Bill of Rights applies to all automated systems that significantly impact people, such as AI systems that make decisions about housing, employment and healthcare, said Sorelle Friedler, OSTP's assistant director for data and democracy and a panelist.

The blueprint lays out five core protections for individuals when it comes to AI: the right to data privacy, notice when an automated system is in use, the ability to opt out, protection from algorithmic discrimination, and protection from unsafe or ineffective systems.

Additionally, the blueprint includes a technical companion to help businesses and federal agencies put the five core principles into practice.

Several federal agencies have already responded to the guidance, Friedler said. The Department of Labor released a report on what the AI Bill of Rights means for workers and is increasing enforcement of its required surveillance reporting. Meanwhile, the Department of Health and Human Services issued a proposed rule including a provision prohibiting algorithmic discrimination in clinical decision-making.

"These are actionable safeguards that are technologically realizable and necessary," Friedler said about the blueprint during the webcast.

Friedler said she hopes "putting the weight of the White House" behind the blueprint and providing technical guidance to businesses helps "move the conversation forward from principles into practice."

The future of AI regulation in the U.S.

Brookings' Engler described AI as a legislative challenge and said he doesn't expect to see any current bills around AI regulation pass anytime soon.

Due to the complexity of AI regulation, Engler said he expects the adoption of multiple rules and regulations rather than a single law to help drive and govern ethical AI use. Indeed, Friedler said numerous bills addressing different aspects of AI, from data privacy to algorithmic transparency, have been introduced.

"While I'm hoping to see some legislation, it's not going to be 'we passed an AI law and we're done,'" Engler said.

Jerome Greco, a public defender at the Legal Aid Society and a panelist, said the lack of legislation leaves few legal means to tackle some of AI's biggest problems, such as law enforcement's use of facial recognition.

"That is one of the flaws, that the AI Bill of Rights is not legislation," Greco said. "It could lead to that, and I'm hopeful that it does on many fronts, but it currently doesn't, and our courts are not set up to handle these [cases]."
