The White House Office of Science and Technology Policy unveiled its Blueprint for an AI Bill of Rights Tuesday, identifying five principles to guide the design and use of automated systems.
The AI Bill of Rights focuses on algorithmic harms, such as bias and discrimination against consumers during hiring and credit checks, as well as rampant social media data collection that threatens consumer privacy. The blueprint only advises businesses on how to act and is unenforceable, but it could influence future legislation and regulation.
The Office of Science and Technology Policy began working on the AI Bill of Rights last year. While some welcome the blueprint, other experts say its lack of enforceability means it will do little to change AI systems that harm consumers.
The five principles are:
- Protection from unsafe and ineffective systems: The AI Bill of Rights recommends pre-deployment testing, risk identification and mitigation for all automated systems.
- Protection from algorithmic discrimination: Businesses should perform system equity assessments as part of the system design.
- Data privacy protection: AI systems should include data privacy protections by default. The blueprint also opposes using AI to surveil behavior where it could affect consumer rights, such as at work, school or home.
- Provisions for notice and explanation: Businesses should notify consumers when automated systems are in use and explain why.
- Provisions for human alternatives: Consumers should have the right to opt out of automated systems and the ability to request to work with a human.
AI Bill of Rights offers guidance, but no enforceability
The Blueprint for an AI Bill of Rights is not a legal requirement. Instead, it acts as a guide for businesses developing and deploying AI systems.
The main problem with the AI Bill of Rights is it has no teeth, said Alan Pelz-Sharpe, founder of market analysis firm Deep Analysis.
Pelz-Sharpe said that at best, government departments might over time adopt some of the blueprint's recommendations and provide some transparency to their use of AI systems. But he said it's a stretch to expect public tech companies to follow the blueprint voluntarily.
"Neither I nor other observers see any logic or reason to trust tech firms to regulate themselves and for them to act ethically," Pelz-Sharpe said. "Yet the government is not only moving slowly to address these concerns, it's still not clear at all if it fully grasps the magnitude of the problems that lie ahead."
Pelz-Sharpe said AI systems are already affecting consumers and the European Union is "years ahead in providing guidance and moving toward enforcement."
"One might have thought that regulating AI and providing enforceable guidelines would be a bipartisan issue that everyone could get behind, but so far there seems to be little interest in the issue," he said.
However, organizations including the Center for Democracy and Technology (CDT) said the blueprint is a step in the right direction.
"The AI Bill of Rights marks an important step in recognizing the ways in which algorithmic systems can deepen inequality," said CDT President and CEO Alexandra Reeve Givens in a statement. "It expresses expectations for safer and fairer data practices -- something to which all entities developing and deploying AI systems should commit."
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.