President Joe Biden's chief science advisor, Eric Lander, wants an "AI bill of rights" to protect consumers from a technology that can have a significant impact on their lives. AI-enabled systems are being used to approve credit and home mortgages, as well as make employment and healthcare decisions, and they need to be held accountable, he said.
An AI bill of rights, Lander said, would give consumers a right to transparency and explainable AI, a technology approach that provides insight into algorithmic processes. Policymakers could then use the AI bill of rights as the basis for regulation and legislation, he said.
"Soon after ratifying the Constitution, Americans adopted a bill of rights to guard against the powerful government we just created," Lander said Tuesday at the 2021 Stanford University Human-Centered Artificial Intelligence Fall Conference. "We probably need a bill of rights to guard against the powerful technologies we've created."
Lander was appointed by Biden as director of the White House Office of Science and Technology Policy earlier this year. He is on leave from his positions as professor of biology at MIT and professor of systems biology at Harvard Medical School.
Lander said his office announced the AI bill of rights project in October. The project focuses on AI systems that make direct judgments that affect benefits and opportunities for individuals. Some of the rights could include a right for individuals to govern their personal data and the right to know what data was used to create and test an AI algorithm.
"We're focused on ways to translate those rights that individuals have in a pre-AI world into a world where AI is involved in many important decisions," he said.
Call to action
Lander said he doesn't intend for his office to develop the AI bill of rights alone.
During the conference, Lander assigned "homework" to experts listening in, asking for their input on what sort of rights should be included in an AI bill of rights, as well as how to make it technically feasible and enforceable.
Lander said several questions still need to be answered, such as how AI developers can produce systems that adhere to the AI bill of rights.
Lander said there will ultimately need to be human accountability behind AI systems. "An actual human who can explain and defend and be held responsible for a decision," he said.
Lander said his office aims to build an AI bill of rights with teeth, suggesting enforcement could come through multiple avenues, such as federal and state governments' procurement policies, new laws or litigation.
"We want to really think through things that could be practical to protect people," he said. "We see this as a way not to limit innovation. We see this as a way to improve the quality of products by not rewarding people who cut corners and instead setting ground rules that reward people who would produce safe, effective, fair, equitable products."
The focus on AI regulation has increased in 2021, as U.S. regulatory agencies like the Federal Trade Commission attempt to regulate AI through existing laws such as the Fair Credit Reporting Act, a law that wasn't created with technologies like AI in mind. States are also proposing their own AI legislation.
Though interest in regulating AI has increased, the federal government has yet to regulate the technology. In Europe, the European Commission has already proposed an AI regulatory framework.
There are concerns that the U.S. is not "AI-ready," according to a report issued earlier this year by the National Security Commission on Artificial Intelligence. The Biden administration has until recently been vague on its approach to AI, though investing in AI innovation is mentioned in Biden's recently passed infrastructure bill.
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.