2024 election to shape U.S. climate policy's fate Inflation Reduction Act boosts clean energy incentives

Biden EO aims to build foundation for AI legislation

The Biden administration is tasking federal agencies with making standards for AI systems development that could shape what legislation looks like at the state and federal level.

President Joe Biden's administration today released a sweeping executive order focused on crafting standards for AI systems development, which could underpin future legislation in Congress or states across the U.S.

The EO firstly requires developers of AI systems to share safety test results with the U.S. government, building off commitments the Biden administration secured earlier this year from AI companies to build safe and trustworthy systems. The EO also directs federal agencies to begin developing AI safety standards, including a request for guidance from the Department of Commerce on watermarking methods to label AI-generated content.

While the EO doesn't issue mandates around the use of AI, it does address the growing need for AI standards that's commonly pointed out in congressional proposals for AI legislation, said Anna Lenhart, a policy fellow at the George Washington University Institute for Data, Democracy and Politics.

"You can't say, 'Hey, company, you have to watermark,' if there's no watermark standard," she said.

Through the EO, Lenhart said, the White House is using its authority to set AI best practices and standards so that when AI legislation does pass, the standards are ready and "we're not spending two years writing the standards before we can enforce the laws."

Congress has yet to pass AI legislation, while governments around the world, including the European Union, advance their own rules of the road, prompting the Biden administration to take action.

"Congress has stalled, and that has ceded a lot of tech policy decisions to the executive branch and to the courts -- and we're seeing that across the board," Lenhart said.

Federal agencies to face challenges developing AI standards

The executive order directs federal agencies to develop certain AI safety standards that might prove challenging, said Hodan Omaar, a senior policy analyst at the Center for Data Innovation, in a statement.

For example, the EO states that NIST will set standards for "extensive red-team testing to ensure safety before public release" of AI systems. Red-team testing is a tactic to uncover weaknesses in a system's security controls. The EO also says agencies funding life science projects will set standards for biological synthesis screening to "protect against the risks of using AI to engineer dangerous biological materials."

What we want is for Congress to pass a bill that then makes these things move from standards and best practices to mandates.
Anna LenhartPolicy fellow, George Washington University Institute for Data, Democracy and Politics

"These are all active areas of research where there are no simple solutions," Omaar said. "Policymakers often forget that the reason industry hasn't already adopted certain solutions is because those solutions don't yet exist."

Setting AI standards in those areas will be challenging, which is why it's important for agencies to start now, Lenhart said. It will take time and engagement from multiple sectors and industries, which means Congress might need to eventually direct additional funds to agencies working on the standards while maintaining the overarching goal of passing AI legislation, she said.

"What we want is for Congress to pass a bill that then makes these things move from standards and best practices to mandates," Lenhart said.

The Biden administration is moving in the right direction and taking critical initial steps for creating safe AI systems, but it's only the start, said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, in a statement.

"[T]oday is just the beginning of a regulatory process that will be long and arduous -- and ultimately must require that the companies profiting from AI bear the burden of proving that their products are safe and effective, just as the manufacturers of pharmaceuticals or industrial chemicals or airplanes must demonstrate that their products are safe and effective," he said.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.

Dig Deeper on CIO strategy

Cloud Computing
Mobile Computing
Data Center
Sustainability
and ESG
Close