In 2024, U.S. agencies will tackle key objectives from President Joe Biden's executive order on artificial intelligence, focusing on evaluating open source AI systems' risks and benefits as well as crafting comprehensive AI standards.
Biden's order on AI, released in October 2023, detailed the administration's approach to AI data security and policy. It calls for federal data privacy legislation and assigns federal agencies a significant amount of work studying AI issues.
Speaking at CES 2024, Alan Davidson, administrator of the National Telecommunications and Information Administration (NTIA) within the Commerce Department, said realizing the promise of AI means addressing the risks and concerns it raises, including data security and privacy, bias, the risk of disinformation, and effects on the labor market.
"At the federal level, President Biden's AI executive order is the most significant government action to date on AI," he said. "It brings the full capabilities of the U.S. government to bear in promoting innovation and trust in machine learning and AI tools."
This year, the Commerce Department will play a leading role in the administration's AI efforts, Davidson said.
Department of Commerce works on Biden's AI order
In response to Biden's order, the Commerce Department is directing the agencies it oversees to take action. The National Institute of Standards and Technology (NIST) is creating a U.S. Artificial Intelligence Safety Institute, the U.S. Patent and Trademark Office is exploring AI copyright issues, and NTIA has launched an initiative on AI accountability.
NTIA plans to seek public comment on the risks and benefits of open source AI and aims to produce a report by July 2024, according to Davidson.
He said there are conflicting views about open source AI.
"On the one hand, early conversations about open source AI have engendered fear -- fears about making the most advanced frontier models widely available without appropriate or adequate restrictions or safeguards on their use," Davidson said. "On the other hand, we've heard from people concerned about the impact on competition and innovation if only a small set of players control access to the most important models."
NIST, which has already crafted guidance for AI use called the AI Risk Management Framework, has also been tasked with creating a common set of standards, terms and guidelines for AI use.
Sam Marullo, counselor to the Secretary of Commerce, said at CES that NIST is assessing questions such as what it means for an AI model to be safe. That could include red-teaming AI models, or having teams of people try to break a model before it's released. However, that's only one potential approach among other best practices, Marullo said. NIST has also been asked to look at technical aspects of content authentication and watermarking.
The work NIST is doing could eventually serve as guidance for AI regulation in the U.S., he said. Policymakers are already considering AI regulation and what that should look like for U.S. businesses, but legislation has yet to advance.
"There are ideas on the table, but no one is exactly sure what the best set of comprehensive regulations looks like," Marullo said. "But in order to get to regulation, the first thing you have to do is you have to agree on some common terms, best practices and guidelines. That is what NIST, as a standards agency, is really good at. The president asked us in his executive order to do that."
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.