U.S. begins rulemaking for AI developers on riskiest models

The federal government has proposed a rule outlining cybersecurity and developmental reporting requirements for developers of the most powerful AI models and for cloud providers.

The U.S. government on Monday proposed mandatory cybersecurity and developmental reporting requirements for leading AI developers and cloud providers.

The Notice of Proposed Rulemaking, introduced by the Department of Commerce's Bureau of Industry and Security (BIS), outlines a rule that would require developers of the most powerful AI models and cloud providers to give detailed reports to the federal government on the safety of their systems. The rule targets AI systems with the potential to affect national security, the economy, and public health and safety.

The proposed rule would also require reporting on outcomes of red-teaming efforts, which "involve testing for dangerous capabilities like the ability to assist in cyberattacks or lower the barriers to entry for non-experts to develop chemical, biological, radiological or nuclear weapons," according to a press release. The regulatory effort comes nearly a year after President Joe Biden signed an executive order on artificial intelligence that asked AI developers to voluntarily share safety test results with the U.S. government.

"As AI is progressing rapidly, it holds both tremendous promise and risk," Secretary of Commerce Gina Raimondo said in the release. "This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security."

The information gathered from such reporting would help the federal government ensure that powerful AI models meet strong safety and reliability standards, can withstand cyberattacks and can resist misuse by foreign adversaries, according to the release.

Proposed rule for AI developers rooted in national security

The Biden administration made it clear that such a rule was coming, and BIS is "clearly operating within its mandate to collect data about the status of the U.S. AI industry, given its potential impact on national defense," said Daniel Castro, vice president of the Information Technology and Innovation Foundation.


"The U.S. government needs to understand U.S. industrial capabilities in AI, as well as potential vulnerabilities," he said. Indeed, in the release announcing the proposed rule, Alan Estevez, undersecretary of commerce for industry and security, said the reporting requirements would help government officials understand the capabilities of the most advanced AI systems.

However, Castro said the proposed rule stands on weaker ground when it comes to the risk of foreign adversaries exploiting U.S. AI models.

Castro argued it would be better for government leaders to assess the kind of access bad actors have to all AI models, regardless of whether they are produced by U.S. companies, and "consider the viability of various countermeasures."

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
