
U.S. senators sharpen focus on AI regulations

Licensing AI systems, creating a federal AI agency, and establishing third-party testing and auditing regimes are just some of the AI regulation ideas U.S. senators are pursuing.



U.S. senators are using a bevy of experts as a sounding board for their ideas on potential AI regulations.

In the second of a series of hearings to help guide senators' work on AI regulations, the Senate Judiciary Subcommittee on Privacy, Technology and the Law spoke with leading AI experts, asking for their thoughts on ways to regulate the rapidly evolving technology.

During the first of these hearings, OpenAI CEO Sam Altman notably advocated for AI regulation, including the idea of establishing a new federal agency to oversee the technology. Shortly after, Sen. Michael Bennet (D-Colo.) introduced a bill to do just that. Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) also introduced an AI bill called the No Section 230 Immunity for AI Act, which would give consumers the right to take companies to court over harmful AI-generated content such as deepfakes.

Blumenthal, chair of the subcommittee, made clear during Wednesday's hearing that Congress isn't done introducing legislation targeting AI. He said it's not enough to provide guiding principles such as the Blueprint for an AI Bill of Rights, or to rely on the voluntary commitments that companies including Microsoft, Google, Meta, Amazon, Anthropic, Inflection and OpenAI made to test AI models before deployment and watermark AI-generated content. Though Blumenthal said he appreciates the commitments, which demonstrate that testing AI systems and watermarking content are possible, he argued they shouldn't negate action by Congress.

"The goal for this hearing is to lay the ground for legislation," he said. "To use this hearing to write real laws -- enforceable laws."

U.S. senators propose ideas for AI regulation

During the hearing, Blumenthal and other senators queried experts on a number of potential routes for AI regulation, including the following:

  • Establishing a licensing regime for companies engaged in high-risk AI development.
  • Creating an AI testing and auditing regime conducted by objective third parties.
  • Creating a new federal agency overseeing AI.
  • Imposing legal limits on certain AI uses for elections and nuclear warfare.
  • Requiring watermarking and transparency when AI is being used.
  • Enhancing access to data and AI models for researchers.

Dario Amodei, CEO of AI safety and research company Anthropic, said he supported the idea of a testing and auditing regime for powerful AI models. He noted, however, that more research is needed through organizations such as NIST to develop proper testing methods.

"We should recognize the science of testing and auditing for AI systems is in its infancy," he said during the hearing. "It is not currently easy to detect all the bad behaviors an AI system is capable of without first broadly deploying it to users, which is what creates the risk. Thus, it is important to fund both measurement and research on measurement to ensure a testing and auditing regime is actually effective."


Yoshua Bengio, founder and scientific director of Mila - Quebec AI Institute, also supported the idea of allowing independent audits of AI systems as well as enhancing research around the safe development, deployment and use of such systems.

Though supportive of the ideas senators raised during the hearing, Stuart Russell, professor of computer science at the University of California, Berkeley, said it's important to go even further and impose consequences on AI systems found to cause harm. He suggested going as far as recalling those systems from the market.

"Regulation is often said to stifle innovation," Russell said during the hearing. "But there is no real tradeoff between safety and innovation. An AI system that harms human beings is simply not good AI."

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
