OpenAI CEO advocates for AI regulation

Industry leaders, including OpenAI CEO Sam Altman, say regulation will be necessary to mitigate harms caused by artificial intelligence.



Artificial intelligence regulation will be vital to realizing the benefits of generative AI tools while mitigating the risks the technology poses, Sam Altman, CEO of ChatGPT creator OpenAI, told a Senate panel Tuesday.

Altman said his greatest fear is that the AI industry will "cause significant harm to the world." Generative AI tools are raising concerns ranging from the creation of incorrect and harmful information to the possible widespread displacement of jobs.

"I think if this technology goes wrong, it can go quite wrong," Altman said during the hearing. "And we want to be vocal about that. We want to work with the government to prevent that from happening."

Altman testified before the Senate Judiciary Subcommittee on Privacy, Technology and the Law in the first of a series of hearings intended to help determine a congressional course of action on AI regulation.

During the hearing, several U.S. senators, including Subcommittee Chair Sen. Richard Blumenthal (D-Conn.), emphasized the need to get ahead of rapidly advancing AI tools by implementing appropriate guardrails for the technology. Blumenthal said it's incumbent upon Congress to write rules for AI -- something Congress failed to do for social media platforms as they rose to prominence.

"Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past," Blumenthal said. "Congress failed to meet the moment on social media. Now, we have the obligation to do it on AI before the threats and the risks become real."

Experts differ on routes to AI regulation

Blumenthal suggested some methods of AI regulation could include developing scorecards for AI systems that disclose information such as the data on which AI models are trained. He also recommended limiting AI use in decision-making that affects consumers' livelihoods.

Blumenthal said AI companies should also be required to rigorously test AI systems before releasing them to the public.

Indeed, Altman said that before releasing any new system, OpenAI conducts its own testing and engages external parties to audit the system. Before releasing its latest AI model, GPT-4, OpenAI spent six months on external red teaming and testing, he said. However, Altman agreed with Blumenthal that regulatory interventions such as licensing and testing requirements for the development of AI models will be necessary to address risks.

"This is a remarkable time to be working on artificial intelligence, but as this technology advances, we understand that people are anxious about how it can change the way we live," Altman said. "But we believe that we can and must work together to identify and manage the potential downsides so we can all enjoy the tremendous upsides."

Some senators proposed creating a new government agency to oversee AI regulation, an idea that Christina Montgomery, IBM's chief privacy and trust officer and a witness at the hearing, opposed. Montgomery pointed to claims by multiple enforcement agencies, including the Federal Trade Commission, that they can enforce existing consumer protection laws with regard to AI systems.

Instead, Montgomery urged Congress to govern the deployment of AI in specific use cases rather than regulating the technology itself.

Montgomery said that would involve defining risks and creating different rules for different levels of risk. For example, a risk that senators highlighted during the hearing was bad actors using tools such as ChatGPT to create and spread election disinformation, which Montgomery said would represent a high-risk category.

Gary Marcus, a professor of psychology and neuroscience at New York University and another witness at the hearing, highlighted the need for third-party access to AI systems for testing, particularly in connection with the idea of developing scorecards for AI systems.

Marcus said technical challenges remain, including understanding how advanced AI models generalize information from large data sets.

"It's important for scientists to be part of that process and that we have much greater transparency about what goes into these systems," Marcus said.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
