
Former OpenAI associates fear AGI, lack of U.S. AI policy

Since the U.S. lacks an overarching AI policy, insiders worry that existing AI harms aren't being addressed and that artificial general intelligence could increase risks.

Former OpenAI, Google and Meta employees testified before Congress Tuesday about the threat posed by AI reaching human-level intelligence, calling on members of the Senate Subcommittee on Privacy, Technology and the Law to advance U.S. AI policy that protects against harms caused by AI.

Artificial general intelligence (AGI) refers to an AI system that approaches or matches human-level cognition. William Saunders, a former member of technical staff at OpenAI who resigned from the company in February, testified during the hearing that AGI could cause "catastrophic harm" through systems autonomously conducting cyberattacks or assisting in the creation of novel biological weapons.

Saunders said that while there are significant gaps to close in AGI development, it's plausible that an AGI system could be built in as little as three years.

"AI companies are making rapid progress toward building AGI," Saunders said, pointing to OpenAI's recent announcement of GPT-o1. "AGI would cause significant changes to society, including radical changes to the economy and employment."

He added that no one knows how to ensure AGI systems will be safe and controlled, meaning they could be deceptive and hide misbehaviors. OpenAI has "repeatedly prioritized speed of deployment over rigor," which leaves vulnerabilities open and heightens threats such as the theft of advanced AI systems by foreign adversaries of the U.S., Saunders said. He noted that during his time at the company, OpenAI did not prioritize internal security. There were long periods, he said, when vulnerabilities would have allowed employees to bypass access controls and steal the company's most advanced AI systems, including GPT-4.

"OpenAI will say they are improving," he said. "I and other employees who resigned doubt they will be ready in time. This is true not just with OpenAI. The incentives to prioritize rapid deployment apply to the entire industry. This is why a policy response is needed."

AGI, lack of AI policy top insiders' concerns


Saunders called on policymakers to advance policies prioritizing testing of AI systems before and after deployment, requiring testing results to be shared, and implementing protections for whistleblowers.

"I resigned from OpenAI because I lost faith that by themselves they would make responsible decisions about AGI," he said during the hearing.

Helen Toner, who served on OpenAI's nonprofit board from 2021 until November 2023, testified that AGI is a goal many AI companies think they could reach soon, making federal AI policy necessary. Toner serves as director of strategy and foundational research grants at Georgetown University's Center for Security and Emerging Technology.

"Many top AI companies including OpenAI, Google, Anthropic, are treating building AGI as an entirely serious goal," Toner said. "A goal that many people inside those companies believe that, if they succeed in building computers that are as smart as humans or perhaps far smarter than humans, that technology will be, at a minimum, extraordinarily disruptive. At a maximum, could lead to literal human extinction."

Margaret Mitchell, a former Microsoft and Google research scientist who now works as chief ethics scientist at AI startup Hugging Face, said policymakers must address the many gaps in companies' AI practices that could lead to harm. David Harris, senior policy advisor at the University of California, Berkeley's California Initiative for Technology and Democracy, added during the hearing that voluntary self-regulation on safe and secure AI, something multiple AI companies committed to last year, does not work.

Harris, who worked at Meta on the civic integrity and responsible AI teams from 2018 to 2023, said those two safety teams no longer exist, noting that trust and safety teams across tech firms have "shrunk dramatically" over the last two years.

Harris said many congressional AI bills provide solid frameworks for AI safety and fairness. Multiple such bills are awaiting floor votes in both the House and Senate, but Congress has yet to pass AI legislation.

"My fear is we'll make the same mistake we did with social media, which is too little, too late," Sen. Richard Blumenthal (D-Conn.), chair of the subcommittee, said during the hearing. "What we should learn from social media is don't trust big tech. We can't rely on them to do the job."

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.

