Schumer's AI policy objectives prioritize innovation

The SAFE Innovation Framework proposed by Sen. Chuck Schumer names explainability, accountability and innovation among the top policy objectives for AI in the U.S.

While Congress has been slow to advance AI legislation amid growing concern about AI technologies, its long-term focus on a wide range of AI policy objectives is starting to take shape.

Sen. Chuck Schumer (D-N.Y.) proposed the SAFE Innovation Framework in June. It lays out several policy objectives for AI: safeguarding national security, ensuring accountability in the development of responsible AI systems, aligning foundational AI systems with democratic values, determining what information the U.S. government and the public need about AI systems and data, and supporting U.S.-led AI innovation.

The framework calls for first taking stock of the spectrum of AI topics through a series of AI Insight Forums, convening AI experts from across industries starting this fall. Following the forums, Schumer proposes that Congress begin a bipartisan process to implement the framework.

Sarah Kreps, director of the Tech Policy Institute at Cornell University's Brooks School of Public Policy, said Schumer's proposed approach to AI regulation is "more measured and mindful of the upsides of AI innovation."

"We haven't yet seen significant national-level legislation, but Schumer has the clout to convene experts and the key firms that can build momentum behind AI legislation that not only makes sense, but can work," she said.

SAFE Innovation Framework focuses on innovation

In remarks in June, Schumer said the SAFE Innovation Framework "must never lose sight of what must be our north star -- innovation."

Indeed, the framework emphasizes innovation and investment, which go hand in hand when discussing regulation, said Daniel Zhang, senior manager for policy initiatives at the Stanford Institute for Human-Centered Artificial Intelligence.

"The framework lays out the foundation that gives policymakers some time to get a better understanding of the complex and multidisciplinary nature of AI and talk to the public who are impacted by the technology before jumping on to specific legislations," he said.

The SAFE Innovation Framework is structured around the right questions of accountability, transparency and explainability, while also recognizing that "we can't paint AI with such broad brushes," Kreps said. The challenge with AI regulation is that one size doesn't fit all, meaning policymakers need to think differently about national security issues versus potential bias and discrimination issues associated with the technology, she said.

"We need to be asking all of those questions," Kreps said. "What are the risks in each of those areas, what regulation is feasible and desirable for each area, and who should do that regulation?"

Zhang said some of the policy objectives listed in the SAFE Innovation Framework will require further development and clarification. Making AI explainable, for example, is already proving difficult for AI developers. On such topics, there is a "gap between what policymakers want and what is technically feasible," he said. The framework also needs to address the shortage of technical AI talent, which he said the U.S. needs to maintain its leadership in the technology.

Ultimately, Zhang said developing AI regulations should be a cooperative effort, including between the U.S. and EU, which are leading voices on the democratic governance of AI. He said harmonizing efforts across jurisdictions in developing foundational AI policies is critical.

"It should be a collaborative endeavor, with the entire AI ecosystem across governments, academia, industry and civil society organizations sharing insights, experiences and best practices," Zhang said. "Acting hastily or without a holistic understanding can lead to regulations that are either too restrictive -- stifling innovation -- or too lax, endangering user rights and societal values."

EU AI regulatory efforts face backlash

Business leaders in Europe have already started raising concerns about the European Union's draft AI regulation, the EU AI Act.

In an open letter, 150 corporate leaders in the EU expressed concerns about the pending legislation, saying they believe it would threaten "Europe's competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing." The EU AI Act takes a risk-based approach to regulating AI, sorting systems into tiers that range from unacceptable to minimal risk, with stricter requirements for higher-risk uses.

Kreps described the EU's top-down approach to AI regulation as static, saying it accommodates neither the evolving nature of AI technologies nor the value and importance of AI for the economy, medicine and society.

"I would worry that Europe's approach is neither mindful of the pace of technological change nor the potential upsides of those technologies," she said.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
