Lawmakers weighing the U.S. approach to AI are deeply divided over how to promote AI innovation while protecting against AI-related harms. A measure to stop state AI law enforcement has brought the debate to a turning point.
During the recent budget reconciliation -- a process to speed up passage of a federal budget -- Republicans in the U.S. House of Representatives proposed a 10-year moratorium on states enforcing their own AI laws.
U.S. companies innovating in AI today face regulatory headwinds both abroad, from regulations like the European Union's AI Act, and at home, from a growing patchwork of state AI laws that could "jeopardize our global leadership," Rep. David Joyce (R-Ohio) said. Joyce spoke during a House Subcommittee on Commerce, Manufacturing and Trade hearing on Wednesday.
"Just since January, there have been over 1,000 AI bills introduced across the U.S.," Joyce said. "These measures vary widely in their definitions, requirements, enforcement mechanisms and scope. This emerging patchwork of regulations is creating confusion and inconsistency."
Joyce said he isn't advocating for a "free-for-all, wild west-type regulatory environment." The House Committee on Energy and Commerce is assessing a national framework to provide clarity and consistency without stifling AI growth, he said. Joyce also pointed to President Donald Trump signing the bipartisan Take It Down Act into law on Monday. The law requires digital platforms to remove intimate images of individuals, whether real or computer-generated, upon notification.
"This law is a prime example of targeting a specific harm with a narrowly tailored law to fill a gap that has been identified in existing law," he said.
Still, Democratic lawmakers resisted the proposed state AI law moratorium, calling it a giveaway to big tech companies.
"We can agree that a patchwork of various state laws is not good for innovation, business or consumers," Rep. Lori Trahan (D-Mass.) said during the hearing. "But this is a bad policy because it sets another disincentive for us to act urgently or even in time. All the while, Republicans are once again ceding Congress's duty to protect Americans' privacy to the very companies who are perpetrating the worst abuses online."
State AI laws under debate
U.S. policymakers need to focus on the opportunity AI presents rather than follow a precautionary approach like the EU, testified Sean Heather, senior vice president for international regulatory affairs at the U.S. Chamber of Commerce.
Europe is "woefully behind in key digital sectors" due to its excessive regulations, he said. Regulations such as the EU AI Act also target U.S. big tech companies -- something Heather said the U.S. should prevent from happening in other countries and U.S. states. Such regulations fail to achieve a balance between regulating risk and fostering innovation, he said.
"In the United States, the AI Act's influence is noticeable," Heather said. "States like Colorado, California, Texas and Virginia have introduced AI regulations that echo Europe's approach."
AI-related harms can be addressed through existing consumer protection laws, testified Adam Thierer, a senior fellow at the R Street Institute. He said U.S. innovators risk being squeezed between "overzealous European regulation" and "excessive state and local mandates."
However, Amba Kak, co-executive director at the AI Now Institute, argued that Congress has "failed to sufficiently regulate big tech for over a decade." Meanwhile, she said, states moved well before Congress in areas like deepfakes, which the Take It Down Act targets.
She added that state measures offer nimble, targeted protections for U.S. consumers and require disclosures for individuals affected by AI in sectors such as employment, healthcare and education.
Kak said state AI laws set bare minimum standards for an industry that "derives its power from obscurity."
"A moratorium on AI-related state laws at a time when there are minimal federal laws in place would instead set the clock back and freeze it there," she said during the hearing. "Why we would treat these companies with kid gloves at a moment when they need more scrutiny, not less, is what should be in focus today. We don't have 10 years to wait."
Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining Informa TechTarget, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.