
U.S. policy moves reflect big tech issues with state AI laws

House Republicans proposed a 10-year moratorium on state AI rules, reflecting a concern among tech companies about the growing patchwork of state AI and data privacy measures.

Big tech vendors want relief from the growing patchwork of state laws on AI and data privacy. President Donald Trump's One Big Beautiful Bill Act would deliver that relief by placing a 10-year moratorium on state AI laws while Congress works on yet another federal data privacy bill.

The U.S. House of Representatives passed the extensive tax and domestic policy package Thursday by a 215-214 vote, with all House Democrats voting against it; the bill now heads to the Senate. Its moratorium would halt enforcement of any state law or regulation "limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce."

The bill reflects the federal administration's policy shift on AI, said Gartner analyst Lydia Clougherty Jones. While the proposal still must clear the Senate, she said the private sector "should prepare today for a more deregulated tomorrow" as Trump's deregulatory message is reflected in Congress.

Big tech vendors have advocated for federal policy to preempt state AI laws. In comments submitted to the White House Office of Science and Technology Policy, which is in the process of creating a federal AI Action Plan, generative AI vendor OpenAI called state AI laws overly burdensome while Google characterized the growing patchwork as chaotic. Similarly, the tech giants for years have been calling on Congress to enact a federal data privacy framework preempting state data privacy laws.

However, the last two significant federal data privacy bills introduced in Congress failed to become law. During a hearing this week on AI regulation, Rep. Lori Trahan (D-Mass.) targeted big tech companies, saying the vendors have lobbied against federal data privacy bills that include measures they disagree with, such as minimizing data collection.

While Trahan agreed that the patchwork of state laws isn't good for business, she said the conviction that a state AI law moratorium will "somehow motivate Congress to unify the patchwork of state laws" isn't justifiable, given Congress's lack of success with passing a federal data privacy law. She argued against removing state guardrails while Congress struggles to agree on its own measures for big tech.

"Our constituents aren't stupid," she said. "They expect real action from us to rein in the abuses of tech companies, not to give them blanket immunity to abuse our most sensitive data even more."

How state data privacy laws have affected big tech

Google recently reached a data privacy settlement with the state of Texas for nearly $1.4 billion. Texas Attorney General Ken Paxton sued Google in 2022 for unlawfully tracking users and collecting biometric, geolocation and incognito search data.

Paxton described the hefty settlement in a release as a win for Texans' privacy, adding that it "tells companies that they will pay for abusing our trust." Paxton also secured a $1.4 billion settlement with Meta for unlawfully collecting facial recognition data.

The settlement is "orders of magnitude larger" than other data privacy settlements that have been reached in the U.S. and shows Texas is serious about focusing on big tech, said Cobun Zweifel-Keegan, managing director of IAPP, an organization of privacy professionals.

"In Texas … there's a lot of pride behind this settlement," Zweifel-Keegan said. "There is a bit of that mentality going around. We certainly see that in Texas, but I wonder if it won't spread to other states."

However, he said the Texas settlement with Google is less significant in terms of tackling new issues.


"It raises mostly old issues that have been previously litigated, previously enforced against either by AGs or in class action lawsuits," Zweifel-Keegan said. "One of the big takeaways for me is that it shows the long tail of privacy in that, once you make a misstep, it could take many years for all the regulators to finish their investigations to reach settlements."

During the IAPP Global Privacy Summit last month, Evangelos Razis, a professional staff member on the House Committee on Energy and Commerce, said the committee is working on a federal data privacy framework. He said the committee is assessing what has and hasn't worked at the state level on data privacy.

Preparing for a state AI law moratorium

While multiple U.S. states have data privacy laws and enforce them against big tech vendors, several states -- including Utah, California and Colorado -- have passed AI laws as well.

Echoing concerns about the patchwork of data privacy laws, congressional Republicans are convinced that the growing patchwork of state AI laws will hamper businesses' ability to innovate and build the technology to compete on a global scale.

Under the state AI law moratorium, Clougherty Jones said, states may be able to enforce only certain aspects of their regulations, or none at all.

She said businesses should pay attention to the proposed moratorium's mention of automated decision systems. According to Gartner, 50% of business decisions will be automated or augmented by autonomous AI agents by 2027.

"Organizations should consider the impact of the proposal, if enacted, on their AI ambition for augmented decision making," she said. "They should also track state legislatures with pending AI regulations to regulate automated decision making, such as New Mexico, New York and Texas, which are currently considering draft laws."

Faith Bradley, a teaching assistant professor in information systems at George Washington University School of Business, said the federal government often lags behind technology advances. By the time new technologies are widely adopted and negative effects occur, the government is playing catch-up, she said.

She said AI itself "is not evil," but given the uncertainty around how data is collected, stored and used, legal frameworks will be necessary to hold AI vendors accountable.

"It's very important when it comes to using any kind of AI tool, we have to understand if there is any possibility of misuse," she said. "We need to calculate the risk."

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining Informa TechTarget, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
