
How providers can navigate the patchwork of state health AI laws

To contend with the complex regulatory patchwork of state health AI laws, experts recommend that health systems adopt flexible, principles-based governance structures.

As health AI innovation advances at breakneck speed, regulatory efforts are struggling to keep pace. At the federal level, the Trump administration has taken a largely deregulatory stance on health AI. Meanwhile, states have stepped in to fill the regulatory gaps.

Law firm Manatt's Health AI Policy Tracker shows that 47 states introduced 250 AI bills affecting healthcare in 2025, resulting in a complex patchwork of state laws.

"It's so convoluted," according to Amy Worley, leader of BRG's Privacy and Information Compliance group. "There are multiple definitions of even what AI is in the existing state laws."

To navigate these varied and, at times, conflicting regulations, health systems must develop flexible governance and compliance structures. Worley shared with Healthtech Analytics some of the approaches health systems can adopt.

What state-based AI regulation looks like

According to Manatt's tracker, 33 of the 250 AI bills introduced were enacted into law across 21 states.

These laws focused on the use of AI chatbots in mental and clinical healthcare, transparency requirements, payer use of AI and "AI sandboxes" for testing innovative AI tools. For instance, in August 2025, Illinois became the first state to ban the use of AI for mental health treatment and clinical decision-making within behavioral healthcare. Additionally, Texas enacted a law that requires healthcare providers to give patients written disclosures about AI use in clinical care and establishes an AI regulatory sandbox program.

Further, states introduced approximately 60 bills in 2025 aimed at regulating payer use of AI. Four of these were enacted into law in Arizona, Maryland, Nebraska and Texas, and included provisions such as prohibiting the use of AI alone to deny care or prior authorizations and requiring human review of algorithm-driven decisions.

These focus areas signal a shift from 2024, when policymakers were more focused on broad AI governance frameworks.

Developing flexible governance structures

Moving forward with health AI implementation amid varied state laws requires healthcare providers to develop governance structures that address the common concerns running through those laws, rather than getting into the weeds of each statute on a state-by-state basis.

According to Worley, the best way to build flexibility into an AI governance structure is to adopt a principles-based approach, which starts with identifying a core set of principles that holds up across jurisdictions. Worley provided the following examples:

  • We're proactive and manage the risk from the very beginning.
  • We design for transparency and explainability.
  • We explain how the AI works.
  • We give individuals control over their data, where appropriate.
  • We give people opt-out or appeal rights.
  • We try to stay away from zero-sum outcomes.

Healthcare organizations can also look to existing frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, to help them identify a common set of principles for their governance approach, Worley added.
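
To make such principles operational rather than aspirational, an organization could encode them as a machine-readable checklist that gates each new AI tool through intake review. The sketch below is a minimal, hypothetical illustration in Python: the principle names, the NIST AI RMF function mappings and the AIToolIntake structure are assumptions for illustration, not a prescribed implementation or anything Worley specified.

```python
from dataclasses import dataclass, field

# Hypothetical mapping of governance principles to NIST AI RMF functions
# (Govern, Map, Measure, Manage). Names and mappings are illustrative only.
PRINCIPLES = {
    "proactive_risk_management": "Govern",
    "transparency_and_explainability": "Map",
    "individual_data_control": "Govern",
    "opt_out_or_appeal_rights": "Manage",
    "bias_testing": "Measure",
}

@dataclass
class AIToolIntake:
    """Intake record for a proposed AI tool, reviewed before deployment."""
    tool_name: str
    vendor: str
    # Principles the submitted documentation already evidences.
    satisfied: set = field(default_factory=set)

    def gaps(self) -> list[str]:
        """Return principles not yet evidenced in the intake package."""
        return sorted(set(PRINCIPLES) - self.satisfied)

    def approved(self) -> bool:
        """A tool clears intake only when every principle is evidenced."""
        return not self.gaps()

# Example: a hypothetical prediction tool missing opt-out documentation.
intake = AIToolIntake(
    tool_name="sepsis-predictor",
    vendor="ExampleVendor",
    satisfied={
        "proactive_risk_management",
        "transparency_and_explainability",
        "individual_data_control",
        "bias_testing",
    },
)
print(intake.approved())  # False
print(intake.gaps())      # ['opt_out_or_appeal_rights']
```

The point of a structure like this is that the checklist stays constant even as individual state requirements shift; new statutory obligations can be folded into the existing principles rather than spawning one-off processes per state.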

Vendor partnership is a critical aspect of establishing this principles-based governance approach. Worley highlighted the importance of accountability, with the health system identifying potential risks stemming from the use of an AI tool and assigning a clear "owner" for those risks.

First, health system leaders should ask vendors for documentation detailing how their tools work and the potential risks associated with them, and then they can assign responsibility.

"So, I want to see [the vendor's] explainability documentation," Worley said. "I want to see their bias testing and get all of that documented as a part of the due diligence. And then the contracting should specify the liability for those things."

The liability each entity assumes will vary depending on the type of AI software being used, she added. For example, vendors may need to assume greater liability for cloud-based software-as-a-service tools, while providers would take on more responsibility for tools they maintain on-premises.
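
One way to keep that due diligence auditable is to capture it in a structured record that pairs the vendor's documentation with a named risk owner and a default liability allocation by deployment model. The Python sketch below is a hypothetical illustration based on Worley's examples; the field names and the SaaS-versus-on-premises liability defaults are assumptions, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class VendorDiligenceRecord:
    """Tracks due-diligence artifacts and risk ownership for one AI tool.

    Field names are illustrative; adapt them to your contracting process.
    """
    tool_name: str
    vendor: str
    deployment: str            # "saas" or "on_premises"
    explainability_docs: bool  # vendor provided explainability documentation
    bias_testing_docs: bool    # vendor provided bias-testing results
    risk_owner: str            # named individual accountable for residual risk

    def default_liability_holder(self) -> str:
        """Illustrative default: vendors carry more liability for SaaS
        tools, providers for tools they maintain on-premises."""
        return "vendor" if self.deployment == "saas" else "provider"

    def diligence_complete(self) -> bool:
        """Contracting should not proceed until documentation is on file."""
        return self.explainability_docs and self.bias_testing_docs

record = VendorDiligenceRecord(
    tool_name="triage-chatbot",
    vendor="ExampleVendor",
    deployment="saas",
    explainability_docs=True,
    bias_testing_docs=False,
    risk_owner="CMIO",
)
print(record.diligence_complete())        # False: bias testing still missing
print(record.default_liability_holder())  # "vendor"
```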

Further, Worley underscored the importance of creating AI governance committees to manage the evolving landscape of health AI regulation. Monitoring new state and federal laws impacting health AI should be a part of these committees' functions.

"Larger organizations will be able to do that internally," she noted. "They should be able to hire folks who can do this internally. Smaller organizations will probably need to work with some experts, whether it's outside counsel or external consulting partners."

Preparing for anticipated federal regulation

Though the Trump administration released an AI Action Plan last year, touting its intent to remove "red tape" and limit "onerous" regulations, it has taken little concrete action, particularly with regard to health AI. It was only late last year that HHS began seeking public input on how the agency can support AI adoption in healthcare.

Thus, federal regulation may be forthcoming, adding to the already convoluted landscape of AI regulation in the United States. Fortunately for healthcare organizations, existing market demands and legal frameworks, including the product safety, data privacy and negligence laws that already govern new tools entering the market, can help them prepare for new federal and state regulations.

"You still have what the market requires," Worley noted. "You still have the contract negotiations, and most of these players are setting up systems, especially on the AI developer side, that comply with the EU, et cetera."

Worley also cautioned healthcare organizations against expecting federal preemption of state laws. Even though the federal government seems poised to challenge state laws it deems too burdensome, there may not be a legal pathway to do so.

"It's going to be a long time before this law settles," Worley said. "It's very new, it's very dynamic, and that's not going to change anytime soon. And so, organizations are going to have to build an internal system that works regardless of these one-off changes that are going to keep popping up."

Anuja Vaidya has covered the healthcare industry since 2012. She currently covers the virtual healthcare landscape, including telehealth, remote patient monitoring and digital therapeutics.
