
Key HIPAA compliance considerations for agentic AI tools

Agentic AI tools, like any other AI-powered tools, come with HIPAA compliance considerations to ensure that PHI is protected.

In the age of AI, healthcare organizations are increasingly adopting tools that help them work faster and smarter, with an emphasis on streamlining provider workflows and patient experience. Agentic AI, which differentiates itself from traditional AI by its autonomous decision-making capabilities, also shows significant promise in healthcare, enabling better patient outreach and revenue cycle gains.

Despite the benefits that AI-driven tools can bring, HIPAA compliance remains an important consideration when implementing new tools into a healthcare setting. Healthcare security and privacy professionals should consider the compliance implications of bringing a traditional or agentic AI tool into their environments, including third-party risk and increased compliance complexity.

What is agentic AI?

Traditional AI responds to predefined problems and prompts; examples include ChatGPT and other generative AI models. Agentic AI, by contrast, goes beyond strictly responding to user input: it can make decisions, adjust its behavior and take initiative.

"I see the term agentic AI being used by folks when they probably just mean AI. For it to really be agentic, there needs to be some autonomy and goal-directed behavior," Jordan Cohen, partner at Akerman, said in an interview. "There's almost a set it and forget it aspect to it."

Cohen noted several examples of agentic AI in healthcare, such as patient outreach and follow-up or triage agents that can help conduct patient intake and ask about symptoms. AI agents can also check in with patients with chronic conditions and determine whether there is a need for human follow-up. For instance, they can be used in diabetic retinopathy screening, analyzing photographs of a patient's retina and diagnosing diabetic retinopathy without clinician interpretation.

Agentic AI has also shown significant promise in the healthcare revenue cycle management space, enabling faster claims processing and offering a potential solution to ongoing workforce shortages.

"Healthcare is definitely in the early days of true agentic AI, especially in the diagnostic space, just because there really isn't the trust there yet to outsource the substantive decision-making of healthcare professionals to AI," Cohen said. "We're not there yet, but I think there is a lot of promise if the developers can get a handle on some of the issues with AI, like hallucinations."

Of course, human oversight is still crucial to AI success. However, AI agents can conduct initial screening and perform administrative tasks that save providers time, which they can instead devote to direct patient care.

HIPAA compliance: AI vendor complexity creates third-party risk

The benefits of agentic AI are clear, especially when it comes to reducing administrative burden and streamlining workflows. As such, healthcare leaders can likely expect increased adoption of agentic AI tools in the future.

However, when it comes to HIPAA compliance, implementing an AI agent, or any AI-driven technology for that matter, comes with third-party risk considerations.

"One of the tricky things with AI is just the sheer number of different business associates that may be involved in these AI systems," Cohen noted. Engaging with any new vendor requires due diligence to ensure compliance, particularly when that vendor will be handling protected health information (PHI) directly or indirectly.

"If you have PHI coming from, let's say, a healthcare provider, that information is going to continue to be PHI as it flows through the various AI systems, unless it is properly de-identified," Cohen said. "It is going to be subject to HIPAA, including as it flows from business associate to business associate."

Any given AI vendor might deal with subcontractors, such as data storage providers, who must also be looped into compliance and legally bound to protect PHI.

"One of the challenges with implementing an AI system from a covered entity's perspective, and even from a business associate's perspective, is keeping track of all of the different potential vendors and making sure that there are business associate agreements in place," Cohen stated.

"And as you go down the chain, the business associates and subcontractors must agree to the same restrictions on the use and disclosure of PHI."
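The chain-of-BAAs requirement described above lends itself to a simple structural check. The sketch below (all vendor names and the data model are hypothetical, not drawn from any real compliance tool) walks a vendor tree and flags any business associate or subcontractor that lacks an executed business associate agreement with the party above it:

```python
from dataclasses import dataclass, field

@dataclass
class Vendor:
    """A business associate or subcontractor in the PHI chain (hypothetical model)."""
    name: str
    has_baa: bool  # BAA executed with the upstream party?
    subcontractors: list = field(default_factory=list)

def find_baa_gaps(vendor, path=""):
    """Recursively walk the vendor chain and report any party lacking a BAA."""
    here = f"{path} -> {vendor.name}" if path else vendor.name
    gaps = [] if vendor.has_baa else [here]
    for sub in vendor.subcontractors:
        gaps.extend(find_baa_gaps(sub, here))
    return gaps

# Example chain: an AI vendor uses a cloud storage subcontractor with no BAA.
storage = Vendor("CloudStorageCo", has_baa=False)
ai_vendor = Vendor("AIVendorCo", has_baa=True, subcontractors=[storage])

print(find_baa_gaps(ai_vendor))  # ['AIVendorCo -> CloudStorageCo']
```

In practice this inventory would be maintained in a vendor-management system rather than code, but the point stands: the gaps are only findable if the full subcontractor tree is recorded.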

Third-party risk has remained a pain point for healthcare for years. A 2025 report by Ponemon Institute and Imprivata revealed that nearly half of surveyed health IT leaders experienced a data breach or cyberattack involving third-party network access in the past year, highlighting gaps in third-party risk management.

While agentic AI is not healthcare's introduction to third-party risk, it does add compliance complexity to an area that healthcare organizations have long struggled with.

"The vendor management issue, including on the HIPAA business associate side, is going to be more complex when you have these sophisticated AI systems," Cohen suggested, emphasizing that compliance complexities are bound to arise when implementing new technology.

Existing HIPAA compliance strategies also apply to agentic AI

Though agentic AI adoption adds compliance complexity, the technology will not require covered entities to reinvent their compliance strategies.

"I think the good news for those in the health IT security and risk management space is that a lot of the best practices aren't necessarily specific to agentic AI," Cohen said. "And really a lot of them aren't specific to AI at all. These are things that we've been talking about for years."

While agentic AI grows in popularity and makes its way into healthcare, the core tenets of HIPAA compliance remain the same. However, adopting new technology with a new web of vendors will naturally add complexity to an organization's approach to compliance.

"For example, consider a data inventory. The best practice is to have a diagram that illustrates the source of data, how it's getting ingested, processed, stored and which vendors are touching it," Cohen said. "And if you don't have visibility into that, you basically have one hand tied behind your back."
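The data inventory Cohen describes can be approximated as a set of records mapping each data source to how it is ingested, where it is stored and which vendors touch it. A minimal illustrative sketch, with every name and field invented for the example:

```python
# Hypothetical data-flow inventory: one record per data source, noting
# ingestion, storage, the vendors that touch it, and whether it holds PHI.
inventory = [
    {"source": "EHR patient intake", "ingested_by": "triage agent",
     "stored_at": "CloudStorageCo", "vendors": ["AIVendorCo", "CloudStorageCo"],
     "contains_phi": True},
    {"source": "de-identified outcomes feed", "ingested_by": "analytics agent",
     "stored_at": "internal warehouse", "vendors": ["AnalyticsCo"],
     "contains_phi": False},
]

def vendors_touching_phi(inventory):
    """List every vendor that handles records containing PHI."""
    return sorted({v for rec in inventory
                   if rec["contains_phi"] for v in rec["vendors"]})

print(vendors_touching_phi(inventory))  # ['AIVendorCo', 'CloudStorageCo']
```

A query like this is what makes the "one hand tied behind your back" problem concrete: without the inventory, there is no way to enumerate which vendors need BAAs, breach-notification terms or encryption controls.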

Cohen stressed the importance of transparency in the data inventory and flow of data, as well as regulatory mapping to understand what regulations apply in what instances. Outside of HIPAA, healthcare organizations may be subject to other regulations, such as the Federal Trade Commission (FTC) Act and FTC laws on deceptive trade practices, which could come into play when dealing with AI technology.

"If you're in the HIPAA-regulated space, AI has some unique challenges regarding training in contracts," Cohen added. "So, it's important for the parties to understand what a vendor can and cannot do with AI in terms of using that data to train AI algorithms."

Cohen also recommended addressing breach notification timelines and cross-border transfers and ensuring that any new AI system rollout is equipped with the proper security controls, such as encryption. Robust data governance policies and procedures will help covered entities manage the security and privacy risks of AI while leveling up their overall compliance programs.

"Putting those policies and procedures in place to understand exactly what needs to happen -- as the systems are getting built, as the systems are getting implemented, if there are breaches, how the vendor contracts are determined -- it's going to be crucial to reducing that risk profile," Cohen said.

Jill McKeon has covered healthcare cybersecurity and privacy news since 2021.
