
Agentic AI compliance and regulation: What to know
Agentic AI's autonomous nature and its ability to access multiple data layers bring heightened risk. Learn how to ensure its deployment meets compliance standards.
The widespread adoption of artificial intelligence by organizations has brought countless benefits, but it has also come with downsides.
In fact, 95% of executives said their organizations experienced negative consequences from their enterprise AI use in the past two years, according to an August 2025 report from Infosys, "Responsible Enterprise AI in the Agentic Era." Direct financial loss was the most common consequence, cited in 77% of those cases.
As dire as those figures might seem, they could get even worse as organizations begin to implement agentic AI. Infosys found that 86% of executives who were aware of agentic AI believed that the technology poses additional risks and compliance challenges to their business.
"Agentic AI, because of its autonomous decision-making and autonomous action without a human in the loop, introduces additional risks," said Valence Howden, an advisory fellow with Info-Tech Research Group.
The term agentic AI, or AI agents, refers to AI systems that can make independent decisions and adapt their behavior autonomously to achieve a specific goal. Unlike traditional automation tools that follow a rigid, fixed set of instructions, agentic AI systems use learned patterns and relationships to reason and adjust their actions in real time. The capability to act independently is what sets AI agents apart from basic automation.
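To make that distinction concrete, the minimal Python sketch below contrasts the two: the fixed routine runs the same steps in the same order every time, while the agent-style loop lets a policy (standing in here for an LLM) choose the next action based on what has happened so far. All function, tool and variable names are illustrative, not a real agent framework.

```python
# Illustrative contrast between fixed automation and an agent-style loop.
# The "policy" stands in for an LLM's decision-making; this is a
# hypothetical sketch, not a real framework.

def fixed_automation(ticket: str) -> list[str]:
    # Traditional automation: the same steps, in the same order, every time.
    return [f"logged:{ticket}", f"routed:{ticket}", f"closed:{ticket}"]

def agent_loop(goal: str, tools: dict, policy, max_steps: int = 10) -> list[str]:
    # Agent-style control flow: observe state, choose the next action,
    # act, then adapt based on the result, until the goal is met.
    history: list[str] = []
    for _ in range(max_steps):
        action = policy(goal, history)       # model picks the next action
        if action == "finish":
            return history
        history.append(tools[action](goal))  # execute the chosen tool
    raise RuntimeError("step budget exhausted before the goal was met")

# Toy policy standing in for an LLM: search first, then summarize, then stop.
def toy_policy(goal: str, history: list[str]) -> str:
    if not any(h.startswith("searched") for h in history):
        return "search"
    if not any(h.startswith("summary") for h in history):
        return "summarize"
    return "finish"

tools = {
    "search": lambda g: f"searched:{g}",
    "summarize": lambda g: f"summary:{g}",
}
print(agent_loop("quarterly compliance report", tools, toy_policy))
```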
Why agentic AI needs new compliance strategies
Agentic AI's ability to make decisions and execute actions on its own introduces heightened risk into the organization, prompting AI experts and compliance officers to advise executives to embed the necessary controls into these systems from the start.

"An [agentic AI] agent is parsing data through lots of layers, and there are compliance and governance and risk across all those layers," Howden explained. The more complex and important the activities performed by agents are, the more companies are increasing that risk.
At the same time, compliance under any circumstances is hard to do because it's a moving target, Howden underscored. "It's moving all the time, and yet you have to build a compliance structure for something that doesn't stay the same," he said.
Asha Palmer, senior vice president of compliance at Skillsoft, which makes learning management system software and training content for businesses, has witnessed how the additional security risk agentic AI poses can manifest. She cited a case at another company where an AI agent broke through a firewall to access confidential data during its testing phase.
Indeed, accessing and exposing sensitive data is one of the main risks that agentic AI presents, Palmer and others said. If programmed to gather insights, for example, an AI agent might access sensitive areas of the system without proper safeguards, leading to unintended exposure. If the agent itself is compromised, it could also be manipulated into exposing those weak spots.
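A minimal sketch of the kind of safeguard that can prevent such exposure is a deny-by-default check that gates every data read against the agent's task scope. The task names and classification labels below are hypothetical.

```python
# Hypothetical guardrail sketch: deny an agent's data access by default
# unless the data classification is explicitly allowed for its task.

ALLOWED = {
    "gather_insights": {"public", "internal"},  # no restricted data
    "hr_review":       {"public", "internal", "confidential"},
}

def guarded_read(task: str, record: dict) -> dict:
    # Unlabeled records are treated as restricted, so the check fails closed.
    classification = record.get("classification", "restricted")
    if classification not in ALLOWED.get(task, set()):
        raise PermissionError(
            f"task '{task}' may not read '{classification}' data"
        )
    return record

# An insight-gathering agent can read internal data but not confidential data.
guarded_read("gather_insights", {"classification": "internal", "value": 42})
try:
    guarded_read("gather_insights", {"classification": "confidential"})
except PermissionError as e:
    print(e)
```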
Other risks of agentic AI include hallucinations, infringement of copyrighted or otherwise protected material, decisions based on biased or bad information, and unauthorized actions.
Those risks are not unique to agentic AI; they are associated with artificial intelligence in general. However, as Palmer and others interviewed noted, they are heightened with agentic AI: The sequence of actions happening within the workflow, the layers in which those actions occur, the speed at which they take place and their autonomous nature all make it harder to pinpoint where, how and why something goes wrong.

This complexity has convinced experts like Andrew Grosso, principal attorney with Andrew Grosso & Associates and current chair of the Subcommittee on Law for the Association for Computing Machinery's U.S. Technology Policy Committee, that agentic AI fundamentally changes how companies will need to approach compliance. "My opinion is that agentic AI does require new compliance strategies," Grosso said.
Addressing new risks and implementing controls for agentic AI
How can enterprises address the risks inherent in using agentic AI? Palmer said her approach to ensuring agentic AI complies with any relevant regulations and standards is the same approach she takes to ensure compliance and reduce risk with other types of AI:
- Understand and assess the use case. Working with a cross-functional team, start with understanding and assessing the use case where AI will be deployed. List the specific risks associated with the use case.
- Identify key stakeholders. To ensure accountability, identify both the technology developer responsible for the AI system and the business owner in charge of the use case.
- Consider the purpose of the use case. Clarify what the objective of the use case is. Understand how AI is being used to achieve that objective.
- Identify the data involved. Pinpoint the data the AI system will access during its operation. Assess the sensitivity of that data and the safeguards it requires to mitigate security risks.
Palmer said the information she gleans from these steps determines what controls are put in place to ensure the AI tool -- whether agentic AI or another type -- operates in a manner that complies with all relevant regulations, standards and best practices.
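One lightweight way to operationalize those steps is to capture each answer in a structured record that must be complete before deployment review can begin. The sketch below, including its field names and review logic, is illustrative rather than a description of Skillsoft's actual process.

```python
# Illustrative sketch: capturing the four assessment steps as a structured
# record that gates deployment review. Fields and logic are hypothetical.

from dataclasses import dataclass

@dataclass
class UseCaseAssessment:
    use_case: str                      # where the AI will be deployed
    risks: list[str]                   # specific risks for this use case
    tech_owner: str                    # developer accountable for the system
    business_owner: str                # owner accountable for the use case
    objective: str                     # what the use case is meant to achieve
    data_accessed: list[str]           # data the agent will touch
    data_sensitivity: str = "unknown"  # e.g., public / internal / confidential

    def ready_for_review(self) -> bool:
        # Review can start only once every question has an answer.
        return all([self.use_case, self.risks, self.tech_owner,
                    self.business_owner, self.objective,
                    self.data_accessed]) and self.data_sensitivity != "unknown"

assessment = UseCaseAssessment(
    use_case="invoice triage agent",
    risks=["PII exposure", "unauthorized payment approval"],
    tech_owner="platform-ai team",
    business_owner="accounts payable lead",
    objective="route invoices and flag anomalies",
    data_accessed=["vendor master data", "invoice line items"],
    data_sensitivity="confidential",
)
print(assessment.ready_for_review())  # True
```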

Those measures, she said, include technical controls as well as ongoing testing, human oversight and revisions.
"At Skillsoft, we run our controls on agentic AI, and we report results for that. We do bias testing, hallucination testing, we do offensiveness testing. We do our own testing to make sure we have the proper guardrails," Palmer added.
Grosso stressed the need for significant human oversight during the agentic AI training period.
"Eventually, after many 'on-the-job' training exercises, the system will become sufficiently adept at the job it was designed to perform, and that human oversight can be rolled back or possibly eliminated," he said.
However, he noted that "a real problem is that professionals may become too comfortable with their machine counterparts too early and let up on oversight too readily."
Emerging AI compliance frameworks for enterprises
Ensuring AI agents comply with applicable rules, regulations, standards and best practices falls under the umbrella of responsible AI.
Responsible AI is an approach to developing and deploying AI to ensure it is accountable, ethical, fair, safe, transparent and trustworthy.
There are several frameworks that organizations can use to help ensure they're developing responsible AI and, as part of that, compliant AI agents:
- European Union's AI Act. This binding regulation promotes safe, transparent AI by categorizing AI systems into risk levels, guiding responsible development and ensuring compliance through clear rules, accountability and enforcement mechanisms.
- G7 Code of Conduct for AI. This set of voluntary guidelines promotes the safe, secure and trustworthy development and deployment of advanced AI systems and advises organizations to identify, evaluate and mitigate risks throughout the AI lifecycle.
- ISO/IEC 42001. This voluntary international standard specifies requirements for an AI management system, covering the development and use of responsible AI through accountability, transparency and risk management; it helps align AI systems with ethical principles and regulatory requirements, promoting trust, safety and compliance throughout the AI lifecycle.
- NIST AI Risk Management Framework. This framework, intended for voluntary use, helps organizations design, develop and deploy responsible AI by addressing risks across those efforts. It promotes trustworthy AI through core functions -- govern, map, measure and manage -- helping ensure compliance, transparency and alignment with ethical and legal standards.
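As one illustration of how an enterprise might put the last of these frameworks to work for agents, the hypothetical mapping below pairs each NIST core function with example controls. The controls listed are illustrative, not NIST's own text.

```python
# Hypothetical mapping of NIST AI RMF core functions to example controls
# for an agent deployment. The controls are illustrative, not NIST text.

nist_rmf_controls = {
    "govern":  ["named business and technical owners per agent",
                "approval gate before any new tool is granted"],
    "map":     ["inventory of every data source the agent can reach",
                "documented use case, objective and risk list"],
    "measure": ["scheduled bias, hallucination and offensiveness tests",
                "audit log review of sampled agent actions"],
    "manage":  ["kill switch and rollback for misbehaving agents",
                "incident process for unauthorized actions"],
}

for function, controls in nist_rmf_controls.items():
    print(f"{function}: {len(controls)} controls defined")
```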
Regulatory trends in agentic AI
The Infosys report found that 78% of surveyed executives viewed "[responsible AI] practices as having a positive impact on their business growth." Most of those executives also said they "welcome new AI regulations, mainly because such regulations will provide clarity, confidence, and trust in enterprise AI both internally and for their customers."
However, regulations are still evolving, with experts saying none specifically addresses agentic AI.
"The trend right now is to use the EU's AI Act framework as a foundation," the report stated, noting that most countries are using the framework with only slight variations to ensure their rules align with the EU's to avoid a patchwork of dozens of versions of regulations.
Lawmakers in the U.S., at both the federal and state levels, are considering regulations but have yet to give organizations firm direction. In 2023, then-President Joe Biden issued an executive order on safe, secure and trustworthy AI; his successor, President Donald Trump, rescinded that order in 2025 and issued his own order revoking any policies deemed a barrier to AI development in the U.S.
How enterprises can prepare for AI agent compliance today: 7 steps
Even in an evolving regulatory environment, compliance experts said organizations can take the following seven steps to ensure their development and deployment of AI agents comply with laws and standards:

- Ensure their compliance programs are aligned with the business strategy and operations so it's clear what objectives the AI agent will have, as well as what compliance measures will be necessary, Palmer said.
- Identify the actions happening at all layers and at all points along the workflow so that compliance needs can be addressed and accountability and transparency are baked into the system, said Soogand Alavi, an assistant professor at the University of Iowa's Tippie College of Business.
- Audit AI agents to "check the responses they're giving so you know they are complying with regulations," Alavi added. (A minimal audit-logging sketch follows this list.)
- Train employees on responsible AI. "Each unit of an agentic AI system used in professional or other complex fields should undergo a training, review and certification process, even after it has been put into service," Grosso said. "Companies must not ask too much of [their] individual agentic AI systems until they are well trained and have demonstrated their competence."
- Resist becoming "too reliant on agent-based AI systems too early," Grosso said. "Compliance must reinforce for them that they are still in control and are still responsible for the processes and outcomes when these devices are used."
- Make no assumptions. A quirk that a user might not deem significant could have serious repercussions not far down the line, Grosso said. "These systems have the capacity to act and self-modify on their own," he emphasized. "Therefore, a user must consult with designers as well as uninterested experts concerning the tasks to which these devices are being put. Small initial errors in AI systems can build upon themselves and, in the long term, snowball into big problems."
- Develop adequate ongoing resources to ensure compliance and governance in AI development and deployment, Howden said, explaining that compliance work must evolve as AI systems do. "Otherwise we'll be chasing something we can't catch," he said. "If we don't embed it now, we won't be able to do so later."
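For the auditing step Alavi describes, one common pattern is to wrap every tool an agent can call so that each action, its inputs and its outcome are logged at the layer where they occur. The sketch below is a hypothetical illustration; in production, entries would go to an append-only log store rather than stdout.

```python
# Illustrative audit sketch: wrap each tool an agent can call so every
# action is logged with its layer, inputs and outcome. Names are hypothetical.

import functools
import json
import time

def audited(layer: str):
    def decorator(tool):
        @functools.wraps(tool)
        def wrapper(*args, **kwargs):
            entry = {"ts": time.time(), "layer": layer,
                     "tool": tool.__name__, "args": repr((args, kwargs))}
            try:
                result = tool(*args, **kwargs)
                entry["result"] = repr(result)
                return result
            except Exception as e:
                entry["error"] = repr(e)   # failures are auditable too
                raise
            finally:
                print(json.dumps(entry))   # in practice: append-only log store
        return wrapper
    return decorator

@audited(layer="data-access")
def fetch_customer_record(customer_id: str) -> dict:
    return {"id": customer_id, "tier": "gold"}

fetch_customer_record("C-1001")  # every call leaves an audit trail
```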
Mary K. Pratt is an award-winning freelance journalist with a focus on covering enterprise IT and cybersecurity management.