How agentic AI governance tackles data, security challenges
AI agents promise real gains -- and pose real risks. Enterprises that move fast without first tightening governance controls might struggle to prevent rogue behavior.
Enterprises are going all in on agentic AI, accelerating initiatives even as they outpace the controls required to govern them.
The gap between agentic AI ambition and readiness is widening as organizations move from experimentation to production. The key question is no longer whether AI agents can automate work, but whether the necessary data governance, observability and identity foundations are in place to handle these autonomous systems at scale. A reckoning is coming: AI and data leaders must prove that the money spent on agentic AI delivers value while also managing costs and reducing operational risk as adoption rises.
Agentic AI exposes gaps in enterprise data readiness
The pressure shows up in the research. Forrester Research says enterprises are entering AI's "hard hat" phase, where cost control, governance and operational reliability matter more than impressive demos. It predicts that 25% of overall planned AI spending in 2026 will be pushed to 2027 as CFOs press harder for ROI. Meanwhile, in its latest annual survey on enterprise AI adoption, conducted in August and September 2025, Deloitte found that 74% of companies planned to deploy agentic AI moderately or more extensively within two years, up from the 23% already doing so at the time of the survey. However, only 21% of the 3,235 respondents said their company had a mature agentic AI governance model.
"This is a different type of thing; it doesn't work the way we're used to software working," said Jeff Pollard, vice president and principal analyst at Forrester Research. "What's different with agents is we're giving them agency. That's an important distinction, because this is really the first time we have widely deployed software in our environment that has an intent -- a goal -- and has the ability to go do something without us explicitly telling it what to do."
Some agentic AI risks are familiar, including exposure of sensitive data, Pollard said. Others are newer, namely the risk that an agent will take harmful actions because an attacker alters its goal or because issues in the organization's IT and data infrastructure cause performance drift.
A McKinsey survey released in 2026 found that security, risk management and governance concerns are among the most frequently cited barriers to scaling AI, including agentic systems. AI security guidance group OWASP highlights goal hijacking, tool misuse, and identity and privilege abuse as core threats for autonomous systems in 2026.
Agents that exceed their intended scope can have significant, even catastrophic, consequences, such as disrupting business operations or, in certain domains, creating safety risks. For enterprise leaders, that makes agentic AI less a decision about models than one about data governance, observability, architecture, and identity and access management (IAM). To limit risk, organizations need controls to keep agentic AI secure, governable and resilient.
"Bounded autonomy is the key here," said Adnan Masood, chief AI architect at IT consultancy UST.
He said that means managing the identity of agents, limiting their access to data and the actions they can take, and monitoring their behavior.
Policy as the control layer for agentic AI
Leaders need to establish which actions agents can take independently and which require human approval, based on risks to the organization and the necessary controls to prevent them, Masood said. Decisions on acceptable agentic AI use should be codified in formal policies to guide where agents will be used, how much autonomy they have and which safeguards apply.
"We need to think about digital agents as workers and think about the policies around them just as we would humans," Masood explained.
Policy and governance capabilities are also becoming a buying requirement. IDC says organizations increasingly need AI governance platforms that provide a centralized inventory of AI systems and support policy management, risk assessment, audit trails and continuous monitoring across the full lifecycle of traditional AI, generative AI and agentic AI models. In practice, that means defining where agents can act autonomously, where human approval is required and which records of an AI system's behavior must be retained for audit and compliance.
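As an illustration, decisions about where agents may act autonomously and where approval is required can be expressed as code. The sketch below is a minimal, hypothetical policy check in Python; the action names, scopes and default-deny rule are assumptions for illustration, not drawn from any specific governance platform:

```python
from dataclasses import dataclass

# Hypothetical action lists -- in a real deployment these rules would live
# in a centralized policy engine, not in application code.
AUTONOMOUS_ACTIONS = {"read_report", "summarize_ticket"}
APPROVAL_REQUIRED = {"issue_refund", "delete_record", "change_config"}

@dataclass
class PolicyDecision:
    allowed: bool
    needs_human_approval: bool
    reason: str

def evaluate_action(agent_id: str, action: str) -> PolicyDecision:
    """Decide whether an agent may act autonomously, must escalate, or is blocked."""
    if action in AUTONOMOUS_ACTIONS:
        return PolicyDecision(True, False, f"{action} is within {agent_id}'s autonomous scope")
    if action in APPROVAL_REQUIRED:
        return PolicyDecision(True, True, f"{action} requires human sign-off")
    # Default-deny: anything not explicitly listed is out of policy.
    return PolicyDecision(False, False, f"{action} is not in policy for {agent_id}")
```

The default-deny branch mirrors the bounded-autonomy principle: an agent can only do what the policy explicitly grants, and every decision returns a reason that can be retained for audit.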
How to track agent actions across data workflows
It is impossible to govern agents without having data on the actions they take, Pollard said.
"We need full observability into the behaviors, the tool access, the data access, into the identity or task an agent is operating on behalf of, and telemetry on the reasoning of the agent -- why it did what it did, what step did it choose to take over the other," he said. "We need data about what's happening. And we need something that's laid on top of it to understand the intent of the agent."
As a result, observability in AI systems now means more than application uptime. It must include logs, metrics and traces for the runtime environment, plus decision telemetry, tool-use records and business-context signals that reveal when an agent drifts or exhibits harmful behavior.
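One way to picture that decision telemetry is as a structured record emitted per agent step. The field names below are illustrative assumptions, not a standard schema; a real fleet would standardize on something like OpenTelemetry attributes and ship records to a log pipeline rather than printing them:

```python
import json
import time
import uuid

def log_agent_step(agent_id, acting_for, tool, data_accessed, reasoning, alternatives):
    """Emit one structured telemetry record per agent decision step.

    Captures the dimensions analysts call for: behavior, tool access,
    data access, the identity the agent acts on behalf of, and the
    reasoning behind the step it chose over alternatives.
    """
    record = {
        "trace_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "on_behalf_of": acting_for,          # identity or task the agent serves
        "tool_invoked": tool,
        "data_accessed": data_accessed,
        "reasoning": reasoning,              # why it did what it did
        "alternatives_rejected": alternatives,
    }
    print(json.dumps(record))                # in production: send to a log pipeline
    return record
```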
Treating agents as managed identities
Organizations with mature cybersecurity and data privacy practices usually have strong IAM programs that ensure only authorized workers and systems can access enterprise data and applications, and only when required to do their work. Enterprises need the same IAM controls for AI agents, according to Masood.
"You want to make sure the action that an agent is permissioned to perform is the only one it performs," he said.
Masood also said organizations should create short-lived access privileges for agents, meaning access is granted only when an agent is authorized to complete a task as part of a workflow.
"Authentication shouldn't be forever," he added.
In addition to data misuse by agents, attackers can exploit identity and privilege vulnerabilities, according to OWASP. To prevent such incidents, it recommends the use of both task-based and time-bound permissions, plus measures such as verifying all privileged steps with a centralized policy engine and escalating critical actions for human approval. Deloitte emphasizes automated decisions should be auditable and embedded into existing governance processes rather than managed through informal or shadow controls.
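A toy sketch of what task-based, time-bound permissions could look like follows. This is a simplified illustration under stated assumptions: a real deployment would delegate minting and checking credentials to an IAM system (for example, OAuth token exchange or workload identity), and the class and method names here are hypothetical:

```python
import secrets
import time

class TokenBroker:
    """Toy broker that mints task-scoped, time-bound agent credentials.

    Illustrates 'authentication shouldn't be forever': each grant names
    a task, a set of scopes and an expiry, and anything outside that
    grant is denied.
    """
    def __init__(self):
        self._grants = {}

    def issue(self, agent_id: str, task: str, scopes: set, ttl_seconds: int = 300) -> str:
        token = secrets.token_urlsafe(16)
        self._grants[token] = {
            "agent": agent_id,
            "task": task,
            "scopes": scopes,
            "expires": time.time() + ttl_seconds,
        }
        return token

    def authorize(self, token: str, scope: str) -> bool:
        grant = self._grants.get(token)
        if grant is None or time.time() > grant["expires"]:
            return False                     # unknown or expired: deny
        return scope in grant["scopes"]      # only the permissioned action passes
```

Because the grant expires on its own, a compromised or forgotten credential stops working without anyone having to revoke it.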
Data architecture as a control point for agents
Siloed data stores and static data warehouse models won't support secure, governable and resilient agents, said Pablo Ballarin, co-founder and virtual chief information security officer at cybersecurity services firm Balusian S.L. and a member of the Emerging Trends Working Group at ISACA, an association for governance professionals.
Ballarin said that's why it's essential that organizations move to dynamic, entity-centric and governed data fabric architectures.
That's the strategy at the University of St. Thomas in St. Paul, Minn. Jena Zangs, the university's chief data and AI officer, said it uses a centralized data lakehouse, data mesh architecture and metadata tagging to support agentic AI use.
"That gives us governability," she said. "And it keeps data and business close to create a data product, so when we talk about agentic AI, we can keep agents to a specific domain and they don't have to have access to the entire database."
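The mechanics of keeping an agent inside one domain can be as simple as filtering a data catalog by metadata tags. The catalog entries and domain names below are invented for illustration and are not the university's actual schema:

```python
# Hypothetical catalog entries carrying domain metadata tags.
CATALOG = [
    {"table": "admissions.applicants",  "domain": "admissions"},
    {"table": "finance.tuition_ledger", "domain": "finance"},
    {"table": "admissions.events",      "domain": "admissions"},
]

def tables_for_agent(agent_domain: str, catalog=CATALOG) -> list:
    """Return only the tables whose metadata tag matches the agent's domain,
    so a domain-scoped agent never sees the rest of the lakehouse."""
    return [entry["table"] for entry in catalog if entry["domain"] == agent_domain]
```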
From deployment to continuous agent oversight
A modern data architecture enables organizations to embed controls and policy enforcement at the data and access layers, said Arpita Soni, a senior member of IEEE. But organizations also need to continuously monitor data environments and AI agents, analyzing observability data to ensure that the controls and policy enforcement mechanisms are working as expected, Soni said.
"You have to monitor and trace everything an agent does," she said, adding that organizations also need to tune their security information and event management systems to ingest agentic AI monitoring data and send alerts on issues, such as model drift.
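To make the drift-alerting idea concrete, here is a deliberately naive sketch: it watches an agent's rolling error rate and flags when it drifts above a baseline. The window size, baseline and tolerance are illustrative values, not recommendations; production monitoring would rely on the SIEM's own analytics:

```python
from collections import deque

class DriftMonitor:
    """Flag when an agent's recent error rate drifts above its baseline.

    Naive illustration only: a real system would correlate many signals
    (tool failures, policy violations, output quality) rather than a
    single rolling error rate.
    """
    def __init__(self, window: int = 100, baseline: float = 0.02, tolerance: float = 0.05):
        self.outcomes = deque(maxlen=window)  # rolling window of pass/fail outcomes
        self.baseline = baseline              # expected error rate
        self.tolerance = tolerance            # allowed drift before alerting

    def record(self, is_error: bool) -> bool:
        """Record one agent outcome; return True if an alert should fire."""
        self.outcomes.append(is_error)
        error_rate = sum(self.outcomes) / len(self.outcomes)
        return error_rate > self.baseline + self.tolerance
```

An alert from a monitor like this is the trigger for the audits described below: the rolling window of outcomes points investigators at the exact span of agent activity to review.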
Organizations must also use the data from observing and monitoring agents to run audits when agents produce incorrect outputs.
This need to monitor agents is not theoretical. In 2022, an Air Canada chatbot gave a customer incorrect information about the airline's bereavement fare policy, and a tribunal later ordered the airline to pay damages. The case reinforced that companies can be held responsible for misinformation delivered by AI systems acting on their behalf.
Weak controls could expose companies to remediation costs, compensation claims and reputational damage. Stronger agentic AI governance, by contrast, brings improved execution speed, data quality and ROI.
Mary K. Pratt is an award-winning freelance journalist with a focus on covering enterprise IT and cybersecurity management.