
Ethical considerations of agentic AI and how to navigate them

Beyond the risks associated with traditional AI, agentic AI poses specific ethical concerns, including diminished human oversight, privacy erosion and misaligned outcomes.

Agentic AI is a significant advancement in the development, deployment and practical application of AI systems. It opens opportunities to create more autonomous systems that can operate independently, moving beyond the traditional paradigm of AI models functioning within a constrained system. AI agents are also increasingly being integrated into multi-agent systems, which adds a layer of complexity beyond traditional AI norms.

Along with these agentic AI advancements come new ethical concerns. Agents introduce new security and governance risks as they acquire the ability to query more types of systems or enter data into systems of record. Greater autonomy raises the risk of agents running amok in novel and unexpected ways, and multi-agent systems can make complex, interconnected workflows harder to understand and manage.

Agentic AI's broader scope, compared to traditional AI, also inflates the associated risks, and multi-agent systems could accelerate them further, said Vershita Srivastava, practice director at consultancy Everest Group. These systems can exacerbate agentic AI's existing black-box decision-making problem, making it harder to trace ownership of a specific outcome across a network of collaborating, intelligent autonomous agents. "The result is often unforeseen emergent behaviors and a significant accountability gap where ownership is nearly impossible to determine," Srivastava said.

When traditional AI is applied, it's easier to trace issues back to human developers or data providers. But it's becoming increasingly difficult to assign blame when unintended outcomes occur with highly autonomous agents. "Without a clear stakeholder stepping up to assume responsibility," Srivastava said, "it is becoming everybody's responsibility and consequently nobody's problem."

Why do ethical concerns surround agentic AI?

Agentic AI not only introduces a range of new ethical issues, but also amplifies many of the traditional concerns associated with AI systems. The shift from AI tools to autonomous partners, for example, creates new categories of risk, including liability when damage spirals out of control and difficulty in assessing the emergent behaviors of multi-agent systems.

Without a clear stakeholder stepping up to assume responsibility, it is becoming everybody's responsibility and consequently nobody's problem.
Vershita Srivastava, practice director, Everest Group

AI agents can be viewed as digital employees that never sleep, working 24/7 and automatically making thousands of decisions per hour while continuously learning and adapting to their environment, explained Manisha Khanna, global product marketing lead for AI at analytics and data science tools provider SAS. But there might be times when humans can't peek over an agent's shoulder to see exactly what it's doing or why.

"The change in levels of agentic autonomy, tool use, multi-agent systems and other factors makes the ethical minefield bigger and more complex," Khanna reasoned. "With multiple agents constantly sharing and modifying data with each other, it's even harder to track what decision was taken by which agent and why." 

These systems can also learn new strategies from the data they receive to get the right answers or make the right decisions, but the results don't always map to human thought processes. Understanding them is akin to reverse-engineering the thought process of a chess grandmaster playing a thousand games simultaneously while learning new strategies. Dynamic learning capabilities can lead to better performance but also risk accuracy drift, hallucinations or etiquette erosion.

Because data is dynamically processed and modified many times during use, data management and protection issues, including data ownership, provenance, privacy, copyright and control, become even more complex and critical. "The old principle of 'garbage in, garbage out' still applies, and agentic AI amplifies those concerns by operating with greater autonomy and less predictable outcomes," said Marlene Wolfgruber, product marketing lead for AI at automation platform provider ABBYY.

Along with autonomy, the need for trustworthy data and clearly defined outcomes adds to agentic AI's ethical concerns. AI agents need reliable data and process insights to guide their decision-making. "If that is not given," warned Wolfgruber, "we risk reinforcing bias, creating black-box systems and diminishing human accountability."

Ethical responsibility must scale with an agentic AI system's autonomy by ensuring reliable data, transparent processes, clear goals and well-defined use cases. Regularly practicing responsible AI requires translating principles into controls, integrating those controls into the software development lifecycle and continuously measuring outcomes, advised Adnan Masood, chief AI architect at IT consultancy UST, U.S., who has spent a decade operationalizing AI in regulated industries.

Table: Agentic and generative AI are related but different.

What are the ethical concerns around agentic AI?

Agentic AI can amplify familiar AI concerns such as bias, transparency, accountability and privacy, and introduce new and unique concerns like lack of human oversight and unintended consequences.

Embedded bias

Biased outcomes can result when retrieval procedures and the tools themselves contain bias. "Unlike the familiar bias problems where we can audit training data, competence boundaries are dynamic and contextual," said Nick Kramer, principal of applied solutions at consultancy SSA & Company.

Lack of transparency

Transparency can be compromised because the outcome depends on a chain of prompts, plans, tool choices, external system states as well as the AI model's output. "The most insidious issue one encounters with agentic systems is that they'll often operate with unwavering confidence even when they're outside their competence zone," Kramer said. "It's not just about understanding why an agent made a decision but understanding whether it should have made a decision at all."

Diminished human oversight

Accountability is spread across the base model, orchestration layer, tools, organizational policies and human supervisor. Humans risk becoming passive approvers instead of active decision-makers. "This makes transparency and explainability more urgent because if humans don't fully understand the system, oversight becomes weak," explained Ravindra Patil, vice president and practice leader of data science at consultancy Tredence.

Erosion of data privacy guardrails

Privacy risks increase when AI agents stitch together context across various tools, sometimes bypassing existing data loss prevention boundaries. Privacy can erode when AI systems simply do their job too well. "Unlike static AI that processes one data set at a time," said Khanna, "agentic AI systems are designed as persistent data aggregators that continuously collect, cross-reference and analyze information from multiple sources. No amount of encryption or anonymization can protect us from a system that never forgets anything and can connect dots across data sets that humans would never think to compare."

Misaligned goals and outcomes

Adaptive agents that constantly optimize may find new ways to achieve goals that don't align with human values. "Beyond traditional bias," Patil noted, "we now face the risk of unpredictable, real-time behaviors that can't always be caught in testing."

Recurring failures

Masood said he sees recurring failure modes such as "specification debt, capability overhang and emergent coordination." Specification debt, he explained, occurs when proxy goals are optimized at the expense of the original intent; capability overhang happens when new tool access creates unsafe execution paths overnight; and emergent coordination is when multiple helpful agents produce unintended and harmful side effects such as over-notifying customers or accelerating spending.

Applying best practices to agentic AI development

Businesses can apply several best practices to help mitigate agentic AI ethical concerns and risks during the development, implementation and integration of AI agents into operational workflows, including the following:

  • Begin with a definition of the problem and how agentic AI can solve it, Wolfgruber suggested. Agentic AI shouldn't be viewed as a universal tool. Define the expected outcome, requirements and boundaries. "Without that, the autonomy of agentic AI can become a liability," Wolfgruber said.
  • "Never go from zero to full autonomy," said Kramer, who recommended building what he calls "autonomy ladders." The rungs of the ladder form a systematic progression whereby agents earn increased freedom through demonstrated competence.
  • Since agentic systems can spiral out of control in multiple ways, Kramer suggested an "interruptibility by design" approach in which every agentic system is designed to be stopped, paused or rolled back. An agentic system, he explained, could automatically downgrade its own autonomy when its confidence drops below certain thresholds, its inputs deviate from training distributions or its outputs affect more operations than a predetermined limit (see the sketch after this list). "It's not just about having a kill switch," he noted, "it's about systems that know when to pull their own emergency brake."
  • Establish an "autonomy budget," Masood said, so that newly deployed agentic systems can only propose actions rather than execute them. Constrained actions can then be enabled under specified thresholds according to risk level rather than generic maturity. Prioritize enforcing tool least privilege, shipping with full provenance logging and expanding autonomy under measured, policy-enforced conditions. "These three moves deliver outsized risk reduction without stalling innovation," Masood added.
  • All code should be managed using signed manifests, with credentials assigned just in time. Draw inspiration from classic database principles, Masood advised, including role-based access controls, scoped tokens that expire and allow/deny lists at the orchestration layer.
  • Conduct "long-horizon evaluation" and "adversarial testing" in a safe sandbox, Masood advised. Scenario-driven simulations can run agents for many steps with realistic tool mockups and apply adversarial perturbations such as bad inputs, stale states and conflicting goals. It's important to measure side effects in addition to task success, with an eye toward policy violations, unnecessary data exposure and cost anomalies.
  • Log every aspect of agentic behavior to improve provenance tracking, increase observability and support audit trails. Every action should be immutably logged, including tool inputs and outputs, model version, policy checks and approvals, to enable audits and learning loops, said Mikael Quist, CTO of renewable energy consultancy American Power Resources. The log data should be protected with cryptographic hashes that flag any subsequent changes, as in the second sketch below.
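
To make ideas such as autonomy ladders, interruptibility by design and an autonomy budget more concrete, the following minimal Python sketch shows one way an orchestration layer might gate agent actions. The AutonomyGovernor class, its signal names and its thresholds are hypothetical illustrations of the pattern, not a reference implementation of any vendor's framework.

```python
from enum import IntEnum


class AutonomyLevel(IntEnum):
    """Rungs of a simple autonomy ladder: agents earn freedom gradually."""
    PROPOSE_ONLY = 0  # agent may only suggest actions for human approval
    CONSTRAINED = 1   # agent may execute low-risk, reversible actions
    AUTONOMOUS = 2    # agent may execute freely within its approved scope


class AutonomyGovernor:
    """Hypothetical controller that downgrades autonomy when risk signals trip."""

    def __init__(self, min_confidence=0.8, max_affected_records=100, max_input_drift=0.2):
        self.level = AutonomyLevel.PROPOSE_ONLY  # every agent starts propose-only
        self.min_confidence = min_confidence
        self.max_affected_records = max_affected_records
        self.max_input_drift = max_input_drift

    def promote(self):
        """Move one rung up the ladder after demonstrated competence (e.g., a passed audit)."""
        self.level = AutonomyLevel(min(self.level + 1, AutonomyLevel.AUTONOMOUS))

    def check(self, confidence: float, input_drift: float, affected_records: int) -> AutonomyLevel:
        """Pull the emergency brake: drop back to propose-only if any signal trips."""
        if (confidence < self.min_confidence
                or input_drift > self.max_input_drift            # inputs deviate from training distribution
                or affected_records > self.max_affected_records):  # blast radius too large
            self.level = AutonomyLevel.PROPOSE_ONLY
        return self.level


# Usage: the orchestrator consults the governor before every action.
governor = AutonomyGovernor()
governor.promote()  # the agent has earned constrained execution
if governor.check(confidence=0.65, input_drift=0.05, affected_records=12) == AutonomyLevel.PROPOSE_ONLY:
    print("Autonomy downgraded: route this action to a human approver")
```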

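As a companion to the logging recommendation above, the following minimal sketch shows a hash-chained, append-only audit log in which each entry embeds the hash of the previous one, so any later tampering breaks the chain and can be flagged. The class and field names are hypothetical assumptions; a production system would also sign entries and write them to immutable storage.

```python
import hashlib
import json
import time


class HashChainedLog:
    """Illustrative append-only audit log with tamper-evident hash chaining."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def append(self, agent_id, tool, tool_input, tool_output, model_version, policy_checks):
        """Record one agent action, linking it to the previous entry's hash."""
        record = {
            "ts": time.time(),
            "agent_id": agent_id,
            "tool": tool,
            "input": tool_input,
            "output": tool_output,
            "model_version": model_version,
            "policy_checks": policy_checks,
            "prev_hash": self._last_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self._last_hash = record["hash"]
        self.entries.append(record)

    def verify(self):
        """Recompute the chain; return False if any entry was altered after the fact."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True


# Usage: log every tool call with inputs, outputs, model version and policy checks.
log = HashChainedLog()
log.append("agent-7", "crm.update", {"record_id": 42}, {"status": "ok"},
           "model-v3.1", ["pii_scan:pass"])
print("Audit trail intact:", log.verify())
```
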
Future of agentic AI technology and development

Agentic AI is poised to transition from a support function to a growth engine by delivering hyper-personalized experiences, autonomous supply chain planning, dynamic pricing and AI-powered partner ecosystems. "Looking ahead, the most disruptive use cases will be multi-agent ecosystems orchestrating entire business workflows and role-based copilots augmenting frontline workers in real time," Patil predicted.

Agentic AI's future, Quist said, will hinge on broad collaboration across industry, regulators and researchers to prevent agentic systems from running amok. He recommended the scenario simulator described in the "AI 2027" report by the nonprofit AI Futures Project as a way to explore different potential AI outcomes based on past behaviors.

    "The ethical future of agentic AI," Quist conjectured, "will combine stronger technical guarantees, transparent provenance, rigorous testing and thoughtful regulation so that agents can be powerful without becoming uncontrollable." Early regulations, he explained, will focus on action risk rather than model type and likely include mandatory tamper-proof logging and incident reporting, minimum standards for identity and provenance, pre-deployment risk assessments for high-risk agents, and certification or auditability requirements for critical integrations. Reputation and disclosure systems for agents will enable other systems and individuals to establish interaction policies and consent rules.

We're moving toward ethical co-evolution of systems where human curation and ethics develop with the agent.
Nick Kramer, principal of applied solutions, SSA & Company

Ethical AI's future, Khanna added, "is about building human-in-the-loop intelligence designed with transparency and override options that ensure human domain expertise can step in when business conditions, regulations or ethical standards change. It's about redesigning work to amplify uniquely human capabilities like creativity, critical thinking and emotional intelligence that AI cannot replicate, rather than simply automating tasks."

Traditional AI governance was akin to putting up a fence around your backyard and calling it secure, but agentic AI requires something more sophisticated: governance methods that continuously monitor, learn and adapt to new threats. The most effective future governance will likely be a hybrid model that fundamentally shifts responsibility from traditional risk managers to those building AI systems. Instead of waiting for problems to emerge and trying to fix them through compliance checkboxes and oversight committees, organizations will need to embed ethical decision-making directly into the development process.

    "Think of it like the difference between having food safety inspectors show up at restaurants after customers get sick versus training every chef to understand food safety from day one," Khanna noted. Regulatory bodies still provide the framework and backstop, but the real governance occurs in code repositories, model training sessions and architecture decisions when and where AI systems are created rather than in boardroom risk assessments that occur months later.

    "The trajectory I see isn't toward more autonomous agents, but toward more sophisticated forms of human-AI collaboration," Kramer surmised. Insurers and banks, for example, that are adopting this approach are augmenting human judgment in increasingly nuanced ways instead of replacing it.

    "We're moving toward ethical co-evolution of systems where human curation and ethics develop with the agent," Kramer observed. "The companies that will win in this space won't be those with the most autonomous agents, but those with the most thoughtfully integrated ones. The question isn't whether we can build agents that operate without human oversight, but whether we can build agents worthy of the trust we're placing in them."

    George Lawton is a journalist based in London. Over the last 30 years, he has written more than 3,000 stories about computers, communications, knowledge management, business, health and other areas that interest him.
