How to escape agentification pilot purgatory for scalable AI

Deloitte exec says redesigning work is key and explains how to do it without stoking job-loss fears -- but only after a rigorous goal-setting process that sets clear priorities.

Agentic AI deployments in businesses so far have been reminiscent of the "rogue IT" days of SaaS, with employees accessing or demanding unauthorized apps, authorized tools just starting to roll out, and business and IT leaders scrambling to articulate a coherent strategy for the organization.

True, sanctioned pilots are plentiful, but many organizations report frustration with scaling up these agentic AI applications, in part because the apps are designed to handle narrow tasks. Security, governance and data foundations are usually works in progress, and multi-agent orchestration -- getting agents to collaborate on complex tasks and goals -- remains unsolved for in-house developers and software vendors.

Deloitte, the Big Four accounting firm and IT consultancy, has been introducing advisory services and online tools designed to help clients draft and execute agentic AI roadmaps that produce tangible business results.

In this Q&A, China Widener, Deloitte's vice chair and U.S. technology, media and telecommunications industry leader, explained how to escape "pilot purgatory" and plot a strategy for scalable agentic AI. She also shared advice on addressing AI job loss fears by redesigning work in ways that benefit both individuals and their organizations.

Widener is approaching her 20th anniversary at Deloitte in May. She previously held C-suite roles in Ohio state government and was an assistant county prosecutor.

Editor's note: This interview was edited for clarity and brevity.

If you could name one thing, what's preventing companies from going beyond pilots and getting enterprise-wide ROI from agentic AI?

China Widener, Deloitte Vice Chair and U.S. Technology, Media & Telecommunications Industry LeaderChina Widener

China Widener: It is more than one thing, without question, but central to all of them is this idea of clarity around the vision. There's now, there's next, and there's the future, and each one brings value. The question is, what are you starting with?

This is not a technology problem -- the tech works. The question is in what context and to what end, which is about the vision the organization has for itself.

You can extract value in the form of cost containment or mitigation, and create value through growth, new products, tools and services, or changed experiences. Where you start your journey is a function of understanding what your goals are. What you identify then becomes the set of questions.

More than half of companies in our survey are early stage or don't have a strategy. That's a challenge because you don't have a roadmap for where you are going or a way to judge the progress you're making, let alone the steps to take.

Deloitte's 2026 "State of AI in the Enterprise" survey captured insights from more than 3,200 business and IT leaders directly involved in AI initiatives. Only a third said they used AI to truly transform their business. The report suggested that's because most are still focused on training employees for AI fluency and haven't done enough to redesign work. Why is redesigning work so important?

Widener: There's value that simply comes from doing a task faster. The notion of upskilling and getting people AI fluent is to get them hands on -- to touch AI, work with it and utilize it when executing their daily duties. Nothing about their job has changed; it's just whether they can do it faster, more efficiently or with more precision.

There's a step in the process that asks how to use AI to augment an employee's cognitive skill set. Research shows the greatest lift is leveling up employees to create a certain level of cognitive parity. But if you just apply it to the work they already do, you make what they do efficient but haven't asked whether they need to do it anymore.

Redesigning work allows you to change the way time is spent and on what. Today, a function may have 17 tasks associated with it. If we redesign the work itself, there may only be 10 tasks in the future because some of them can be executed through some form of automation. You've freed up significant time and can give that same person different or additional work. They can focus on the 10 tasks that you can't or shouldn't automate and that really require human judgment.

Many workers are worried about losing their jobs or jobs changing so much that they can't keep up. How should organizations handle training and upskilling so employees trust they'll get help moving into new roles?

Widener: It's like most things. There's an evolutionary path of travel. You don't wake up on Tuesday and suddenly have adopted and adapted to the technological change. You start to make things available to employees so they can take the work they already know how to do but augment it and do it faster -- a research job, for example. Ultimately, the research can happen in a more comprehensive, quicker way. The product you produce, which is still your product, is produced with greater quality and more robustness because you've had access to those tools.

Then there's moving the individual forward to cognitive enhancement or support of their thought process and giving them access to tools for that. It's this stepping approach that moves employees forward to having an agent that will execute some of the most repeatable functions. But where judgment and quality are required, the human takes on managing the agent.

This is not a technology problem -- the tech works. The question is in what context and to what end.
China Widener, vice chair, Deloitte

You have to step a workforce through that. It's not just training. Some organizations approach it as just needing to upskill people. Upskilling is important, but it's not the whole story.

There's also a change management aspect. Our research has shown that some things are underrated by leaders in organizations -- things like redesigning roles. AI fluency matters, and it enables adoption and execution -- hard measurables, such as how much time was saved and how much revenue was generated. Those things are tangible and easily calculable.

The intangibles, though, are equally important, and that's the change management function: how you are thinking about the operating model of the organization when you have this technological capability available to you.

What agentic AI use cases are you seeing in the industries in your purview?

Widener: We were just having this conversation about agentic AI and its capacity to be creative and create efficiency, productivity and effectiveness. It raises a natural question for any organization -- whether you are growing, concerned about being disintermediated or looking for your next corporate strategy -- of how to take full advantage of agentification and the opportunities it creates.

Agentification isn't impacting every industry in exactly the same way. In some industries, it looks more internally focused and efficiency-driven and changes back-office functionality. For others, such as the entertainment industry, there is impact to the core of the business, the creativity of storytelling. It has a different impact in that industry vs. hardware, where it may be about changing the management of a supply chain.

What are clients' most frustrating challenges in taking agentic AI beyond the pilot stage and getting enterprise-wide use from it?

Widener: That is the biggest pain point. Most companies have done some form of experimentation or pilot. Some think of it as pilot purgatory because the percentage of pilots they've been able to take to scale has been less than 20%. The value proposition of agentification isn't in question. What is in question is how to reap the value and benefit from it faster and at scale.

The challenges that have arisen start with the pilot itself and whether it has been constructed to solve an enterprise problem versus a particular productivity problem in isolation, for a few use cases or a single one. But is that use case one that should be scaled, and is it scalable?

Technological and data choices get made in pilots that don't scale later. The pilot can't be framed against a narrow use case. It has to be framed against the broader infrastructure, understanding the bigger data questions that might arise and recognizing that governance will be significant. A data-quality question in a pilot might be manageable because the pilot is small but no longer manageable when you scale. You have to decide the scale questions as part of the pilot, not build the pilot and say you'll worry about scale questions later.

What should companies put in place to approach agentic AI from a scalability viewpoint and execute on that level?

Widener: Pilots tend to grow in a fairly organic way: Organizations buy access to a particular LLM or AI tool, unleash it on the organization as a whole and let people who do a job use the tools to improve something. Then they harvest the best of those ideas and look at whether they should be scaled.

There are organizations that have instead taken a broader, top-down or enterprise approach and said, "These are the tool sets we want to utilize, and here's where we want to focus them."

What's critical in all this, no matter which end of the spectrum you start from, is having a disciplined approach to agentifying any aspect of the business. It doesn't matter who the stakeholder is. If you have a consistent and disciplined approach by which ideas are evaluated and their proposed value is calculated, and you understand the technical implications, then you can get to real consistency.

No organization agentifies overnight. Every organization starts in some way -- maybe by capability, maybe by business unit. But if you don't have a consistent and disciplined evaluative process, everybody starts in a different place, calculates a different benefit and executes against a different vision.

A disciplined approach requires a few components. One is clarity on how you'll choose goals. Is it about cost? Growth? Experience? You have to be clear, because some things that are a better fit for agentification may bring only middling value. Other things may bring higher value, but the fit is going to be a heavier lift.

You also need standards and protocols to evaluate those things to arrive at the right place for your agentification bet, based first and foremost on what you value, and then moving the organization on a periodic basis through the same evaluation to arrive at a focus. You'll get consistency and clarity: You'll understand what you're spending money on and what value to expect in return, and you'll be able to calculate and monitor that value.

Agentic AI is trendy, but it's happening in a broader context that includes generative AI and large language models, some of which are older. What are the foundational elements of agentic AI that business and IT leaders should focus on first?

Widener: The more autonomy you want your agents to have, the more important your governance and data quality are, because an autonomous agent is going to work independently. It's not going to seek permission to execute. And the more autonomy you create, the greater the security that is necessary. These foundations matter at the start because they become even more important as agents gain more and more autonomy.

Deloitte introduced a tool called Enterprise AI Navigator. What is it, and how do people get it?

Widener: It was created because there wasn't a consistent tool out there that allowed organizations to have the kind of disciplined approach that is applicable in the moment and in the future.

It is not something we sell to clients as a product but the tool we use to help clients move through their agentic journey and arrive at both their roadmap and the outcomes they have identified. It allows you to have greater confidence in the choices you've made.

Your future workflow -- the change to the work itself that we talked about -- is proposed so you see it before the first agent is coded. You understand what your future is going to look like and can start to understand how your operating model changes and the changes you need to make to the workforce in terms of training, upskilling or job shift. You know all of that going in.

Today, those are things discovered along the way, and that's what contributes to pilot purgatory. You need to know them when you start your agentic journey, not when you're already in it.

David Essex is an industry editor who creates in-depth content on enterprise applications, emerging technology and market trends for several Informa TechTarget websites.
