
Reconsider the AI readiness gap in data and analytics

The AI readiness gap is widely framed as temporary, but examining it across three distinct enterprise layers suggests it may be more structural than the market assumes.

Enterprise demand for AI has pulled the data and analytics industry into one of its most active product cycles in years. Vendors are shipping agentic features, intelligent pipelines and automated remediation, and product roadmaps increasingly assume a level of customer readiness that the research does not quite support.

Gartner's own numbers are hard to reconcile with the market's momentum. Only 1 in 5 AI investments are currently delivering ROI, just 14% of leaders are confident in their data governance, and 57% of IT leaders say they were pushed to deploy before they were ready. The industry calls the mismatch between ambition and organizational maturity the 'AI readiness gap,' and treats the gap as temporary – a condition of an early market that time and effort will close.

That framing deserves some scrutiny – and so does the question of whose problem the gap actually is. If it is transitional, it belongs to enterprises that need to catch up, and time is on their side. If it is structural, it belongs to the vendors building into it – and the evidence suggests they have already moved on. The distinction is not semantic. The pace of deployment is set by vendors and boards, while the pace of readiness is set by the things organizations cannot buy their way out of – organizational culture, workforce capability and infrastructure debt. Whether those two realities converge is a question worth asking before the answer becomes obvious in hindsight.

A people and process problem

The widely reported ROI failures are usually treated as a technology maturity problem. Ataccama CEO Mike McKee, when asked how much implementation and change management play a role in those ROI failures, said he was "debating between saying over 50% or over 80%." If he is right, the majority of what the industry is calling a technology gap is something else entirely.

Implementation and change management do not align with product cycles – they move on organizational timescales, measured in years. McKee pointed to a private equity firm that has been working on its data problem for 13 years: five to digitize paper records, five to stand up reporting, and only now beginning to address the quality of the data underneath. Meanwhile, the volume of data organizations are trying to govern doubles every three years, and the number of people qualified to govern it does not. A gap only closes when the target being chased slows down enough to be caught.

A difficult number surfaces around business literacy. "I would say 99% of organizations can't connect data initiatives to business initiatives," McKee said, "let alone AI initiatives." That is a striking figure from a vendor whose business depends on the opposite being true, and it complicates the belief that the gap is purely transitional. It is not only about tooling or architecture, but whether organizations know what the tooling is for, and that is not a problem with a clean timeline.

The infrastructure layer

The people-and-process layer is one reading of the gap, but another lies beneath it, getting less attention because it is harder to narrate and pitch. The question it raises is whether the infrastructure underneath AI deployments is actually built for the job.

Anil Inamdar, head of data services at NetApp's Instaclustr business unit, put the distinction plainly. "The AI tools and agents people are talking about are all very exciting," he said. "But the foundation, nobody is looking at." Instaclustr works with customers on the open source data systems underneath their AI ambitions, and from that vantage point, the exciting layer gets all the development, while the "boring layer" underneath gets all the load.

What the boring layer needs, according to Inamdar, is four specific things that most existing architectures were not built around: real-time event streaming, vector search at scale, distributed state management for agents operating across multiple systems, and transactional reliability underneath it all. These are not incremental upgrades to existing architecture. They represent different design assumptions from the ground up. A data warehouse optimized for batch analytics is not simply a streaming platform waiting to be configured. An organization can have perfectly respectable data architecture and still miss most of that list. The gap between "we have architecture" and "we have AI-ready architecture" is where Inamdar sees most of his customers operating.

Pace is where the transitional reading stumbles. The AI stack sitting on top of the infrastructure layer, he noted, is changing every 12 to 18 months – new tools, new frameworks, new architectures and new assumptions about what the layer underneath needs to provide. Large infrastructure migrations, by contrast, typically run two to five years from design to production, and that estimate assumes organizational alignment.

Previous technology cycles had readiness gaps, and those gaps did close, but they closed in part because the target slowed down enough to be caught – the client-server transition took the better part of a decade to stabilize, long enough for enterprises to actually complete the build. What does closing mean when the target is re-architecting itself on a cycle shorter than most infrastructure projects take to complete?

Governance at machine speed

The third place to look is governance, where the transitional framing has had the hardest time lately. The premise of a transitional gap is that organizations can, in principle, wait to finish the foundational work, get the data in order, build authority structures, and then deploy when ready. The governance vendors closest to the problem are increasingly building their products on different premises.

Blake Brannon, OneTrust co-founder and chief innovation officer, was direct when the question came up. "I don't think anybody would ever be in a state where they say their data is perfect and ready to go, especially in an enterprise," he said. Brannon breaks governance into three multiplicative quotients: an intelligence quotient covering output quality, a security quotient covering protection from threats and misuse, and a governance quotient covering compliance and data ethics.

The structural claim is that the three quotients are multiplicative rather than additive. "If you get any of them wrong," Brannon said, "the long-term ROI of that agentic system could be zero. If any of them are zero, the whole thing is zero." That reframes the readiness question as a design problem rather than a sequencing problem. It is no longer about whether governance is ready before deployment, but about whether all three quotients remain above zero during deployment.
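The arithmetic behind that claim is worth making concrete. The sketch below is an illustration, not OneTrust's actual scoring model: the function name and the 0-to-1 scores are hypothetical, chosen only to show why a multiplicative model behaves differently from an additive one.

```python
def agentic_value(iq: float, sq: float, gq: float) -> float:
    """Hypothetical illustration of the multiplicative framing:
    overall value is the product of the intelligence, security and
    governance quotients (made-up scores on a 0-1 scale), so a zero
    in any one dimension zeroes the whole system."""
    return iq * sq * gq

# Strong across all three dimensions: value is high.
healthy = agentic_value(0.9, 0.8, 0.9)

# Same system with a governance failure: the product collapses,
# even though an additive average would still look respectable.
failed = agentic_value(0.9, 0.8, 0.0)
print(failed)  # 0.0 - one zeroed quotient erases everything
```

An additive model would average the failed case to roughly 0.57 and call it a middling system; the multiplicative model calls it worthless, which is exactly the design argument Brannon is making.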

The reason, in Brannon's telling, is scale. Governance has historically run through small, human-staffed committees – legal, security, compliance, privacy – that organizations consulted with when they wanted to do something new with data. Those teams do not survive contact with a deployment model measured in hundreds of thousands of business users running agents. "You can no longer have humans in the middle of that governing process," he said. "It fundamentally will not scale."

OneTrust's response is to build software that codifies the risk-decision process itself – an intelligence layer that can make the same kinds of calls a privacy or compliance professional would make, in real time, at machine speed. That strategy implies something about the future the company is building for. A vendor expecting the gap to close would build for the moment where organizations catch up. Building for agents governing agents assumes the old governance model is not coming back, and the new one cannot wait for human committees to scale.

A gap worth reconsidering

The question these three perspectives raise is whether they describe the same situation from different angles, or three unrelated problems that happen to rhyme. Any one observation would be easy to view as early-market noise in isolation, but taken together, the three timescales do not obviously intersect, and none describes a gap closing on the horizon the transitional reading implies.

There are valid reasons to defend a transitional reading. Gartner's own hype cycle analysis has tracked technology gaps before – cloud adoption, big data, and even early ERP implementations produced the same predictions of permanent organizational dysfunction that ultimately proved premature. The argument is that enterprises are not uniformly behind; the leaders are already building the right foundations, and market pressure will force laggards to follow. In this view, the current moment is loud because it is genuinely early, not because it is broken.

That case is not unreasonable. If the gap is transitional, the current moment looks like an early market doing what early markets do. Vendors are experimenting, and the conversation will eventually shift from readiness to results as the foundational work catches up to the deployment curve. That is what the industry's public framing assumes, and technology cycles have ended this way before.

However, if it is not transitional, the current moment looks very different. It looks like a market in which the people closest to the issue have absorbed the possibility that the gap will not close on a recognizable schedule and have organized their product strategies around operating within it. That reading is what the evidence gestures at, even when the sources do not say so directly – and at some point, a gap that is no longer described as closing stops being a question about the market's maturity and becomes a question about whose problem the gap actually is.


Scott Thompson is a site editor for TechTarget's Data Technologies group.
