3 essential transformations to escape AI stagnation in 2026
Despite initial hype and investment, many IT orgs are not seeing returns on AI initiatives -- but a shift is coming. Will this be the year that AI changes business as we know it?
Published: 09 Jan 2026
This past year, businesses invested massive amounts of money into AI initiatives. In 2026, those investments will climb even higher, but will businesses achieve the radical transformation benefits that AI promises?
While I remain confident in AI's ability to deliver on much of what it promises, I expect that 2026 will be a year when most enterprises take a step back, gather lessons from their early successes and failures, and then better equip themselves to take a massive leap in the years ahead.
3 IT transformations for AI success in 2026
The basic reasoning behind this expectation is that AI is very different from transformational technologies of the past. To maximize AI's potential, businesses will need to change how they manage data, as well as how they architect and invest in IT infrastructure, define job descriptions, design business processes, and track and measure success. These shifts will take time.
And with that in mind, here are three transformations that IT organizations will undergo in 2026 to prepare for an AI future.
1. AI initiatives will transform IT job descriptions
By now, we understand the strong connection between high-quality data and success in AI. This means efficient use of data is paramount for long-term success in both AI and business. Achieving that success requires changing not only the tools that data scientists and engineers use but also the way IT environments are architected, managed and maintained.
My colleague, Simon Robinson, recently highlighted that on-premises data storage infrastructure providers -- such as Dell Technologies, HPE, Hitachi Vantara, Infinidat, NetApp and Pure Storage -- have joined Hammerspace, Vast Data, Weka and other AI-focused storage players in adding more data management platform-like capabilities to their storage portfolios. Most of these additions complement existing tools used by data engineering and operations teams. And as data storage technologies evolve, so too must the responsibilities of their administrators.
The changes in IT architecture responsibility, however, do not stop at data storage. The need for greater silicon diversity and the use of heterogeneous compute environments (CPUs, GPUs and other accelerators) have increased with AI initiatives. Increasing pressure on both power and cooling -- as well as on infrastructure budgets -- is forcing a greater emphasis on improving utilization based on the demands of the specific phase of the AI lifecycle, such as data preparation, training or inference.
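To make the idea concrete, here is a minimal sketch, in Python, of tracking accelerator utilization by lifecycle phase. The telemetry records, node names and the 50% threshold are hypothetical placeholders for illustration, not figures from the article or from any specific monitoring product.

```python
# Minimal sketch: summarizing accelerator utilization by AI lifecycle phase.
# The telemetry records below are hypothetical; in practice they would come
# from whatever monitoring stack the organization already runs.
from collections import defaultdict

# Each record: (lifecycle_phase, node, gpu_utilization_percent)
telemetry = [
    ("preparation", "node-01", 22.0),
    ("preparation", "node-02", 31.5),
    ("training",    "node-03", 88.0),
    ("training",    "node-04", 91.2),
    ("inference",   "node-05", 41.0),
    ("inference",   "node-06", 37.5),
]

def utilization_by_phase(records):
    """Average GPU utilization per lifecycle phase."""
    totals = defaultdict(list)
    for phase, _node, util in records:
        totals[phase].append(util)
    return {phase: sum(vals) / len(vals) for phase, vals in totals.items()}

if __name__ == "__main__":
    for phase, avg in sorted(utilization_by_phase(telemetry).items()):
        flag = "  <- candidate for consolidation" if avg < 50 else ""
        print(f"{phase:<12} avg GPU utilization: {avg:5.1f}%{flag}")
```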
In addition to storage and compute, networking also plays a critical role in AI success. A survey from Omdia, a division of Informa TechTarget, found that 50% of organizations exploring or investing in private AI initiatives expect to need high-performance networking to support those initiatives.
The takeaway from all this is that infrastructure architects and administrators must play an increasingly strategic role when it comes to AI. They'll have to architect, integrate and manage an infrastructure that can optimize data usage and the larger data pipeline, ensuring efficiency across servers, networking and storage.
In other words, given the importance of data to AI, data infrastructure can no longer be deployed in silos. As a result, it can no longer be managed in silos, either. The role of the IT administrator will shift from working project by project to serving as a strategic resource for the continued optimization of the larger data management ecosystem.
2. Demands for greater agility and consolidation will shift IT infrastructure investment priorities
Just as IT responsibilities must evolve, IT architecture procurement and design must evolve as well. Most contemporary IT environments are simply not ready for enterprise AI at scale. According to Omdia research, when organizations make new infrastructure purchases, only 12% actively pursue technologies that deliver a consistent IT experience across hybrid cloud locations. For the sake of data agility, that will need to change.
For years, the primary focus of new infrastructure deployments has been to meet the needs of the specific application environment being served. This approach assumes that if data needs to move, or if demands change, it can be handled with additional tools or services added later.
Given that data moves more regularly and, as a result, requires stronger governance, new infrastructure deployments in the AI era cannot be separated from the data pipeline architecture. Therefore, the ability to improve data agility (the movement of data from one location to another) and infrastructure agility (the ability to adjust or scale the core capabilities of infrastructure while maintaining a consistent management experience) will quickly become a key requirement for new investments. Platforms that deliver these agility benefits will likely provide superior value to AI initiatives.
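As a rough illustration of why data agility rises to the level of a procurement criterion, the sketch below estimates how long it takes to move a dataset between hybrid cloud locations. The dataset sizes, link speeds and efficiency factor are assumptions for illustration, not benchmarks from the article.

```python
# Minimal sketch of transfer-time arithmetic for moving a dataset between
# hybrid cloud locations. All sizes and link speeds are hypothetical.

def transfer_hours(dataset_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Estimate hours to move dataset_tb terabytes over a link of
    link_gbps gigabits per second at the given effective efficiency."""
    bits = dataset_tb * 8 * 1e12           # terabytes -> bits
    effective_bps = link_gbps * 1e9 * efficiency
    return bits / effective_bps / 3600

scenarios = [
    ("on-prem to cloud region, 10 Gbps link", 50, 10),
    ("on-prem to cloud region, 100 Gbps link", 50, 100),
]

for label, size_tb, gbps in scenarios:
    print(f"{label}: ~{transfer_hours(size_tb, gbps):.1f} hours for {size_tb} TB")
```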
Tied to the importance of agility is the increased prioritization of consolidation. Given the complexity, cost and power burden that AI initiatives place on infrastructure, IT leaders will increasingly look to consolidate the number of systems under management. Consolidation has long been a byproduct of modernization: When you invest in the latest, higher-performance storage, network or server systems, you need fewer physical systems to deliver the same results.
But historically, the focus on accelerating deployments and improving time to value has resulted in less-than-ideal utilization levels after deployment. Often, low utilization early on was justified as leaving room for future growth. Given the cost and complexity pressures of AI, low utilization rates are a luxury businesses can no longer afford.
IT infrastructure vendors, especially data storage vendors, have made it much easier to scale infrastructure to meet demands or pay for infrastructure based on usage. These flexible payment options, combined with the performance and agility benefits provided by the latest infrastructure options, greatly enhance the benefits of infrastructure modernization for AI initiatives. That, in turn, should translate into greater consolidation to reduce cost and complexity while also improving data movement agility.
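Here is a minimal sketch of the consolidation arithmetic described above. All of the inputs -- fleet size, relative performance, measured and target utilization -- are hypothetical assumptions; a real sizing exercise would substitute vendor- and workload-specific benchmarks.

```python
# Minimal sketch of consolidation math: how many newer, higher-performance
# systems are needed to carry today's load at a healthier utilization level.
# Every number below is a hypothetical placeholder.
import math

old_systems = 20            # existing systems in the fleet (assumption)
old_perf_per_system = 1.0   # normalized throughput per old system
old_utilization = 0.35      # measured average utilization (assumption)

new_perf_per_system = 4.0   # newer platform, ~4x throughput (assumption)
target_utilization = 0.70   # utilization target after consolidation

# Work actually being done today.
effective_load = old_systems * old_perf_per_system * old_utilization

# New systems needed to carry that load at the target utilization.
new_systems = math.ceil(effective_load / (new_perf_per_system * target_utilization))

print(f"Effective load today: {effective_load:.1f} units")
print(f"New systems required: {new_systems} (down from {old_systems})")
print(f"Consolidation ratio: {old_systems / new_systems:.1f}:1")
```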
3. Budget realities will transform how AI success is measured
How do you measure success? According to MIT's "State of AI in Business 2025" report, 95% of respondents said their organizations are getting zero return from their AI investments. Measuring a direct financial return is not always straightforward. However, with the amount of spending targeted at AI, the ability to measure success and quantify value will quickly become vital to determining whether new initiatives succeed -- if it hasn't already.
For organizations in the early phases of their AI initiatives, the priority metric is often adoption rather than a specific financial measure. While sufficient for the early phases, this is not viable in the long term. Given the complexity of measuring AI success, I expect the majority of AI projects to underwhelm -- not because of failures in infrastructure or implementation, but because budget realities will demand more rigorous definitions of success, and business leaders will start to sour on AI as a result.
Excitement around adoption and training internal teams for the AI era will only last so long. Effectively communicating AI success requires organizations to measure the benefits beyond implementation and tie that measurement to an economic outcome. To do this, teams leading AI initiatives should focus on specific use cases with measurable KPIs, monitor those indicators and be able to adapt the initiative to improve the outcomes.
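As a simple illustration of tying a use-case KPI to an economic outcome, the sketch below nets measured time savings against monthly AI spend. The use cases, hours saved, loaded cost per hour and spend figures are all hypothetical placeholders, not data from the article.

```python
# Minimal sketch of tying a use-case KPI to an economic outcome. All figures
# are hypothetical; real initiatives would substitute their own measured KPIs
# and fully loaded costs.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    hours_saved_per_month: float   # measured KPI (assumption)
    loaded_cost_per_hour: float    # fully loaded labor cost (assumption)
    monthly_ai_spend: float        # infrastructure + licensing + ops (assumption)

    def monthly_return(self) -> float:
        """Net monthly value: labor savings minus AI spend."""
        return self.hours_saved_per_month * self.loaded_cost_per_hour - self.monthly_ai_spend

use_cases = [
    UseCase("support ticket triage", hours_saved_per_month=400,
            loaded_cost_per_hour=60, monthly_ai_spend=18_000),
    UseCase("report drafting", hours_saved_per_month=120,
            loaded_cost_per_hour=60, monthly_ai_spend=9_000),
]

for uc in use_cases:
    ret = uc.monthly_return()
    verdict = "positive" if ret > 0 else "negative"
    print(f"{uc.name}: net {ret:+,.0f} per month ({verdict} return)")
```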
In other words, AI must move from a science project to a real business initiative. For the many organizations still in the science project phase, I expect the bill will come due in 2026.
Scott Sinclair is practice director with Omdia, covering the storage industry.
Omdia is a division of Informa TechTarget. Its analysts have business relationships with technology vendors.