How AI workloads are reshaping storage vendor strategies
Storage vendors are adapting their strategies to meet the demands of AI workloads, a shift that is reshaping the technologies behind modern storage infrastructure.
Published: 14 Jan 2026
In November 2025, Wasabi Technologies introduced Wasabi Fire, a high-performance cloud storage service designed to meet the demands of AI workloads and other latency-sensitive applications.
Wasabi Fire is based on all-flash storage that uses NVM Express (NVMe) to deliver the high throughput and low latency required by AI workloads for training and inference. Wasabi Fire also provides S3-compatible object storage, offering a flexible, scalable and cost-effective alternative to other forms of cloud storage.
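Because Wasabi Fire speaks the S3 API, existing S3 tooling should work against it with little more than an endpoint change. As a minimal sketch, using Python's boto3 library with placeholder credentials and a hypothetical endpoint URL (the actual Wasabi Fire endpoint may differ; consult Wasabi's documentation):

```python
import boto3

# Hypothetical endpoint and credentials for illustration only.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-wasabi-fire.com",  # placeholder
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Standard S3 calls work unchanged against any S3-compatible service.
s3.put_object(Bucket="training-data", Key="shards/shard-0001.bin", Body=b"...")
obj = s3.get_object(Bucket="training-data", Key="shards/shard-0001.bin")
print(obj["ContentLength"], "bytes")
```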
The introduction of Wasabi Fire reflects an important trend in enterprise storage, in which vendors update or expand their offerings to better accommodate the growing demand for systems that deliver both the performance and scalability needed to support complex AI workloads. Wasabi, WekaIO, DDN, Pure Storage and Dell Technologies are just a few of the many companies taking steps to accommodate their AI-driven customers.
AI workloads require NVMe and other advanced technologies
Organizations that embrace AI need storage strategies that go beyond the traditional concept of storage as nothing more than passive backend capacity sitting in data centers.
AI-driven enterprises require storage systems that not only provide the necessary IOPS and throughput, but also support high concurrency and durability, include intelligent management capabilities and integrate with AI pipelines and frameworks. These storage systems must also be flexible enough to handle fluctuating capacities and distributed workloads that span hybrid and multi-cloud environments.
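One practical way to validate such requirements is to benchmark a candidate volume before committing a workload to it. The sketch below, with assumed paths and parameters, drives the widely used fio benchmark from Python and reads back the measured 4K random-read IOPS:

```python
# Hypothetical benchmark run; the test file path and job parameters are
# assumptions chosen for illustration, not recommended settings.
import json
import subprocess

result = subprocess.run(
    ["fio", "--name=randread", "--filename=/mnt/ai-scratch/testfile",
     "--rw=randread", "--bs=4k", "--iodepth=64", "--numjobs=4",
     "--size=1G", "--runtime=30", "--time_based", "--direct=1",
     "--group_reporting", "--output-format=json"],
    capture_output=True, check=True, text=True,
)

stats = json.loads(result.stdout)
iops = stats["jobs"][0]["read"]["iops"]
print(f"4k random read: {iops:,.0f} IOPS")
```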
To meet these requirements, storage vendors rely on a variety of technologies and techniques that enable their products and services to meet the demands of AI workloads. One particularly important technology is NVMe and, by extension, NVMe-oF.
NVMe
NVMe is a data transfer protocol designed to address the limitations of older protocols that can't take full advantage of an SSD's performance capabilities. NVMe supports highly parallel data transfers, increasing throughput and reducing I/O overhead. It also enables fast communication between the CPU and storage by using the high-speed PCIe bus, which provides lower latency and higher data transfer rates than other interface types.
NVMe-oF extends NVMe across network fabrics, making it possible to communicate with remote SSDs over Ethernet, InfiniBand or Fibre Channel (FC) networks. Together, NVMe and NVMe-oF significantly outperform older transfer protocols such as SATA and SAS, delivering lower latency and higher IOPS.
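On Linux, local NVMe controllers are visible under sysfs, and the standard nvme-cli utility can attach remote NVMe-oF namespaces. The following sketch, with placeholder addresses and namespace qualified names (NQNs), enumerates local controllers and shows what an NVMe/TCP fabric connection might look like:

```python
import pathlib
import subprocess

# Enumerate local NVMe controllers via sysfs (Linux).
for ctrl in sorted(pathlib.Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()
    transport = (ctrl / "transport").read_text().strip()  # e.g. pcie, tcp, rdma, fc
    print(f"{ctrl.name}: {model} ({transport})")

# Attach a remote namespace over NVMe/TCP with nvme-cli (requires root).
# The address and NQN below are placeholders for illustration.
subprocess.run(
    ["nvme", "connect", "--transport=tcp", "--traddr=192.0.2.10",
     "--trsvcid=4420", "--nqn=nqn.2014-08.org.example:storage:array1"],
    check=True,
)
```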
SSDs and PCIe
In addition to NVMe, AI requires SSDs that can deliver the level of performance, availability, durability and scalability necessary to support the various types of workloads and data operations that occur throughout the AI lifecycle. All-flash drives might not be necessary for every phase of the lifecycle, but they're needed for operations that require high throughput and extremely low latency. At the same time, the SSDs must be able to deliver the capacity needed to accommodate massive data sets, without compromising performance or durability.
Vendors are also adopting other technologies in their storage products to deliver greater performance. For example, a number of vendors now offer SSDs based on PCIe 5.0, and a few are already moving toward PCIe 6.0. Some storage vendors have also added support for Nvidia GPUDirect, a communication technology that enables network adapters and storage drives to read and write data directly to and from GPU memory.
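From Python, one way to experiment with this direct storage-to-GPU path is the RAPIDS KvikIO library, which uses Nvidia's cuFile API when GPUDirect Storage is available and falls back to a compatibility path when it is not. A minimal sketch, assuming a CUDA-capable system with kvikio and cupy installed and an assumed NVMe mount point:

```python
import cupy
import kvikio

buf = cupy.arange(1_000_000, dtype=cupy.float32)  # data resident in GPU memory

# Write from GPU memory to storage; with GPUDirect Storage enabled, the
# transfer can bypass a CPU bounce buffer.
with kvikio.CuFile("/mnt/nvme/sample.bin", "w") as f:
    f.write(buf)

# Read from storage directly back into GPU memory.
out = cupy.empty_like(buf)
with kvikio.CuFile("/mnt/nvme/sample.bin", "r") as f:
    f.read(out)

assert bool((buf == out).all())
```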
Intelligent data and storage management
Intelligent data and storage management have also become essential to effectively support AI workloads. For instance, a storage system might automate provisioning, caching or data tiering, or it might implement policy-based performance tuning. Some storage systems support hybrid and multi-cloud environments, moving data seamlessly across platforms, while others provide a distributed file system that delivers high performance and scalability.
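Policy-based tiering is often exposed through APIs rather than manual data movement. For S3-compatible object stores that support lifecycle rules (not all do), a policy like the sketch below, with assumed bucket, prefix and storage-class names, would automatically transition aging training data to a colder tier:

```python
import boto3

s3 = boto3.client("s3")  # endpoint and credentials omitted for brevity

# Assumed bucket and storage-class names; support for lifecycle transitions
# varies across S3-compatible services.
s3.put_bucket_lifecycle_configuration(
    Bucket="ai-datasets",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-cold-shards",
                "Filter": {"Prefix": "raw/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```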
Integration capabilities
AI applications also benefit from storage systems that integrate with AI pipelines and frameworks. Some systems might incorporate software-defined storage or disaggregated architectures. In others, AI capabilities are integrated into the storage platforms themselves to better manage and optimize resources and operations, including data placement.
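As a concrete illustration of that kind of integration, the hypothetical sketch below streams training shards straight from an S3-compatible bucket into a PyTorch data loader, so storage feeds the framework without an intermediate copy to local disk. The bucket and prefix names are assumptions, and the shards are assumed to be tensors saved with torch.save:

```python
import io

import boto3
import torch
from torch.utils.data import DataLoader, IterableDataset


class S3ShardDataset(IterableDataset):
    """Streams torch-serialized shards from an S3-compatible bucket."""

    def __init__(self, bucket: str, prefix: str):
        self.bucket, self.prefix = bucket, prefix

    def __iter__(self):
        # Create the client inside __iter__ so each DataLoader worker
        # process gets its own connection.
        s3 = boto3.client("s3")
        pages = s3.get_paginator("list_objects_v2").paginate(
            Bucket=self.bucket, Prefix=self.prefix
        )
        for page in pages:
            for obj in page.get("Contents", []):
                body = s3.get_object(Bucket=self.bucket, Key=obj["Key"])["Body"].read()
                yield torch.load(io.BytesIO(body))


loader = DataLoader(S3ShardDataset("training-data", "shards/"),
                    batch_size=None, num_workers=4)
```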
AI-driven enterprises must approach storage in a new way
Prior to AI's growing popularity, organizations often planned their storage systems around cost per terabyte, treating storage as little more than a backend commodity in their data centers rather than as an integral part of the application workflow. Deploying storage infrastructure was often a reactive process: resources were adjusted only when application workflows demanded it, and those workflows tended to be stable and predictable in terms of capacity and performance requirements.
Enterprise AI workloads pose a challenge when using traditional storage planning and implementation approaches. These workloads require storage that can deliver the high throughput and low latency necessary to handle operations like checkpointing, model training or model inference. Storage systems that support AI must be flexible and robust enough to accommodate varying demands throughout the AI lifecycle. Storage systems should also support parallel operations and be able to manage massive data sets, which can include both structured and unstructured data.
When traditional storage is used to support AI applications, the systems are often unable to keep up with workload demands. The result can be severe data bottlenecks, which in turn leave GPUs underutilized or sitting idle for extended periods. Not only can this hurt the compute budget and ROI, but it can also degrade AI performance, slowing development and deployment cycles and delaying results.
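A back-of-the-envelope calculation shows why this matters. Every figure below is an illustrative assumption, not a measurement, but the arithmetic holds for any values: a synchronous checkpoint written through a slow storage tier translates directly into idle GPU-hours and wasted spend.

```python
# All figures are illustrative assumptions, not measurements.
checkpoint_gb = 500    # checkpoint size for a large model (assumed)
storage_gbps = 2.0     # sustained write bandwidth of the storage tier (assumed)
gpus = 64              # GPUs that stall while a synchronous checkpoint is written
gpu_cost_hr = 3.0      # cost per GPU-hour in dollars (assumed)
ckpts_per_day = 12     # checkpoint frequency (assumed)

stall_s = checkpoint_gb / storage_gbps               # seconds stalled per checkpoint
idle_gpu_hours = stall_s / 3600 * gpus * ckpts_per_day

print(f"{stall_s:.0f} s stall per checkpoint, "
      f"{idle_gpu_hours:.1f} idle GPU-hours/day "
      f"(~${idle_gpu_hours * gpu_cost_hr:.0f}/day)")
# Output: 250 s stall per checkpoint, 53.3 idle GPU-hours/day (~$160/day)
```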
AI-driven organizations must shift their thinking about storage, treating their storage systems as first-class citizens in the AI infrastructure stack, along with GPUs, CPUs and other critical resources. To this end, organizations need to think of their AI storage as a strategic asset.
At the same time, organizations should be prepared for the changes and challenges that will come with supporting a robust AI storage strategy in the future. For example, AI will likely continue its trajectory toward a greater reliance on inference, which requires high IOPS and ultra-low latency to support real-time operations.
Organizations should be ready to incorporate new technologies as they become available. They will also have to contend with increasing demand for SSDs, which could lead to shortages or higher prices that affect future planning.
In general, the AI storage platforms of the future are expected to be more intelligent, automated and AI-aware, as storage becomes an integral player in the AI workflow. Only by embracing AI-ready storage offerings can organizations hope to succeed in their AI initiatives and maintain a competitive advantage.
Robert Sheldon is a freelance technology writer. He has written numerous books, articles and training materials on a wide range of topics, including big data, generative AI, 5D memory crystals, the dark web and the 11th dimension.