7 storage vendors responding to growing AI demands

AI workloads are beginning to influence the products that storage vendors are offering. These are seven notable examples of storage vendors that are shifting their strategies.

AI initiatives continue to spread rapidly from business to business, putting pressure on enterprise infrastructure in ways that not every company is prepared to handle.

Many storage vendors, for example, are updating their products and services to meet the demands of their customers' AI workloads. Organizations planning to launch their own AI initiatives require storage systems that deliver both capacity and performance, while helping to streamline their AI operations. Let's examine a sample of what some vendors are offering to support their AI-driven customers.

Amazon

Amazon FSx for Lustre is a fully managed file service that offers high-performance, scalable storage powered by Lustre -- an open source file system designed for applications like AI and HPC that require fast and scalable storage.

FSx for Lustre offers SSD storage and caching options that can achieve sub-millisecond latencies and up to 1,200 Gbps of per-client throughput for cloud-based GPU instances when used with Elastic Fabric Adapter and Nvidia GPUDirect Storage. The service helps accelerate data loading, inference operations, model checkpointing and key-value (KV) caching.
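As a rough illustration of what provisioning looks like, the sketch below builds the request parameters for creating an FSx for Lustre file system through boto3, the AWS SDK for Python. The subnet ID, capacity and throughput values are placeholder assumptions, not recommendations; an actual call requires AWS credentials and a valid VPC subnet.

```python
# Hypothetical sketch: request parameters for provisioning an Amazon FSx
# for Lustre file system with boto3. The subnet ID is a placeholder.
import json

params = {
    "FileSystemType": "LUSTRE",
    "StorageCapacity": 1200,  # GiB; persistent deployments start at 1,200
    "StorageType": "SSD",
    "SubnetIds": ["subnet-0123456789abcdef0"],  # placeholder subnet
    "LustreConfiguration": {
        "DeploymentType": "PERSISTENT_2",        # persistent SSD deployment
        "PerUnitStorageThroughput": 250,         # MBps per TiB of storage
    },
}

# The actual provisioning call would look like:
#   import boto3
#   fsx = boto3.client("fsx")
#   response = fsx.create_file_system(**params)
print(json.dumps(params, indent=2))
```

Once the file system is available, clients mount it with the standard Lustre client and can then point training or inference jobs at the mount path like any POSIX file system.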

Dell

Dell's PowerScale all-flash storage systems provide unified file and object storage that supports parallel, multi-protocol access, with seamless integration across Dell's AI Data Factory and AI stacks. PowerScale is part of the Dell AI Data Platform and was developed in close collaboration with Nvidia to provide GPU-ready storage.

Dell's ObjectScale all-flash storage systems can support large-scale data collection and AI model training. The platform is built on an exascale architecture that offers fast object storage and comprehensive APIs for modern AI workloads.

DDN

DDN EXAScaler is a high-performance parallel file system that accelerates AI, HPC and other data-intensive workloads. EXAScaler includes data management and data integrity features, which are available across DDN's appliances and cloud offerings. The platform provides hot node capabilities that accelerate data access by automatically caching data on the local NVMe drives of GPU systems. It also includes hot pools that intelligently move data between high-performance SSDs and large-capacity disks, using automated policy- and API-based data movement.

HPE

HPE's Alletra Storage MP X10000 is an object storage platform that provides enterprise-grade storage for data-intensive workloads such as AI. The platform is built on a software-defined, disaggregated architecture that can scale from terabytes to exabytes. It uses all-flash storage to deliver both performance and capacity, making it possible to support generative AI and large language models (LLMs). The platform has been built to integrate with both on-premises and cloud-native environments and can be managed through the HPE GreenLake cloud.

NetApp

NetApp's AIPod is a converged infrastructure stack that combines Lenovo servers, Nvidia DGX BasePOD and NetApp ONTAP all-flash storage into a consolidated system. AIPod provides a unified hybrid data architecture that can handle large and diverse data sets across cloud and on-premises environments. It supports file, block and object protocols and can integrate with MLOps platforms and internal processes.

AIPod is supported by the Nvidia AI Enterprise software stack, NetApp BlueXP, NetApp AI Control Plane and NetApp DataOps Toolkit for comprehensive MLOps integration.

Pure Storage

Pure has integrated the Nvidia AI Data Platform reference design into its FlashBlade platform. FlashBlade taps into Nvidia's accelerated computing, networking and AI Enterprise software, making it possible to achieve the speeds needed for AI reasoning.

Pure has also launched its FlashBlade//EXA system, which can power the entire AI pipeline. The system offers high-performance object storage that scales from petabytes to exabytes and delivers more than 10 TB/s of throughput.

WekaIO

WEKA NeuralMesh is a software-defined storage system designed for AI and data-intensive workloads at scale. NeuralMesh provides container-native storage services built on a microservices architecture. The platform includes a fully distributed parallel file system that scales linearly and becomes more efficient as the system expands. It runs on standard x86 infrastructure and can operate across on-premises, cloud and hybrid cloud environments. It offers a single unified system that supports use cases such as AI inference and agentic AI, as well as AIOps and MLOps.

Other storage vendors

These are by no means the only storage-related products that have been designed to meet the challenges of today's AI workloads. Many of these vendors also offer other products that target AI, and other vendors have introduced their own products, including:

  • Atlas
  • Google
  • Kioxia
  • Micron
  • Microsoft
  • Nutanix
  • Samsung
  • Scality
  • Seagate
  • Silicon Motion
  • Supermicro
  • VAST
  • Vultr
  • Wasabi

AI storage is a dynamic and growing industry that continues to evolve as more organizations embrace AI in their own data centers or in the cloud. Organizations adopting AI workloads need to make sure their tools and infrastructure can meet the demands of AI and other latency-sensitive applications, and they should take note of the storage vendors providing products designed for exactly that.

Robert Sheldon is a freelance technology writer. He has written numerous books, articles and training materials on a wide range of topics, including big data, generative AI, 5D memory crystals, the dark web and the 11th dimension.
