Is hyperscale NAS the future of storage in the AI era?

Storage for AI workloads requires both high capacity and high performance. Some vendors have started to tackle the challenge, for example with global file system technology.

The potential for AI to deliver transformational benefits has caught the imagination of the world, but the technology stack supporting AI must also evolve to fully deliver on its promise as a game changer. This will require IT to overcome some tough, longstanding architectural challenges, such as in the storage domain.

The evolution of fast compute power through GPUs opened the door to the AI revolution, but at scale, GPUs must be fed enormous amounts of data quickly. Some leading organizations already find that traditional storage architectures simply can't keep up, which leaves expensive compute sitting idle. Workarounds exist, but they are often expensive, complex, limited in functionality and difficult to scale.

This isn't a future problem that can be kicked down the road, either; according to recent research from TechTarget's Enterprise Strategy Group, AI will be the No. 1 workload driving new storage infrastructure initiatives in 2024. Some organizations are already working to solve these challenges, and as the potential of AI continues to be tapped, many more will face them.

Hammerspace tries to nail down storage for AI

Hammerspace recently unveiled an alternative approach to the storage performance challenge. The California-based startup has developed global file system technology it initially aimed at unifying vast and disparate data sources into a single logical layer. It has now extended this capability with a performance dimension it says can meet the compute and storage demands of AI at scale. It calls the product Hyperscale NAS.

At a high level, Hyperscale NAS is a software-only approach that uses standard protocols -- chiefly NFS -- but implemented in a new way that Hammerspace says boosts performance and increases scale. The vendor claims it enables companies to use their existing storage hardware infrastructure.

[Figure: Hammerspace's Hyperscale NAS separates metadata from the primary data path between compute and the customer's storage namespace.]

Why did Hammerspace base its approach on an existing technology? Isn't NFS the very antithesis of a modern architecture? Quite the opposite, the vendor says.

Indeed, the vendor points to substantial limitations of object storage, which has evolved to store large volumes of unstructured data at cloud scale but is not natively understood by the OS: applications must reach it through a dedicated API rather than the file system. By contrast, NFS is a POSIX-compliant, application-level protocol, making it an ideal match for the demands of modern approaches such as AI and containers. In addition, the NFS client ships with Linux, so it is already everywhere. In other words, customers don't need to rearchitect their environments to support it.
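To make that contrast concrete, consider a minimal sketch; the mount point, bucket and key names below are hypothetical. Data on an NFS mount is just part of the POSIX file system tree, so unmodified applications read it with ordinary file I/O, whereas the same bytes in an S3-style object store must be fetched through a client SDK such as boto3.

```python
import boto3

# POSIX access: an NFS export mounted at /mnt/data (hypothetical path)
# behaves like any local directory, so existing applications and AI
# frameworks can read it with ordinary file I/O -- no special SDK.
with open("/mnt/data/training/sample-00001.bin", "rb") as f:
    record = f.read()

# Object access: the same data in an S3-style store is invisible to the
# OS file system and must be fetched through an application-level API.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="training-data", Key="sample-00001.bin")
record = obj["Body"].read()
```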

The challenge for NFS has been to overcome the performance bottlenecks that organizations encounter at scale. Along with improvements to the NFS protocol itself -- such as parallel NFS (pNFS) support for scale-out storage architectures -- Hammerspace has addressed these issues in its implementation in two ways:

  • It separates the NFS control path from the data path, so that file system metadata can be stored, managed and -- crucially -- scaled separately. This separation also enables the system's global reach.
  • It has simplified the data path by reducing the number of hops from compute to storage, from nine to four.

This combination enables Hammerspace to deliver supercomputer-like storage performance to the enterprise, according to the vendor.
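A toy sketch of that first point, assuming nothing about Hammerspace's actual implementation -- the class and field names below are invented. It illustrates the general pNFS-style pattern: a client resolves a path to a layout through a metadata service (the control path), then reads the bytes directly from the storage node the layout names (the data path), so the two can be scaled independently.

```python
from dataclasses import dataclass

@dataclass
class Layout:
    """Tells a client where a file's bytes actually live."""
    data_server: str  # address of the storage node holding the data
    object_id: str    # identifier of the file's data on that node

class MetadataService:
    """Control path: resolves names to layouts; never touches file data."""
    def __init__(self):
        self._layouts = {"/data/model.ckpt": Layout("nfs-data-01", "obj-42")}

    def get_layout(self, path: str) -> Layout:
        return self._layouts[path]

class DataServer:
    """Data path: serves raw bytes once the client holds a layout."""
    def read(self, object_id: str, offset: int, length: int) -> bytes:
        return b"\x00" * length  # stand-in for a real block read

def client_read(mds: MetadataService, path: str, offset: int, length: int) -> bytes:
    layout = mds.get_layout(path)  # one control-path round trip
    server = DataServer()          # in reality, selected via layout.data_server
    return server.read(layout.object_id, offset, length)  # direct data-path I/O

data = client_read(MetadataService(), "/data/model.ckpt", 0, 1024)
print(f"read {len(data)} bytes without further metadata traffic")
```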

Hammerspace says it can do all of this while preserving enterprise-grade NAS data services, such as snapshots, clones and replication. This sets it apart from the parallel file system technologies typically used in high-performance computing environments. It can also sit in front of existing storage systems from any vendor, including scale-out NAS, where it can add support for new AI-friendly interfaces such as Nvidia's GPUDirect Storage.
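For a sense of what a GPUDirect Storage read looks like from application code, here is a hedged sketch using RAPIDS KvikIO, the Python bindings for Nvidia's cuFile library. It assumes a CUDA GPU, a GDS-capable file system and a hypothetical checkpoint path; where GDS isn't available, KvikIO transparently falls back to a conventional read through host memory.

```python
import cupy
import kvikio

# Destination buffer allocated directly in GPU memory.
buf = cupy.empty(1024 * 1024, dtype=cupy.uint8)

# With GPUDirect Storage, the read DMAs file bytes straight into the GPU,
# bypassing the CPU bounce buffer. The path below is hypothetical.
f = kvikio.CuFile("/mnt/hyperscale/checkpoints/step-001.bin", "r")
nbytes = f.read(buf)
f.close()

print(f"read {nbytes} bytes into GPU memory")
```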

What's on the AI horizon

It's early days for the technology, and not all organizations face such extreme storage challenges today, but the market is evolving quickly. Hammerspace is not the only game in town: Companies such as Vast Data and Weka are eyeing a similar opportunity, albeit with different approaches.

IT is only at the dawn of the AI era. As machines generate more data, volumes will explode further. With humans increasingly out of the loop and GPUs only getting faster, current architectures could quickly be overwhelmed. Against this backdrop, Hammerspace should appeal to a variety of organizations looking to tap the potential of AI at scale.

Simon Robinson is a principal analyst at TechTarget's Enterprise Strategy Group who focuses on existing and emerging storage and hyperconverged infrastructure technologies, and on related data- and storage-management products and services used by enterprises and service providers.

Enterprise Strategy Group is a division of TechTarget. Its analysts have business relationships with technology vendors.
