Weka connects storage, AI with new NeuralMesh architecture
Weka's Data Platform adapts to AI data center needs with new NeuralMesh architecture, which can deploy in hybrid-cloud setups, self-heal and prioritize storage paths to the GPUs.
Weka is revamping its parallel file system storage software for AI workloads used by its most demanding customers.
NeuralMesh, the latest update to Weka's data platform offering, rearchitects Weka's parallel file system into containerized microservices for a software-defined storage system that can run on-premises, in cloud environments and within hyperconverged infrastructure.
"If your design patterns change tomorrow, we can handle it," said Ajay Singh, chief product officer at Weka. "We're not selling big iron or big boxes."
The NeuralMesh update is available in limited release for customers upon request, and it specifically targets cloud providers and AI services. The software has specific hardware requirements, with supported hardware available through certified partners, according to Singh.
The new architecture follows how many hyperscaler clouds, like AWS or Google Cloud, offer storage, said Steve McDowell, founder and principal analyst at NAND Research.
For Weka, however, this offering is specifically for customers who want that capability for their own workloads, he said.
"This is similar to how cloud service providers do storage," McDowell said. "They're reducing [the platform] down to base functions and reassembling as needed."
Storage for AI
The NeuralMesh platform's components are designed primarily for Weka's AI customers such as Nebius and Stability AI, Singh said.
Speed and throughput are maintained with direct paths between the compute and stored data, avoiding other services and potential slowdowns, he said. The software forgoes storage controllers, supporting scale-out nodes and multiprotocol access, including Portable Operating System Interface (POSIX), NFS, SMB and S3.
Management features for the platform include operation visibility, resource monitoring and inefficiency detection available through dashboards, APIs and alert services, according to Weka. The platform offers other industry-standard features for software-defined storage, including a global namespace, snapshot services and security capabilities such as access control and encryption.
Competitors such as DDN, Hammerspace and Vast Data all offer parallel file systems that support AI workloads, but Weka's choice to package services within containers enables customers to burst workloads into the cloud or other data centers as needed, McDowell said.
"What they're addressing here is scalability and SLAs," he said. "The efficiencies are for really large-scale training and inference pipelines."
Weka's AI platform and those offered by competitors should appeal to a fairly small number of customers, said Marc Staimer, founder and president of Dragon Slayer Consulting.
Most enterprises are using smaller models or pretrained AI services to create specific agents or industry-specific applications, and aren't creating large language models wholesale, he said.
"The market for what they announced is pretty small [and] niche," Staimer said. "You need to have a really high throughput speed for LLM training, but most enterprises aren't talking 100 billion parameter training."
Tim McCarthy is a news writer for Informa TechTarget covering cloud and data storage.