
SambaNova AI launches new chip: the SN40L

The AI hardware and software provider's new chip gives enterprises a full-stack approach to training LLMs. It also makes it possible for customers to train multimodal models.

AI hardware and software vendor SambaNova Systems has a new AI chip: the SN40L. Introduced on Tuesday, SN40L will power SambaNova's full-stack large language model platform, the SambaNova Suite, which the vendor introduced in March.

SN40L was manufactured by Taiwan Semiconductor Manufacturing Company (TSMC) for SambaNova. It can serve an LLM of up to 5 trillion parameters, with sequence lengths of 256,000-plus tokens possible on a single system node, according to the vendor.

Privacy and full stack

SN40L enables customers to run the largest LLMs with high performance for training and inference without sacrificing model accuracy or data privacy, according to SambaNova.

Its large memory capacity provides the type of privacy enterprises want, according to R. "Ray" Wang, founder and analyst at Constellation Research.

"Enterprises want private GPT capabilities for security reasons, but they also want to scale," Wang said. "To be able to do it on your own premises meets those two fundamental requirements."

SambaNova's full-stack approach to LLMs also differs from competitors like Nvidia, Wang said. Nvidia provides chips but also offers a separate AI software platform.

"Our clients are choosing SambaNova because it's one turnkey solution," he added. "That's the big driver."

[Image: SambaNova Systems introduces new AI chip to power LLM platform.]

SambaNova's approach also comes as enterprises shift from having ideas about how they will use generative AI, to implementing those ideas, according to Gartner analyst Chirag Dekate.

"As enterprises start building generative AI applications, they are taking a serious look at what it takes to train and what it takes to deliver some of these models," he said.

While Nvidia is benefiting most from that shift through its partnerships with major vendors such as Google, HP, Dell and Lenovo, other AI vendors like SambaNova have also started to take advantage.

A different environment for multimodal AI

Unlike Nvidia's GPU-based environment, SambaNova offers an environment built on its reconfigurable dataflow unit (RDU).

This kind of environment is beneficial when enterprises start dealing with multimodal AI, a type of AI model that works with multiple kinds of inputs and outputs, such as text, images and audio.

"Anytime you have a multimodal AI problem, it usually translates to a variable compute workload," Dekate said. This imbalance in workload structure best fits RDU architectures or other application specific integrated circuit chips, such as Google TPUs.

While the current generative AI market is still mostly single-modal -- text to text or text to image -- the future of generative AI is moving toward multimodal models.


"We are very much on a journey where today, we are going from a task-specific model ecosystems towards in the near term, going forward, multimodal AI sort of ecosystems or even composite AI ecosystems," Dekate added.

For enterprises watching the shift, it's essential to start planning not just for today's workloads, but also for the workloads that will emerge in the future.

"They should start designing for an AI in the future where you're not just going to be training models more frequently but also deploying and inferring models more aggressively," Dekate said. "Past architectures and repeating past mistakes might not necessarily be the right way forward."

SambaNova's SN40L is now available in the cloud. For customers who want it on premises, SambaNova will ship the chips in November.

SambaNova also revealed that new models -- including Meta's Llama 2 7B and 70B variants, BLOOM 176B and new embedding models for vector-based retrieval -- are now available in SambaNova Suite.
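The article does not describe SambaNova's embedding or retrieval APIs, so the sketch below is only a generic illustration of what vector-based retrieval means: documents and a query are turned into vectors, and documents are ranked by cosine similarity to the query. The embed function here is a hypothetical stand-in (a toy trigram hash) for whatever embedding model a real deployment would call.

```python
# Minimal, vendor-agnostic sketch of vector-based retrieval.
# embed() is a placeholder: in practice it would call an embedding model;
# here it hashes character trigrams into a fixed-size vector so the sketch runs.
import numpy as np

DIM = 256

def embed(text: str) -> np.ndarray:
    """Toy embedding: hash character trigrams into a unit-normalized DIM-dim vector."""
    vec = np.zeros(DIM)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "SambaNova introduces the SN40L chip for large language models.",
    "Multimodal AI combines text, image and audio inputs.",
    "Vector-based retrieval ranks documents by embedding similarity.",
]
doc_vectors = np.stack([embed(d) for d in documents])

query = "How does vector retrieval rank documents?"
scores = doc_vectors @ embed(query)  # dot product of unit vectors = cosine similarity
best = int(np.argmax(scores))
print(f"Best match: {documents[best]!r} (score={scores[best]:.3f})")
```

In a production setting, the toy embed function would be replaced by calls to an embedding model such as those SambaNova says are now available in its suite, and the documents' vectors would typically live in a vector database rather than an in-memory array.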

Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems.
