AI hardware and software vendor Cerebras Systems released an updated version of its platform that includes integrated support for the open source TensorFlow and PyTorch machine learning frameworks.
TensorFlow was developed by Google for building and running machine learning workloads. PyTorch, developed by Meta's AI research group, is widely used in deep learning research.
Giving customers a choice
Before the recent update, Cerebras provided only basic support for PyTorch. The Cerebras CS-2 platform with integrated TensorFlow and PyTorch support is designed to give Cerebras' customers the choice of whichever machine learning framework they prefer, the vendor said.
Along with these two options, customers can use Cerebras' CSoft software stack to develop machine learning models quickly, with access to a large number of AI-optimized cores and 40 GB of on-chip memory, according to Cerebras.
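To see what native framework support spares developers, here is a minimal, framework-agnostic sketch (plain Python, purely illustrative and not Cerebras' API) of one gradient descent training loop for a linear model. PyTorch and TensorFlow compute these gradients automatically, and platforms such as CS-2 aim to run that standard framework code without hardware-specific rewrites:

```python
# Illustrative only: hand-rolled gradient descent for y = w * x + b.
# Frameworks like PyTorch and TensorFlow automate the gradient math below.

def train(xs, ys, lr=0.1, epochs=500):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data following y = 2x + 1; w and b should approach 2 and 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = train(xs, ys)
print(f"w={w:.2f}, b={b:.2f}")
```

The appeal of integrated framework support is that model code written this way in PyTorch or TensorFlow can, in principle, target new accelerator hardware unchanged.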
The added support for PyTorch and TensorFlow, revealed on April 13, is an important update for Cerebras and is in line with what the competition is doing, said Alex Norton, a high-performance computing (HPC) analyst at Hyperion Research.
"For any emergent technology platform like Cerebras, or companies like SambaNova, Graphcore or others, the software implementation and ease of use is critical to exploiting the capabilities of the new technology," Norton said.
These vendors should follow Nvidia in developing a strong software ecosystem for accelerator technology, he said.
"Any new hardware company must have a strong and easy-to-use software stack that encompasses the needs of the users from an application perspective," Norton added.
CS-2 enables customers to train models with billions of parameters with its weight streaming technology, Cerebras said.
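The motivation for weight streaming can be checked with back-of-the-envelope arithmetic (an illustrative sketch assuming 2 bytes per parameter, i.e. FP16 weights, and ignoring activations and optimizer state; this is not Cerebras' internal accounting): multi-billion-parameter models quickly exceed the chip's 40 GB of on-chip memory, so weights must be streamed in rather than held resident.

```python
# Back-of-the-envelope memory check (assumption: 2 bytes per parameter).
BYTES_PER_PARAM = 2
ON_CHIP_GB = 40  # WSE-2 on-chip memory

def weight_gb(params_billions):
    """Gigabytes needed just to hold a model's weights."""
    return params_billions * 1e9 * BYTES_PER_PARAM / 1e9

for params in (1, 20, 175):  # 175B matches GPT-3's parameter count
    gb = weight_gb(params)
    verdict = "fits on chip" if gb <= ON_CHIP_GB else "must be streamed"
    print(f"{params}B params -> {gb:.0f} GB of weights ({verdict})")
```

By this rough measure, even GPT-3-scale weights alone need several times the WSE-2's on-chip capacity, which is the gap weight streaming is designed to bridge.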
CS-2, from the startup founded in 2015, is an adequate offering but faces stiff competition, according to Andy Thurai, an analyst at Constellation Research.
While CS-2 can train models with billions of parameters thanks to the vendor's custom AI chip, that doesn't compare to the GPT-3 models from AI research company OpenAI, whose pre-trained models boast 175 billion parameters, Thurai said. GPT-3 models can be fine-tuned with custom data for natural language tasks, making giant model training fast, easy and relatively cheap, he noted.
Another competitor is Nvidia's NeMo, which can train language models with trillions of parameters.
"While this may be a decent [offering] for enterprises looking for alternatives, the competition against Azure and Nvidia makes this somewhat lukewarm," Thurai said.
Cerebras also competes with HPC vendors such as Intel and Dell, which offer similar capabilities in the cloud.
Large computer chip
Cerebras CS-2 is powered by the Cerebras Wafer-Scale Engine 2 (WSE-2), the processor at the heart of the vendor's deep learning computer system. The vendor claims that WSE-2 is the largest computer chip ever built and that a single device can provide enterprises with deep learning compute resources equal to those of a cluster of legacy machines.
WSE-2 is a "mighty competitor to Nvidia's on a one-on-one comparison," Thurai said. The processor boasts 850,000 cores, 2.6 trillion transistors and 40 GB of on-chip memory, with high bandwidth. But "it's only a matter of time until Nvidia GPUs can catch up," he said.
CS-2 is built to address some of the challenges enterprises encounter when training large models on GPUs, such as long setup times. The vendor claims that setting up even the largest models on CS-2 takes a few minutes.
However, in most situations enterprises might not need to set up custom environments this quickly to use this scale of compute, according to Thurai. But "for enterprise customers who need private cloud clusters, this might be worthy of consideration," he said.
While maintaining GPU clusters can be challenging, "Azure and Nvidia have taken steps to provide deep learning HPC environments that either rival or excel" those of competitors, Thurai added.