The AI hardware and software vendor unveiled the new chip, the Nvidia HGX H200, during a special address at SC23, a supercomputing, network and storage conference in Denver.
The product comes as more enterprises are using generative AI and large language models.
Bigger memory for AI workloads
Those AI workloads require considerable memory, which is one of the key challenges in running AI jobs, according to Jack Gold, founder and analyst at J. Gold Associates.
"A lot of AI workloads are constrained not just by CPUs and GPUs, but also by getting information into and out of the memory," he said. "Adding a high-speed memory bus makes a significant difference in being able to run heavy-duty AI workloads."
The Nvidia HGX H200 is also compatible with HGX H100 systems, enabling Nvidia's customers to use the H200 without redesigning their server systems.
AWS, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first platforms to deploy H200-based instances starting next year, according to Nvidia.
The H200 will be available starting in the second quarter of 2024.
Grace Hopper superchip
Nvidia also revealed that its Grace Hopper GH200 Superchip will power the Jupiter class of exascale supercomputers for AI-driven scientific research. The GH200 offers an NVLink chip-to-chip interconnect that gives both the GPU and CPU access to 624 GB of memory. Depending on the needs of the workload, the chip can dynamically shift power between the CPU and GPU to optimize application performance, according to Nvidia.
With the GH200, Nvidia is looking to expand its market capabilities beyond GPUs, Gold said.
Located in Germany, Jupiter is being built by a consortium of European companies in collaboration with Nvidia. The goal is to accelerate the creation of foundational AI models in climate and weather research, industrial engineering and quantum computing.
Jupiter will feature a new quad Nvidia GH200 Superchip configuration that has a node architecture with 288 Arm Neoverse cores. It will be capable of achieving 16 petaflops of AI performance using up to 2.3 TB of high-speed memory, according to Nvidia.
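The per-node core count follows from the quad configuration: Nvidia's Grace CPU in each GH200 carries 72 Arm Neoverse V2 cores, so a four-chip node totals 288. A minimal sketch of that arithmetic:

```python
# Cores per Grace CPU in a GH200 Superchip (Arm Neoverse V2)
cores_per_grace_cpu = 72
# Jupiter's quad GH200 node configuration
chips_per_node = 4

# 4 x 72 = 288 Arm Neoverse cores per node
total_cores_per_node = chips_per_node * cores_per_grace_cpu
print(total_cores_per_node)
```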
With these updates, Nvidia is going after what AI researchers need, such as trustworthy computing and quantum performance, according to Dan Miller, analyst at Opus Research.
Amid a worldwide shortage of AI chips, "I'm always trying to figure out how a single vendor has sufficient fabrication capacity to keep up with demand," Miller said, referring to Nvidia.
Meanwhile, though some argue that Nvidia possesses the most AI horsepower among the big AI hardware vendors -- including AMD and Intel -- that doesn't necessarily mean it's in the best market position, Gold said.
"You can't just focus on the maximum number here," he said. "It's like the old days, when we used to talk about megahertz. The faster the chips, the better it was. It's not that clear anymore."
Intel has a number of its own AI chips, and AMD is also in the GPU market.
HPE and Nvidia
In other Nvidia developments, the AI vendor revealed that its technology will also power a new HPE supercomputing system for generative AI.
The vendor's AI Enterprise suite will enable HPE customers -- including large enterprises and research and government organizations -- to train and tune AI models and create their own AI applications, according to HPE. The supercomputing system is also powered by GH200 Superchips.
The supercomputing system is expected to be generally available from HPE in December in more than 30 countries.
Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems.