Nvidia unveils A100 GPU for demanding AI workloads

Designed for demanding AI and data science workloads, Nvidia's new A100 processor is a powerhouse, but it's also power-hungry and runs hot.

Nvidia unveiled its next-generation Ampere GPU architecture and new hardware that will use it for AI- and data science-intensive workloads.

An advance on Nvidia's Volta architecture, released three years ago, Ampere will power the Nvidia A100, a new GPU built specifically for AI training and inference, as well as data analytics, scientific computing and cloud graphics.

The chip and software giant unveiled the new products at its GTC 2020 virtual conference Thursday.

AI super users

Nvidia A100 is "a tremendous improvement as Nvidia's own comparisons with Volta bear out," said Peter Rutten, research director of infrastructure systems, platforms and technologies at IDC.

"For super users, this is an exceptional processor," he said.

The Nvidia A100, with more than 54 billion transistors on an 826 mm² die, is the world's largest 7 nm chip, according to Nvidia. It also introduces third-generation Tensor Cores with TF32 precision, which the vendor said deliver up to 20 times the performance of the previous generation with no code changes, plus a further twofold boost with automatic mixed precision and FP16.
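
As a rough illustration of those claims: the TF32 speedup needs no source changes because frameworks can route eligible FP32 matrix math through the new Tensor Cores automatically, while the extra FP16 boost takes a small, explicit change to the training loop. A minimal sketch of the latter, assuming PyTorch 1.6 or later with its torch.cuda.amp module and a placeholder model:

    import torch

    model = torch.nn.Linear(1024, 1024).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    scaler = torch.cuda.amp.GradScaler()  # rescales the loss so FP16 gradients don't underflow

    data = torch.randn(64, 1024, device="cuda")
    target = torch.randn(64, 1024, device="cuda")

    for _ in range(10):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():   # runs eligible ops in FP16, keeps the rest in FP32
            loss = torch.nn.functional.mse_loss(model(data), target)
        scaler.scale(loss).backward()     # backward pass on the scaled loss
        scaler.step(optimizer)            # unscales gradients, then takes the optimizer step
        scaler.update()                   # adjusts the scale factor for the next iteration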

"The larger number of tensor cores, the different precision levels allowing for different performance scenarios, the partitioning capability, all that is very relevant and important," Rutten said.

But the new chip, with a thermal design power (TDP) of 400 watts, is more power-intensive than Nvidia's Volta GPU chip, the V100, which had a TDP of 350 watts, Rutten noted.

With the A100, Nvidia clearly has supercomputing in mind, or at least highly demanding AI training and inference workloads. Its power draw could make it unattractive to enterprises whose AI jobs don't require that scale of compute.


"Whether this is the kind of capability [with the A100] that you need for a new AI initiative, that I would doubt," Rutten said.

Supercomputing applications

Nvidia also revealed a new product in its DGX line -- the DGX A100, a $200,000 AI supercomputing system comprising eight A100 GPUs. The DGX A100 provides 320 GB of GPU memory for training huge AI datasets and is capable of 5 petaflops of AI performance. Another new product, the DGX SuperPOD, a cluster of 140 DGX A100 systems, can hit 700 petaflops of AI computing power, Nvidia said.
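
Those headline figures hang together arithmetically, as a quick back-of-the-envelope check shows; the 40 GB-per-GPU value below is an assumption implied by the 320 GB total rather than a number from this announcement:

    # Sanity check of the stated DGX A100 and SuperPOD figures
    gpus_per_dgx = 8
    gb_per_gpu = 40                           # assumed: 320 GB / 8 GPUs
    print(gpus_per_dgx * gb_per_gpu, "GB")    # -> 320 GB of GPU memory per DGX A100

    pflops_per_dgx = 5
    systems_per_superpod = 140
    print(systems_per_superpod * pflops_per_dgx, "petaflops")  # -> 700 petaflops per SuperPOD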

As for the Nvidia A100's position in the market, it is "objectively fair to say that there is currently no competing product," Rutten said.

"In this sense, Nvidia has again succeeded in being far ahead of the competition," Rutten said. "Of course, we don't know what's brewing at competing companies, both incumbents and startups, but I don't expect anything competitive to A100 to be launched anytime soon."

Meanwhile, the DGX SuperPOD will likely rank among the most powerful supercomputers available. Nvidia's previous SuperPOD, built on V100s, is on the Top500 list of the world's fastest supercomputers, Rutten noted.

"So, safe to say that a SuperPod with A100s will be listed as well -- and pretty high up," he added.

AI at the edge

Nvidia also added two new products to its EGX edge computing line -- the EGX A100 and the EGX Jetson Xavier NX.

Enterprises can integrate the EGX A100 converged accelerator, built on the Ampere architecture, into their servers to carry out real-time AI or 5G signal processing on up to 200 Gbps of data.

The EGX Jetson Xavier NX, meanwhile, is a tiny, 70 mm by 45 mm supercomputer designed for high-performance computing or AI workloads in edge systems. The device can deliver up to 21 trillion operations per second (TOPS).

For Nvidia, AI at the edge is a critical piece of its market, Rutten said.

"AI inferencing will become a larger market than AI training in the near future," he said. "A lot of inferencing will happen at the edge. Nvidia cannot allow itself to let that market slip away to other vendors."

Yet Rutten questioned whether the core A100 platform is the answer to many enterprises' evolving edge computing needs.

"It could be overkill for an edge solution, except perhaps for very intensive, larger-scale edge deployments," he said.
