Don't get caught up in the neural processing unit hype

To differentiate new chip technology from existing GPUs, mobile tech companies (along with software titans) are slapping a 'neural' label on their products. Analysts say to be skeptical.

The surge of interest in AI is giving rise to new companies, new products, new job titles -- and a new tech vocabulary to go along with them.

One example is the neural processing unit, or neural engine, terms mobile tech companies have used to describe purpose-built AI processors. But, according to analysts, the neural processing unit label -- or, for that matter, any "neural" label -- should be viewed with skepticism.

Competing against GPUs

Although the term is used by marketers and the media alike, the definition of neural processing unit (NPU) is imprecise and immature. David Schatsky, managing director at Deloitte LLP, said there is no single definition of an NPU just yet. Instead, "it's a processor architecture designed to make machine learning more efficient -- to happen faster and with lower power consumption," he said.

Indeed, new processor architectures associated with terms like neural processing unit are useful when tackling AI algorithms because training and running neural networks is computationally demanding. CPUs, which perform mathematical calculations sequentially, are ill-equipped to handle such demands efficiently.
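
To see the scale of the problem, consider that a neural network's forward pass is dominated by matrix multiplications built from millions of independent multiply-accumulate operations. Here is a minimal Python sketch -- the NumPy library and the layer sizes are illustrative choices, not details from any vendor mentioned here -- counting the work in a single dense layer:

```python
import numpy as np

# Minimal sketch: the forward pass of one dense layer. Sizes are illustrative.
batch, d_in, d_out = 64, 1024, 1024
x = np.random.randn(batch, d_in).astype(np.float32)   # input activations
w = np.random.randn(d_in, d_out).astype(np.float32)   # layer weights

y = np.maximum(x @ w, 0.0)  # matrix multiply followed by a ReLU activation

# Roughly 67 million multiply-accumulates for this one layer; a full network
# stacks many such layers, and training repeats them many times over.
print(f"MACs in this layer: {batch * d_in * d_out:,}")
```

Nearly all of those operations are independent of one another, which is exactly the parallelism a sequential CPU fails to exploit.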

That limitation paved the way for graphics processing units (GPUs), chips that use parallel processing to perform many mathematical calculations at once. Originally developed to render video game images, GPUs have become the de facto workhorse for machine learning algorithms. That makes the GPU market -- dominated by Nvidia and AMD -- ripe for competition.
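
As a rough illustration of what that offload looks like in code -- a minimal sketch using PyTorch, an assumption of this example rather than anything named by the analysts, and assuming a CUDA-capable GPU is present:

```python
import torch

# Minimal sketch: the same dense-layer math, offloaded to a GPU if one exists.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(64, 1024, device=device)
w = torch.randn(1024, 1024, device=device)

# On a GPU, the millions of multiply-accumulates in this matrix multiply are
# spread across thousands of cores in parallel instead of running one at a time.
y = torch.relu(x @ w)
print(y.device)
```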

"All of the other semiconductor vendors are looking for opportunities to pitch products into the market to compete with GPUs," said Alan Priestley, research director at Gartner.

What is a neural processing unit?

To differentiate themselves from Nvidia and AMD, Priestley said companies are using "any combination of 'N,' 'P' and 'U' to qualify that these chips are targeted to execute AI algorithms and compete against the GPUs already being used in this sector of the market."

That includes wireless technology vendors like Qualcomm, Huawei Technologies and Apple, all of which use NPU or some variation of the term to describe some of their latest tech. Huawei's Kirin 970 chip uses a neural processing unit, while Qualcomm's Snapdragon 845 mobile platform uses a neural processing engine. With the Apple A11 Bionic processor, it's a neural engine that powers the machine learning algorithms.

And therein lies some of the confusion. Unlike a GPU or a CPU, a neural processing unit or a neural engine doesn't refer to standardized hardware or even to specific AI functionality. Instead, analysts suggested that processing data in parallel, and doing so at the edge, are the commonalities that tie these terms together.

"There's a lot of marketing hype around this stuff at the moment," Priestley said. "To run AI, you need to process lots of bits of data in parallel. That's what's being added to these chips -- the ability to do massively parallel processing."

In fact, Priestley said he wouldn't go so far as to label technology such as the Apple A11 Bionic processor as a neural processing unit. "Some would call this an [intellectual property] block, which is just that part of a chip is being added to support this," he said. "It's not the only purpose of the chip. The chip's main purpose is to handle the phone's stuff. It's just got this highly parallel processing element added to the chip to support the AI-type software that's running."
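
On a phone, that "AI-type software" usually means on-device inference through a mobile runtime, with the parallel block handling the heavy math. Here is a hedged sketch of such a workload, using TensorFlow Lite as one plausible runtime -- the library choice and the model path are illustrative assumptions, not details from the article:

```python
import numpy as np
import tensorflow as tf

# Hedged sketch: on-device inference with TensorFlow Lite -- the kind of
# workload a phone's NPU or neural engine is meant to accelerate.
# "model.tflite" is a placeholder path, not a real file.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy input and run a single inference step on the device.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```

On hardware with a dedicated AI block, runtimes like this can delegate the model's math to that block rather than running it on the CPU.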

What's in a name?

So, while the NPUs in edge devices today aren't next-gen GPUs, there is a race to build chips that are.

Google has already thrown its hat in the AI hardware ring with its TPU, or tensor processing unit. TPUs are chips specifically designed to process TensorFlow, a popular open source deep learning software library. Earlier this year, Google made TPUs available in its cloud environment. But whether a TPU rivals a GPU "depends on the workload," said Mike Gualtieri, analyst at Forrester.
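
For readers curious what "designed to process TensorFlow" looks like in practice, here is a hedged sketch of pointing TensorFlow at a Cloud TPU; it assumes a Google Cloud environment with a TPU attached, and the tiny Keras model is purely illustrative:

```python
import tensorflow as tf

# Hedged sketch: connecting TensorFlow to a Cloud TPU. Assumes this runs in a
# Google Cloud environment that supplies the TPU address (hence tpu="").
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Any model built in this scope has its math compiled for the TPU cores.
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
```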

Gualtieri said established vendors like Amazon and Microsoft are also rumored to be developing AI chips, citing this as evidence of "how important deep learning is to these internet giants." They'll be going head-to-head with a slew of AI hardware startups -- a recent New York Times article reported the startups have collectively raised $1.5 billion in the last year.

Gualtieri, who called Graphcore the most prominent startup of the bunch, said they're all trying to architect chips that process AI algorithms more efficiently -- for example, by finding ways to get data to a processor faster or by designing a chip that can run multiple neural networks at the same time.

And the emphasis on AI hardware is helping to muddy the vernacular waters even more. Graphcore calls its chip, which is specifically designed to process neural networks, an intelligent processing unit, or IPU. And then there's Wave Computing, which calls its AI chip a DPU, or dataflow processing unit. Intel released its first NNP, or neural network processor, last fall.

Regardless of advances in chip development or how companies are labeling their AI tech these days, Gualtieri said, "the most important thing about this chip conversation is that Nvidia dominates with their GPUs. These other companies are getting into the game."
