
Meta's new silicon shows a growing trend for AI hyperscalers

The new hardware highlights a growing trend of hyperscalers designing custom chips for internal use. The move will help these companies rely less on hardware providers such as Nvidia.

Days after Meta said it would release Llama 3 in the coming weeks, the social media giant revealed it is building the next generation of its AI chip.

A year after unveiling the first Meta Training and Inference Accelerator (MTIA), the tech giant said the next generation of the chip is nearly ready.

The new processor doubles the compute and memory bandwidth of the previous chip, according to Meta.

It will serve ranking and recommendation models for Meta's array of social media platforms.

The introduction of the next generation of the MTIA, on April 10, comes after Meta said it plans to obtain 350,000 Nvidia H100 GPUs by the end of the year.

It also comes as the generative AI race shows no sign of slowing, with other vendors expected to introduce new models soon.

For example, OpenAI is pushing to release GPT-5, a new version of its GPT large language model (LLM), soon.

The model, like Meta's Llama 3, could offer stronger problem-solving capabilities than current LLMs and take on more complex tasks, such as fielding controversial prompts with fewer hallucinations or inappropriate outputs.

A growing trend

Meta's unveiling of the next generation of the MTIA speaks to a growing trend of hyperscalers such as Meta, Google and AWS developing their own silicon, said Gartner analyst Gaurav Gupta.

During Google's Next '24 conference earlier this week, the vendor revealed it is developing Axion, a custom Arm-based CPU to support AI workloads in data centers. The new chip will power YouTube ads, Google Earth Engine and other Google services. It joins Google's stable of Tensor Processing Unit (TPU) chips, the vendor's in-house alternative to GPUs.

Custom AI silicon gives hyperscalers a few benefits, even though the chips are used only internally.

"It gives them improved performance at a lower cost and optimizes the use of compute and bandwidth memory for their workloads," Gupta said.

Moreover, while hyperscalers like Google and Meta have relationships with Nvidia and have contracted for thousands of chips from the AI hardware vendor, designing their own chips enables them to avoid vendor lock-in.

"They will be held hostage to Nvidia less," Futurum Group analyst David Nicholson said.

In addition, the trend toward custom silicon will shift the emphasis away from which AI hardware vendors' chips are used and toward the actual service provided, Nicholson said.

"Because of the importance of AI, we're focusing on what's going on under the covers," he said. In the future, Nvidia's chips will be part an array for running general-purpose LLMs, but hyperscalers will also have their own custom-tuned chips, for company offerings and services like advertising, e-commerce and other AI-intensive applications, Nicholson noted.

Nvidia can't provide this kind of customization because it doesn't make business sense for the hardware provider.

"We're going to see these things coexist," Nicholson continued." "People want services, and those services are going to be delivered on a mix of AI custom silicon moving forward."

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.
