E-Handbook: Programmable processor technology for next-gen data centers Article 2 of 4


Vendors investigate chiplets as CPU expansion option

As chipmakers expand the amount of processing power in CPUs, some are considering the use of chiplets. These modular silicon pieces offer a possible option for scalable designs.

Multicore CPUs were fundamental to scaling data center performance and supporting a range of nascent technologies. Since the early 2000s, manufacturers have consistently increased CPU core counts and combined many processing cores on a single chip to boost performance. These advancements have translated into more powerful servers, lower data center latencies and improved processing bandwidth.

The success of scalable server CPUs owes much to the introduction of chiplets. These sub-CPU silicon dies improve yields for manufacturers because they're less prone to defects, offer new data center performance and power advantages, and reduce compute costs. Chiplets can help organizations scale processing power and set a new precedent for CPU design.

Current CPU options: Increased sockets and multithreading

Instead of a single processing core, most of today's CPUs consist of multiple cores that simultaneously execute numerous functions. In addition to dual- and quad-cores, manufacturers make six-, eight-, 10-, 12- and 16-core CPUs. Increased cores and multithreading enable the large scale-out systems and diverse use cases that organizations rely on.

CPU design advancements entail fitting billions of microscopic transistors onto a computer chip. As transistor density approached practical limits, manufacturers turned to larger die sizes. However, this approach resulted in increased power consumption, as well as lower manufacturing yields.

CPU makers are moving away from single monolithic chips to build processors composed of smaller modular pieces that can be recombined in new ways. This shift is driven by the growing diversity of integrated components, such as graphics circuits and field-programmable gate arrays.

Though dual-socket servers are the long-established status quo, organizations often underuse those CPUs, which leads to cost and resource inefficiencies. The growing rack power problem and the rise in computational demand from data-hungry technologies have driven adoption of core-rich single-socket and edge servers, while chip manufacturers continue to improve CPU performance and capacity.

In addition to multiple-core CPUs, multithreading exploits the concurrency potential of data-heavy use cases. Moreover, multithreading compensates for processor inefficiencies by simultaneously running multiple instruction streams, which improves overall server performance.

For example, a four-core processor with hyperthreading presents the equivalent threading of an eight-core processor to the operating system. These virtual cores share the physical cores' execution resources and can significantly boost overall throughput, which helps augment physical CPUs.

How chiplets can help processing capabilities

As shrinking transistors become more of a design and manufacturing challenge, vendors have turned to chiplets: smaller silicon dies arranged in a single large package. Designed to achieve higher performance and more efficient power goals, chiplets help data move faster and enable smaller, cheaper and more connected compute systems. With parts of chips stacked together on an interposer to form a multichip module, communication between the dies becomes crucial.

Interconnects help chiplets communicate through high-speed, high-bandwidth connections and function as a single chip. Communication occurs through 2D-horizontal placement and 3D-vertical connections of logic chips. The creation of more capable silicon lets architects mix and match IP blocks and process technologies with memory and I/O elements in new device form factors.

This design provides higher bandwidth and lower latencies for processors, because the multiple dies and interconnects deliver a level of performance that's on par with a single, unbroken piece of silicon. Separate chip manufacturing teams can design and optimize chiplets, then mix and match them to quickly form new systems with increased processing power.
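The mix-and-match idea can be sketched as a simple composition model (purely illustrative; the class names, chiplet names and process nodes below are hypothetical, not any vendor's actual product structure):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chiplet:
    # A die designed and characterized independently of any package.
    name: str
    function: str        # e.g. "compute", "io", "memory"
    process_node_nm: int

@dataclass
class Package:
    # A multichip module: chiplets placed on an interposer and
    # presented to the system as a single chip.
    chiplets: list

    def dies(self) -> int:
        return len(self.chiplets)

    def functions(self) -> set:
        return {c.function for c in self.chiplets}

# Different teams can build each die on the process node that suits
# it, then recombine the same dies into distinct products.
compute = Chiplet("core-complex", "compute", 5)
io_die = Chiplet("io-hub", "io", 12)
hbm = Chiplet("hbm-stack", "memory", 10)

server_cpu = Package([compute, compute, io_die])
accelerator = Package([compute, hbm, io_die])

print(server_cpu.dies())                 # 3
print(sorted(accelerator.functions()))   # ['compute', 'io', 'memory']
```

The design point the sketch captures is reuse: the same compute die appears in both packages, which is how chiplet-based products can be assembled quickly from a shared catalog of dies.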

The future of chiplet design and CPUs

The industry continues to move forward with chiplet integration and, in 2019, created the Open Domain-Specific Architecture working group to establish industry standards and a viable ecosystem. For manufacturers to build chips independently, they need standardized chiplet products, so the success of this approach depends on establishing comprehensive open standards.

Processor size limitations have led chipmakers to focus on creating more CPU capacity through multicores, caches and systems on a chip instead of faster processors. With multiple stacked chiplets in a single integrated circuit, manufacturers can quickly assemble products from a diversity of interchangeable CPU components.

Chiplets effectively set the stage for future CPUs: modular constructs assembled for specific processing tasks from readily available system components. Engineers, too, could design individual chiplets without worrying about conflicts with interposer networks or other manufacturers' chiplets.
