
Compare smartNIC products and use cases

Not all smartNIC use cases are the same, and not all products offer the same functionality and features. Compare offerings from three of the main smartNIC market segments.

Network interface cards, or NICs, have dramatically increased the rate at which servers connect to networks, from 10 Mbps to 200 Gbps and soon 400 Gbps. Because general-purpose server CPUs struggle to process and forward traffic at these speeds, hardware-based infrastructure acceleration offerings, or smartNICs, have emerged. SmartNICs use optimized processing architectures that are better suited than general-purpose CPUs to processing and forwarding network traffic.

SmartNICs can typically be programmed to offload storage, networking and security protocols, freeing up the server for its primary application tasks. Additional smartNIC applications include packet capture, network management, network visibility and telemetry.
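
As a simple illustration of the software side of offloading, the sketch below uses Python to call the standard Linux ethtool utility, first listing the offload features a NIC driver reports and then enabling a few common ones. The interface name eth0 is a placeholder, and vendor smartNICs typically expose their richer offloads through their own SDKs rather than through ethtool alone.

```python
import subprocess

def show_offloads(iface: str) -> str:
    """Return the offload features the NIC driver reports for an interface."""
    result = subprocess.run(
        ["ethtool", "-k", iface], capture_output=True, text=True, check=True
    )
    return result.stdout

def enable_offloads(iface: str, features: dict) -> None:
    """Toggle standard offloads, such as TCP segmentation and receive offload."""
    args = [arg for name, state in features.items() for arg in (name, state)]
    subprocess.run(["ethtool", "-K", iface, *args], check=True)

if __name__ == "__main__":
    nic = "eth0"  # placeholder; substitute the smartNIC's interface name
    print(show_offloads(nic))
    # Feature names are the short names ethtool accepts on most drivers.
    enable_offloads(nic, {"tso": "on", "gro": "on", "rx": "on", "tx": "on"})
```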

The economic premise of smartNICs is a tradeoff between costs and application processing benefits. While smartNICs have slightly higher costs, they enable server CPUs to perform their primary task instead of running network and infrastructure applications.

A key enabler of smartNIC growth is the emergence of a new category of processors focused on data processing. These data processing units (DPUs), also called infrastructure processing units, are purpose-built to accelerate data movement and infrastructure tasks, much as CPUs handle general-purpose processing and GPUs handle accelerated computing.

According to market research firm Dell'Oro Group, the smartNIC market will grow at a 26% compound annual growth rate, from $270 million in 2020 to $848 million by 2024, vastly outpacing the overall Ethernet controller and adapter market.

A new generation of smartNICs and DPUs has emerged, supporting a range of use cases and data center applications. This article looks at those use cases and reviews smartNIC offerings from Achronix, Napatech and Nvidia.

Editor's note: The author chose to evaluate three smartNIC products that reflect a mix of market segments:

  • smartNIC platform offerings that target OEMs and integrators;
  • smartNIC options based on field programmable gate arrays (FPGAs) for out-of-the-box deployments with networking and security applications; and
  • smartNICs based on DPUs for storage, networking and security workloads in enterprise and cloud data centers.

Achronix

Achronix Semiconductor offers FPGA-based data acceleration products, including Speedster7t FPGA devices and Speedcore embedded FPGA IP. With these options, users can deploy the technology either as a standalone device or embedded in an application-specific integrated circuit (ASIC) or system-on-a-chip design. Achronix also offers the VectorPath S7t-VG6 accelerator card, built on the Speedster7t FPGA, for development or as a complete, production-ready option.

The foundation of the Speedster7t architecture is a 20+ terabits per second (Tbps) two-dimensional network on chip (NoC) that connects all I/O, hardened IP blocks and the FPGA fabric. The FPGA fabric and its external GDDR6 memory controllers support 4 Tbps of bandwidth for high performance.

Achronix smartNIC use cases

Achronix smartNIC platforms and adapter offerings target applications in telecom, AI and machine learning (ML), automotive, high-performance communications, storage, networking, and military and defense. A range of features supports networking applications such as 5G, network acceleration, high-throughput packet processing, traffic management and data path security.


Developers can use the flexibility of Achronix smartNIC offerings in 5G telecom deployments, where varied network topologies require different integration strategies for diverse applications.

Compared with GPUs, FPGA-based smartNICs can accelerate AI and ML inference with comparable performance and more efficient energy use. OEMs can also use the offerings to create custom designs for storage data plane processing, such as indexing, searching or analyzing data with AI and ML algorithms. The offerings also provide traditional storage data plane services, such as encryption, compression and deduplication.

Napatech

Napatech provides FPGA-based smartNICs that end users can deploy to accelerate standard networking and security applications in cloud, enterprise and telecom data center networks. Napatech's programmable smartNICs include options for 1 Gigabit Ethernet, 10 GbE, 25 GbE, 40 GbE and 100 GbE, driven by software that offloads compute-intensive network and security processing from server CPUs. This offloading enables standard commercial off-the-shelf servers to deliver line-rate networking performance, while freeing up compute resources for network and security applications.

Napatech smartNIC use cases

Use cases for Napatech smartNICs span the cloud and enterprise data center, cybersecurity, telecom, financial services, and military and defense markets. For data center operators, Napatech smartNICs offer actionable insight into network traffic through high-performance data access to network monitoring applications.

IT teams can use network insight from the smartNICs to correlate network data across large-scale networks and deploy appropriate security measures on demand. These measures help prevent untrusted sources from attacking critical infrastructure, while also protecting public and private information.

Napatech smartNICs also offer on-demand data delivery across telecom networks, including 5G networks, to applications. This data delivery provides real-time analysis of traffic across telecom networks. As a result, telecom operators can customize their offerings, while providing new services that address growing demand.

An emerging use case is reliable, high-performance data delivery for time-critical financial trading applications. Napatech smartNICs capture all packets up to 100 Gbps line rate, while accelerating data delivery to internal, commercial or open source trading applications.
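
Napatech's capture features are accessed through the company's own SDK, which isn't shown here. As a rough, generic sketch of what packet capture looks like in software, the following Python example reads raw frames from a Linux interface with an AF_PACKET socket; actual 100 Gbps line-rate capture depends on the smartNIC hardware and vendor drivers, and the interface name is a placeholder.

```python
import socket
import struct

ETH_P_ALL = 0x0003  # ask the kernel for frames of every protocol

def capture(iface: str, count: int = 10) -> None:
    """Read raw Ethernet frames from an interface (requires root on Linux)."""
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    sock.bind((iface, 0))
    try:
        for _ in range(count):
            frame, _addr = sock.recvfrom(65535)
            dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
            print(f"{src.hex(':')} -> {dst.hex(':')} "
                  f"ethertype=0x{ethertype:04x} len={len(frame)}")
    finally:
        sock.close()

if __name__ == "__main__":
    capture("eth0")  # placeholder; substitute the capture interface
```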

Nvidia

Nvidia's 10 Gbps to 200 Gbps ConnectX smartNICs include embedded acceleration engines for remote direct memory access over Converged Ethernet (RoCE), IPsec and Transport Layer Security (TLS) crypto processing, Accelerated Switch and Packet Processing (ASAP2) for virtual switching and routing, and non-volatile memory express over fabrics (NVMe-oF) for storage, in addition to standard networking offloads. Select ConnectX models also provide time synchronization capabilities for 5G wireless infrastructure and distributed databases.

Developers can use programmable packet processing technologies to accelerate server-based networking functions. They can also offload data path processing for compute-intensive workloads, including network virtualization, security and storage functionalities.

The ConnectX smartNIC is the core of the Nvidia BlueField DPU, which pairs it with a software-programmable, multicore Arm CPU and a high-performance network interface that can parse, process and transfer data at line rate. The DPU also includes a set of programmable acceleration engines for AI and ML, security, telecommunications and storage, among other workloads.

Nvidia smartNIC use cases

Nvidia ConnectX smartNICs focus on providing Ethernet-based protocol offloads to accelerate cloud, security, storage, broadcasting, AI, edge and telecom applications.

They offer ASAP2 technology, which uses a switch embedded in the NIC ASIC to offload a large portion of packet processing operations, freeing up the host's CPU and providing higher network throughput. The Nvidia ASAP2 technology stack provides a range of network acceleration features, including support for single-root I/O virtualization and for legacy environments that use VirtIO.
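
As a hedged sketch of how a Linux host might be set up for this kind of hardware-offloaded virtual switching, the commands below put the NIC's embedded switch into switchdev mode and enable Open vSwitch hardware offload. The PCI address and service name are placeholders, and the exact steps vary by driver, adapter and distribution.

```python
import subprocess

def run(cmd: list) -> None:
    """Print and execute a configuration command."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Placeholder PCI address; substitute the adapter's actual address from lspci.
PCI_DEV = "pci/0000:03:00.0"

# Put the NIC's embedded switch into switchdev mode so flows can be offloaded.
run(["devlink", "dev", "eswitch", "set", PCI_DEV, "mode", "switchdev"])

# Tell Open vSwitch to push matching flows down to the NIC hardware.
run(["ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true"])

# Service name varies by distribution; openvswitch-switch is the Debian/Ubuntu name.
run(["systemctl", "restart", "openvswitch-switch"])
```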

ConnectX smartNICs also provide hardware engines that offload and accelerate security with inline encryption and decryption for IPsec and TLS protocols to reduce latency and save on CPU usage. ConnectX supports block-level encryption offload, so data is encrypted and decrypted during storage and retrieval with reduced latency and CPU overhead.
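
A quick way to check whether a NIC driver advertises inline crypto offload on Linux is to look for TLS- and IPsec-related features in ethtool output. The short Python sketch below assumes a recent kernel that reports features such as tls-hw-tx-offload and esp-hw-offload; the interface name is a placeholder.

```python
import subprocess

def crypto_offload_features(iface: str) -> dict:
    """Return the TLS- and IPsec-related offload capabilities ethtool reports."""
    out = subprocess.run(
        ["ethtool", "-k", iface], capture_output=True, text=True, check=True
    ).stdout
    features = {}
    for line in out.splitlines():
        name, sep, state = line.strip().partition(":")
        if sep and ("tls" in name or "esp" in name):
            features[name.strip()] = state.strip()
    return features

if __name__ == "__main__":
    # Example output: {'tls-hw-tx-offload': 'on', 'esp-hw-offload': 'off', ...}
    print(crypto_offload_features("eth0"))  # placeholder interface name
```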

AI and ML applications require high throughput and low latency to train deep neural networks and improve classification accuracy. ConnectX adapters support bandwidths up to 200 Gbps, providing ML applications with the performance and scalability they require. The Socket Direct feature gives multiple CPU sockets direct Peripheral Component Interconnect Express (PCIe) access to the adapter, maximizing the throughput available for AI and ML applications.

ConnectX smartNICs also enable high-performance storage and data access with RoCE and GPUDirect Storage. ConnectX adapters offer NVMe-oF protocols and offloads for enhanced use of NVMe-based storage appliances. Finally, these smartNICs provide a time synchronization service to help support 5G networking and digital video applications.
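
As a hedged illustration of the host-side NVMe-oF workflow, the sketch below uses the standard nvme-cli tool to discover and connect to an NVMe-oF target over RDMA (RoCE). The target address, port and NQN are placeholders; the adapter's offloads are applied transparently by the driver.

```python
import subprocess

# Placeholder target parameters; substitute values for your environment.
TARGET_ADDR = "192.168.0.10"
TARGET_PORT = "4420"
TARGET_NQN = "nqn.2014-08.org.example:storage-array-1"

# Discover the subsystems the target exposes over RDMA (RoCE).
subprocess.run(
    ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Connect to the target; the namespace then shows up as a local block device.
subprocess.run(
    ["nvme", "connect", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT, "-n", TARGET_NQN],
    check=True,
)
```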

SmartNICs differ based on functionality

Organizations with expanding private and public cloud data centers and telecom services would do well to examine the advantages of smartNIC products. SmartNICs differ broadly according to functionality, network capacity and price. As a result, organizations can choose smartNICs that optimize the specific workloads and applications that may benefit from being offloaded from server CPUs.

About the author
Saqib Jang is founder and principal of Margalla Communications, a market analysis and consulting firm with expertise in cloud infrastructure and services. He is a marketing and business development executive with over 20 years' experience in setting product and marketing strategy and delivering infrastructure services for cloud and enterprise markets.
