Editor’s note: In this opinion piece, industry analyst Zeus Kerravala shares his thoughts on 400 Gigabit Ethernet adoption and Arista Networks’ 400 GbE approach. Arista is a client of Kerravala’s ZK Research, a consulting firm based in Westminster, Mass.
Speeds of 400 Gigabit Ethernet might seem futuristic, but some use cases already make sense today. For example, advanced applications such as artificial intelligence, virtual reality and serverless computing require three components: fast networks, fast storage and fast compute. If any of these three falters, the application won't work optimally.
The rise of GPUs as a data center resource has seen compute speeds grow exponentially. Flash storage and NVMe have given storage performance an exponential jump. And 400 Gigabit Ethernet will enable the network to keep up.
As an example, I recently chatted with a data scientist at a healthcare group in the Boston area. He told me his organization’s biggest AI challenge is ensuring GPUs are fed enough data to keep them busy.
In this case, a next-generation server helps. But the organization also needs a network that can deliver data fast enough to push GPU utilization to near peak. Over-investing in any one area wastes money, so the three legs of the application stool must be kept in lockstep.
400 Gigabit Ethernet needs solid support system
The technology is not the problem. Early adopters, such as web-scale companies, will likely adopt 400 GbE as soon as it's available, because that is how they deploy all new technology.
For the rest of the world, though, adoption of 400 GbE networking requires greater ecosystem support. For example, the technology needs broader availability and lower-priced optics, cabling and server connectors.
Next year will bring 400 GbE products to market, but it won't deliver the large ecosystem required for mass adoption. Over the next 12 months, however, the price of 400 GbE, particularly the optics, will fall, making it more affordable for everyone.
A network that operates at 400 Gigabit Ethernet may seem like overkill. But, if I've learned one thing in nearly 40 years in this industry, it's that no matter how much bandwidth is available, we find a way to consume it. Even if 400 GbE is not right for your business today, because of price or other factors, you should still educate yourself on the different options so you can make the right decision when the time comes.
Network needs to keep up with other trends
One networking vendor, Arista Networks in Santa Clara, Calif., provided some details this week on its 400 Gigabit Ethernet roadmap. The vendor’s new 7060X4 Series offers 32 ports of 400 GbE in a 1 rack unit chassis. The products are based on Broadcom Tomahawk 3 silicon that offers 12.8 Tbps of switching capacity.
Customers that deploy the switch have the option of splitting each 400 GbE port into four 100 GbE ports, for a total of 128 ports of 100 GbE. Network managers can deploy the switch at 100 GbE today and migrate to 400 GbE when required.
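The breakout arithmetic is worth making explicit. A minimal sketch, using only the figures stated above (32 ports, 4:1 breakout, 400 GbE per port), shows how the port count and the Tomahawk 3 capacity figure line up:

```python
# Breakout math for a 32-port 400 GbE switch such as the 7060X4 Series.
# Figures are taken from the article; this is an illustrative sketch only.
PORTS_400G = 32      # native 400 GbE ports
BREAKOUT = 4         # each 400 GbE port splits into four 100 GbE ports

ports_100g = PORTS_400G * BREAKOUT            # 128 ports of 100 GbE
capacity_tbps = PORTS_400G * 400 / 1000       # aggregate port bandwidth in Tbps

print(f"{ports_100g} x 100 GbE ports")        # 128 x 100 GbE ports
print(f"{capacity_tbps} Tbps total")          # 12.8 Tbps total
```

The 12.8 Tbps result matches the switching capacity quoted for the Broadcom Tomahawk 3 silicon, which is why the chassis can run fully populated in either mode.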
In 2015, Arista introduced its 7060X with 32 ports of 100 GbE enabled by 128 lanes of 25 Gig serializer/deserializer (SerDes). In 2017, the 7260X3 Series brought 64 ports of 100 GbE using 256 lanes of 25 Gig SerDes.
Now, in 2018, the 7060X4 has 32 ports of 400 GbE on a switch with 256 lanes of 50 Gig SerDes. This evolution represents a fourfold increase in capacity in about four years. Additionally, the 7060X4 Series features new traffic management and load balancing capabilities.
Given the growth in data and bandwidth use, this kind of Moore’s Law performance is crucial for the network to keep up with other tech trends, such as GPUs and flash storage.
A debate over optics
Arista’s 400 GbE series is available in two configurations that are virtually identical except for the optical connectors they support. The 7060PX4-32 model uses OSFP optics, and the 7060DX4-32 supports QSFP-DD optics.
Currently, in the networking industry, there's a debate as to which optic is "better" — and the answer depends on the customer. QSFP-DD is backward-compatible with existing 100 GbE QSFP optics, making it ideal for customers who want to migrate slowly from 100 GbE to 400 GbE.
From a technology perspective, OSFP optics are stronger: they cool more easily, consume less power and offer more options. OSFP connectors support 100 km single-mode fiber, making them well suited for data center interconnect, while QSFP-DD does not provide this support.
Arista does offer a passive OSFP-to-QSFP adapter, enabling customers to deploy a 400 GbE switch today and run it at 100 GbE. Ultimately, though, customers must choose between backward-looking features that ease migration and better forward-looking ones.
Personally, I favor the OSFP connector, because the type of organization that would deploy 400 GbE today would benefit from its additional capabilities. I understand the appeal of the QSFP-DD connector, but migrating slowly to 400 GbE seems to be an oxymoron.