

Data center network design trending toward faster, denser

Expect data center networks -- and interconnects between DCs and cloud providers -- to get both denser and faster, while topologies continue to evolve.

Data center network design trends have always been toward higher speeds and more endpoints, with pendulum swings around "flatness." Right now, we are seeing some of the same things happening on the outside -- where data centers connect to other entities.

Data center network design: Faster at every level

The big changes happening in data center network design come as compute and storage infrastructures are called on to accommodate ever more immense streams of data. These streams occur not only on the back end, but also in east-west flows within a data center and in north-south flows that travel from one data center to another.

It wasn't that long ago that 10 Gigabit Ethernet (GbE) connections seemed to have plenty of elbow room for both east-west and north-south traffic. Today, they no longer seem so roomy. Microservices architectures, big data, rich media and greater density in virtualized compute infrastructures conspire to drive rapid adoption of 25 GbE and even 100 GbE networks. This is not just at network chokepoints, where many streams aggregate, but throughout the infrastructure -- to cope with rapidly changing and less predictable flows.
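The squeeze on 10 GbE can be illustrated with a back-of-envelope calculation. All of the figures below -- host counts, per-host rates, the east-west share -- are hypothetical assumptions for the sake of the sketch, not measurements:

```python
# Back-of-envelope check: when does a 10 GbE uplink run out of headroom?
# All figures are illustrative assumptions, not measurements.

def uplink_utilization(hosts: int, avg_gbps_per_host: float,
                       east_west_share: float, uplink_gbps: float) -> float:
    """Fraction of one uplink consumed by the east-west traffic that
    a rack of `hosts` servers pushes toward the rest of the fabric."""
    east_west_demand = hosts * avg_gbps_per_host * east_west_share
    return east_west_demand / uplink_gbps

# A hypothetical rack: 40 virtualized hosts averaging 0.5 Gbps each,
# 70% of it east-west (microservices chatter, storage replication).
for uplink in (10, 25, 100):
    u = uplink_utilization(hosts=40, avg_gbps_per_host=0.5,
                           east_west_share=0.7, uplink_gbps=uplink)
    print(f"{uplink:>3} GbE uplink: {u:.0%} utilized")
```

Under these assumptions, a single rack's east-west demand alone already exceeds a 10 GbE uplink (140% utilized), while 25 GbE and 100 GbE leave room for bursts and growth.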

The data center network's fast friends

Many of the same things can be said of connections among data centers, as all the same data-size and application-architecture-driven forces push flow numbers and volumes ever higher. Throw in cloud services and the need to move a lot of bits into and out of and among clouds, and it is easy to see why so many enterprises are getting multi-gigabit interconnects, whether via Ethernet or dedicated wavelengths, tried and true dark fiber, or ever-fatter internet links.

Special relationship: Direct cloud connections

Enterprises need speed to connect their data center applications to cloud resources, or vice versa, and to provide good application performance to on-premises users working with cloud services. These needs are driving another network trend: Organizations are turning to services such as Microsoft Azure ExpressRoute or Amazon Web Services Direct Connect to connect the edge of their networks directly to a cloud provider's network. This is typically accomplished in a "meet-me" facility -- usually a carrier hotel or large colocation facility -- where the cloud service provider has a point of presence (POP). The enterprise can then do one of two things. It can place infrastructure there to create its own POP. Or it can lease a port on the service provider's infrastructure and connect it to its own router with a physical cable. Either way, the result is a high-capacity, low-latency, jitter-free conduit for traffic.

Playing the field: WAN-cloud exchanges

The WAN-cloud exchange is another network trend. If an enterprise doesn't want to go it alone and manage the physical infrastructure of a direct connection, it can subscribe to a WAN-cloud exchange -- a virtualized version of the same service.

Instead of connecting directly to a cloud provider's router, the enterprise connects to an exchange router. That exchange is, in turn, connected to multiple cloud service providers; the enterprise can spin up virtual direct connects to any of them.

In addition to hiding the mechanics of managing many physical connections, the cloud exchange approach permits more variation in capacity on links, more agility (virtual connections can be created in minutes rather than days or weeks) and the ability to connect to more providers without linearly scaling the management burden of doing so.

Data center network design trend: Is flat in or out?

Data center networks have a depth -- the number of layers a piece of data has to traverse, in a worst-case scenario, to get from point A to point B within the facility. These networks started flat, as bridged and then switched networks; went to two tiers, with aggregation and edge layers; then to three tiers, adding a backbone layer; then back to one flat layer with fabrics; and are now shifting to two tiers again. The resulting leaf-spine architectures have become a data center standard in a relatively short time, achieving the right balance between flatness and resilience for most users.
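In a leaf-spine fabric, every leaf switch uplinks to every spine, so any two hosts are at most one spine hop apart; the main sizing knob is the leaf's oversubscription ratio, i.e. server-facing capacity versus spine-facing capacity. A minimal sizing sketch, with hypothetical port counts and speeds:

```python
# Minimal leaf-spine sizing sketch. Each leaf has one uplink to each
# spine, so total uplink capacity is spines * uplink speed.
# Port counts and speeds below are illustrative assumptions.

def oversubscription(server_ports: int, server_gbps: float,
                     spines: int, uplink_gbps: float) -> float:
    """Downlink:uplink capacity ratio at one leaf switch."""
    downlink = server_ports * server_gbps   # capacity toward servers
    uplink = spines * uplink_gbps           # capacity toward spines
    return downlink / uplink

# Hypothetical leaf: 48 x 25 GbE server ports, one 100 GbE uplink to
# each of 4 spines -- 1,200 Gbps down versus 400 Gbps up.
ratio = oversubscription(server_ports=48, server_gbps=25,
                         spines=4, uplink_gbps=100)
print(f"oversubscription {ratio:.1f}:1")  # prints "oversubscription 3.0:1"
```

A 3:1 ratio is a common compromise; driving it toward 1:1 (nonblocking) means fewer server ports per leaf or more and faster spine uplinks, which is part of what pushes fabrics toward 100 GbE.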

On the outside, the pendulum is swinging every which way. Flat MPLS clouds with internet access concentrated in data centers -- two tiers, in a way -- are starting to give way in some places to three-tier structures, in which intermediate aggregation points connect branches in a region to each other, to the internet for some traffic, to the data centers -- which handle the rest of the internet traffic, along with inbound flows -- and to other aggregation points. Or they are giving way to even flatter structures, with direct internet access at the branch. Or they are becoming software-defined WANs with multiple topologies: some traffic uses a two-tier, hub-and-spoke structure; other traffic uses a full -- flat -- mesh; and still other traffic follows even more baroque paths to specific cloud partners or services.

Coming soon: Data centers on the edge

The next major disruption we can expect in data center network design is the rise of edge computing, which will radically redefine where work gets done, how much data needs to flow and where it needs to flow to make that work possible. In the meantime, we can expect the data center network -- and the mesh of interconnects among data centers and cloud providers -- to get denser and faster, while topologies evolve to fit future needs and practices.

This was last published in December 2017
