

Software containers can create problems in the network

Software containers offer attractive alternatives to hypervisors, but in large-scale deployments, they can cause problems for network engineers.

Editor's note: In part one of our two-part series on the challenges of container networking, expert Jeff Loughridge explores issues related to network address translation (NAT). Part two examines ways to mitigate those challenges and other problems that might arise. This material is not specific to any one software container technology, and applies equally to Linux containers, Docker, Canonical LXD and others.

The popularity of infrastructure as a service, based on virtual machines (VMs), has changed data center networking in large-scale environments. Likewise, the rise of container virtualization will force data center engineers to rethink network construction yet again. Depending on the environment, the life of VMs is typically measured in weeks or days. Containers will be much more ephemeral, springing to life as needed in any part of the data center, then performing a specific task before disappearing into the ether. The upshot? The sheer scale of containers in large deployments will ensure data center networking will never be the same.

But container networking is simple, right? Any developer who has used Docker can launch a software container using the default NAT configuration with limited knowledge of IP networking. In a development environment, such networking configurations function without problems. However, the use of software containers triggers many networking issues, particularly in large-scale environments.

While network address translation engendered the historic growth of the consumer Internet in the late 1990s, the technique has properties that hinder network scaling. Additionally, its complexity presents challenges for network operators. Here are some reasons why:

NAT prevents the operator from using the IP address as a unique identifier for an endpoint. While you might question the wisdom of relying on IP literals (e.g., 2001:db8::1) rather than on the domain name system (DNS), consider any troubleshooting effort that involves packet captures. The packet is modified in flight and will look different depending on the capture point.
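To see why the capture point matters, here is a minimal sketch of what a NAT gateway does to an outbound packet. The addresses, port numbers and the `snat` helper are illustrative, not taken from any real container runtime:

```python
from collections import namedtuple

Packet = namedtuple("Packet", "src_ip src_port dst_ip dst_port")

# Hypothetical NAT table: (container IP, container port) -> translated host port
nat_table = {}
next_port = 32768  # a typical start of the ephemeral port range

def snat(pkt, host_ip):
    """Rewrite the source of an outbound packet, as a NAT gateway would."""
    global next_port
    key = (pkt.src_ip, pkt.src_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return pkt._replace(src_ip=host_ip, src_port=nat_table[key])

inside = Packet("172.17.0.2", 41000, "198.51.100.10", 443)  # capture inside the host
outside = snat(inside, "203.0.113.5")                       # capture on the wire

print(inside)
print(outside)  # same flow, different source identity
```

A capture taken inside the host shows the container's address; a capture on the wire shows only the shared host address, so the two traces cannot be correlated by source IP alone.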

NAT makes logging difficult. Ideally, logging functionality uses DNS names; however, this is not always the case. Because the NAT'ed address is not a unique identifier, grokking the logs becomes tougher.

NAT obfuscates the troubleshooting process. Any middleware introduced in the data path presents another component that can fail in strange ways, fundamentally violating the end-to-end principle of the Internet. Some groups, such as mobile operators, are very familiar with troubleshooting NAT in the relatively closed ecosystem of mobile handsets. As the number of containers in your data center approaches the millions mark, however, ask yourself whether the value that NAT adds sufficiently offsets its complexity.

Port mapping is inelegant. Take Docker's port publishing: the EXPOSE keyword in a Dockerfile declares a port inside the container, which is then mapped to a Layer 4 port on the host at run time. If multiple software containers want to run a Web server on a well-known port, such as 80 or 443, they need a reverse proxy. Do you want to manage another software package in the data path? In addition, the port space is limited to 64K. (Admittedly, you'd need a massive number of containers per host to exhaust the 16-bit port range.)
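The 16-bit limit is easy to quantify. Here is a minimal sketch of a per-host port allocator; the range boundaries are assumptions (skipping the well-known ports below 1024), not a real runtime's policy:

```python
def port_allocator(start=1024, end=65535):
    """Yield host ports from the finite 16-bit range, skipping well-known ports."""
    for port in range(start, end + 1):
        yield port

alloc = port_allocator()
pool_size = 65535 - 1024 + 1  # 64512 usable ports per host address
ports = [next(alloc) for _ in range(pool_size)]
print(len(ports))
# One more request would raise StopIteration: the pool is exhausted.
```

Roughly 64,000 mappings per host address is a generous budget, but it is a hard ceiling that a flat, NAT-free addressing scheme would not impose.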

Next: In part two, we explore ways to mitigate container virtualization issues, as well as the related problems of MAC address proliferation and overall network performance degradation when using virtual Ethernet interfaces.


This was last published in August 2015
