IT organizations choose network and scheduling technology to support Docker containerization for scalable deployments, redeployment of failed components, shared microservices and enterprise-wide application integration -- with particular focus on the critical areas of address and workflow management. Kubernetes is the pathway to resolving most container network issues with Docker -- provided Kubernetes users plan their networking model thoughtfully.
Docker networking is simple. A Docker host is essentially a private IP subnet, much like a home network: inside the subnet, everything can communicate; nothing outside can see in unless a port is deliberately exposed. Kubernetes networking design, by contrast, presumes that every container can talk with every other container. It might seem that Kubernetes networking offers more than Docker's, but it is simply different -- and getting it right can take additional tools and technologies that support the container network.
How you build the Kubernetes network is likely the most important container deployment decision. The design must satisfy the Kubernetes network model of universal connectivity among containers but also expose applications to the correct users, enable connections with non-Kubernetes components and support integration with public cloud for some applications or components of applications.
Numerous tools create container network connectivity. One approach is via external control and software-defined networking (SDN), with tools such as Cisco Application Centric Infrastructure, VMware NSX, Juniper Contrail and Nokia Nuage. Another model favors Kubernetes-integrated approaches. Popular examples include Kube-router, Multus and other Kubernetes plug-ins. SDN-based connectivity setups are by far the most flexible and most likely to support existing, companywide networking needs, so give SDN serious consideration.
Kubernetes features for container network management
To expose containerized applications to users in Kubernetes, use Services. Services are abstractions of component interfaces, mapped to the Kubernetes Pods that support them on one side and to an external interface on the other. The external interface can reach an intracluster private address (the default), an external address or a load balancer that directs the work to one instance of the service.
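As a minimal sketch, a Service of the default ClusterIP type maps Pods -- here selected by a hypothetical `app: web` label, with illustrative names and ports -- to a stable intracluster address; changing the `type` field exposes the Service externally instead:

```yaml
# Hypothetical Service for Pods labeled app: web; names are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: ClusterIP        # default: an intracluster private address
  selector:
    app: web             # maps the Service to its backing Pods
  ports:
    - port: 80           # port the Service exposes
      targetPort: 8080   # port the containers actually listen on
# To direct work through an external load balancer, set: type: LoadBalancer
```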
The Services approach also manages container scaling, using the load balancer interface to link a Service to an elastic group of Pods. The load balancer distributes the work; the user can update the load balancer's list of available resources by commissioning new resources and containers. Those new resources will be used wherever they're hosted. A scaling-down action removes the resources both from the Pod hosting them and from the load balancer's inventory.
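The elastic group of Pods behind such a Service is typically managed by a Deployment; scaling the Deployment up or down adds or removes Pod endpoints from the load balancer's rotation automatically. A sketch, again with hypothetical names that would need to match the Service's selector:

```yaml
# Hypothetical Deployment backing a load-balanced Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # scale this count up or down
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web           # matches the Service's selector
    spec:
      containers:
        - name: web
          image: example/web:1.0   # illustrative image name
          ports:
            - containerPort: 8080
```

A command such as `kubectl scale deployment web --replicas=5` then commissions the new Pods, and the Service picks them up wherever they are hosted.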
Keeping the container network straight can be a challenge, and it doesn't end there.
The Container Network Interface (CNI) is how Kubernetes automates and orchestrates networking tasks, including the interface mapping behind Services, but the user must supply the specific network tool -- such as the SDN products listed above -- and manage the address spaces so everything is unambiguously addressable. Most Kubernetes users adopt private IP addressing for their clusters and expose the addresses used by workers and non-Kubernetes elements onto the corporate virtual private network (VPN). It is also often necessary to accommodate public cloud resources.
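On each node, the chosen network tool is declared through a CNI configuration file, conventionally placed under `/etc/cni/net.d/`. A sketch using the CNI project's reference `bridge` and `host-local` IPAM plugins, with an assumed per-node Pod subnet -- an SDN product would substitute its own plugin `type` here:

```json
{
  "cniVersion": "0.4.0",
  "name": "cluster-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.1.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    }
  ]
}
```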
Container network addressing
Addressing is the second key issue in Kubernetes container networking. A container network is almost always based on private IP addresses, which are local: they operate only inside a domain, such as a Kubernetes cluster, a company or another set group, and cannot be reached from the internet or from another company. RFC 1918 lays out the private IP address space: one Class A block, 10.0.0.0/8 (about 16.7 million addresses); 16 Class B blocks, 172.16.0.0/12 (65,536 addresses each); and 256 Class C blocks, 192.168.0.0/16 (256 addresses each). Most Kubernetes users adopt the Class A block because it offers the greatest flexibility in assigning addresses. Avoid overlapping private IP addresses between the VPN and Kubernetes, and between separate clusters: pick a different address space, or a different part of the Class A space, for each to prevent confusion.
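The no-overlap rule can be checked mechanically. A small sketch using Python's standard `ipaddress` module, with hypothetical VPN, Pod and Service ranges all carved from the 10.0.0.0/8 Class A space:

```python
from ipaddress import ip_network
from itertools import combinations

# Hypothetical address plan: each range is carved from 10.0.0.0/8.
ranges = {
    "vpn": ip_network("10.0.0.0/16"),        # corporate VPN
    "pods": ip_network("10.244.0.0/16"),     # cluster Pod CIDR
    "services": ip_network("10.96.0.0/12"),  # cluster Service CIDR
}

# Every range should sit inside the RFC 1918 Class A block ...
rfc1918_class_a = ip_network("10.0.0.0/8")
for name, net in ranges.items():
    assert net.subnet_of(rfc1918_class_a), f"{name} is outside 10.0.0.0/8"

# ... and no two ranges may overlap.
for (a, net_a), (b, net_b) in combinations(ranges.items(), 2):
    if net_a.overlaps(net_b):
        print(f"conflict: {a} {net_a} overlaps {b} {net_b}")
```

The same check extends naturally to multiple clusters: add each cluster's CIDRs to the dictionary and any collision is reported.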
Act globally; process locally
Build Kubernetes clusters, whenever possible, from physical hosting resources that are either colocated or at least generally local and well-connected enough that the resources in them work interchangeably. Proximity avoids the risks to quality of experience caused by connectivity limitations across a container network when a piece of an application deploys on some remote server in another data center. Consider using different clusters where there are distinct geographic separations within the resource pool. Also, follow this general rule: Give clusters a common address space, and then decide whether specific clusters should support only related applications and users or whether the clusters should provide backup for each other.
A Kubernetes user can pass workflows among independent clusters by exposing the interfaces for those workflows on the company's VPN. If the clusters back each other up, use the Kubernetes federation capability to define applications that are synchronized across clusters and provide uniform access to those resources. Because federation requires Kubernetes to synchronize application state across clusters, it can generate significant overhead. For independent clusters, or to control the synchronization of application versions and databases across clusters, turn directly to traditional data center tools and Kubernetes deployment tools rather than to federation.
For large-scale container deployments or deployments that are likely to grow into large ones, start with Kubernetes along with Docker, rather than only Docker, and plan the Kubernetes networking model for the extent of expected container growth and then some. It is daunting to make the change to Kubernetes or to a different Kubernetes network model once the deployment has reached a sizable scale. Increase the chances that the container network is done right by anticipating scale from day one.