
Kubernetes networking explained: Start with these building blocks

To understand Kubernetes networking, admins must first master the core components of the container orchestration platform. Then, they need to map deployments to the right virtual networking model.

If Kubernetes is the hottest thing in IT hosting, then, by association, it is also the hottest thing in networking.

But Kubernetes networking is complicated, and many believe the Kubernetes ecosystem overall gets more complex every year. To simplify, we'll divide Kubernetes networking into three elements -- networking within pods, networking between pods and networking that unites users and containerized resources -- and see how a unified approach can support all three.

Core Kubernetes components

First, let's open with a short course in Kubernetes terminology.

Kubernetes collects containers into logical units called pods, which run on hosts called nodes. Applications can combine pods hosted on a collection of nodes via an abstraction called a service. Kubernetes deploys nodes into collections called clusters. The purpose of Kubernetes networking is to merge all these elements into a cooperative, cohesive ecosystem.
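A minimal manifest makes this terminology concrete. The sketch below (names such as web-pod and the nginx image are illustrative, not from the original article) defines a single pod that Kubernetes will schedule onto a node:

```yaml
# A pod: the smallest deployable unit, grouping one or more containers.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod        # hypothetical name
  labels:
    app: web           # labels let a service select this pod later
spec:
  containers:
  - name: web
    image: nginx:1.25  # illustrative container image
    ports:
    - containerPort: 80
```

A service would select pods like this one by label, and the cluster's nodes collectively host all such pods.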

Kubernetes isn't the only model for container networking, but what differentiates it from the original Docker model is that in Kubernetes, pods and nodes can communicate without address translation -- i.e., addresses are assigned from a cluster-wide space, with each node allocated a subnet from which its pods draw addresses. In the original Docker model, by contrast, applications receive private addresses, which require network address translation to communicate outside the host.

Kubernetes supports two basic network models: unified and overlay. The unified model uses standardized Ethernet and IP network practices and protocols. The default approach built into Kubernetes is based on the traditional IP-subnet model -- an implementation of the unified model -- used in both branch offices and data centers. This approach can have scalability issues and can be difficult to adapt to hybrid and multi-cloud deployments.

The overlay model resolves these issues but requires what's essentially a new cloud network on which to deploy containers. Both these models interwork with Kubernetes the same way and support the same connectivity.


Kubernetes networking within pods

Everything in Kubernetes starts with a pod. The containers or application components inside a Kubernetes pod share a single network namespace on the same host, which means they can reach one another through the localhost mechanism. Kubernetes implements this connection as a virtual bridge, which means the containers or components within a pod are on a virtual Ethernet network, akin to an office or home network. Communication is direct, with no need for routing. A designated infrastructure container -- commonly called the pause container -- holds the network namespace for each pod, so most questions about Kubernetes networking are about how to connect pods and users -- the second and third of the three Kubernetes networking elements discussed here.
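Because all containers in a pod share one network namespace, a sidecar container can reach the main container on localhost with no routing or discovery. A minimal sketch, with hypothetical names and illustrative images:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar    # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25       # listens on port 80 inside the shared namespace
  - name: sidecar
    image: curlimages/curl:8.8.0
    # The sidecar reaches the app container via localhost -- direct
    # communication within the pod, no pod-to-pod networking involved.
    command: ["sh", "-c", "while true; do curl -s http://localhost:80/; sleep 10; done"]
```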

Networking between pods

The pods on a Kubernetes node, or host, are reachable from outside the pod through the node's virtual bridge. Pod addresses are assigned as applications deploy, which means that if the pods redeploy or scale, their addresses change. That leads to the next level of Kubernetes networking, which is services.

Each Kubernetes service has a persistent address, which is mapped to the current addresses of the pods that make up the service. The Kubernetes service is essentially the boundary between the virtual network -- with its dynamic IP addresses from cloud and container hosting -- and the company's internet or VPN, where addresses must be consistent for each resource and user.
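A service definition shows how this persistent-address mapping works: the selector matches pod labels, so membership updates automatically as pods redeploy or scale. Names and ports below are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service   # hypothetical name; clients use this, not pod IPs
spec:
  selector:
    app: web          # matches pod labels; membership tracks pod churn
  ports:
  - protocol: TCP
    port: 80          # the persistent, service-level port
    targetPort: 8080  # the port the selected pods actually listen on
```

Clients address web-service's stable cluster IP (or DNS name), and Kubernetes forwards traffic to whichever pods currently match the selector.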

Networking that unites users and containerized resources

The final piece of the Kubernetes network puzzle is service exposure. The mapping between services and the internet or IP VPN space has two pieces: ingress, which handles access to services, and egress, which handles traffic from within the service out to the rest of the world. If features such as load balancing and service discovery are in place, they're part of the ingress element. A service mesh, such as Istio or Linkerd, can perform the ingress and egress gateway functions and provide service discovery and load balancing.
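The ingress side of service exposure is typically declared with an Ingress resource. This sketch (hostname and service name are hypothetical) routes external HTTP traffic to a service inside the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress         # hypothetical name
spec:
  rules:
  - host: app.example.com   # illustrative external hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service  # the in-cluster service behind this ingress
            port:
              number: 80
```

Note that an Ingress resource only takes effect when an ingress controller is deployed in the cluster; service meshes such as Istio can play this gateway role as well.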

Kubernetes exposes a plugin interface -- the Container Network Interface (CNI) -- that enables admins to define a single virtual network, which the container orchestrator will then use to provide all the connectivity up to the ingress/egress gateway. The default Kubernetes network model mimics basic Docker networks, but a wide variety of vendors offer virtual networking tools that integrate with Kubernetes through this interface.


There are multiple ways to set up a virtual network model for Kubernetes. For example, an organization might adopt a single-vendor Kubernetes networking strategy that maps directly onto its VPN model. Alternatively, it could treat the Kubernetes host network as a separate data center network and link it to the VPN.

Most Kubernetes users will have a company VPN that integrates with their data center network. Simple Kubernetes networking options -- such as Flannel, which typically uses VXLAN encapsulation -- can integrate easily with most VPNs as Kubernetes data center networks. For organizations that want to transition a traditional data center environment to Kubernetes, this is the best approach.
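As an illustration of how simple these options are to configure, the sketch below shows the kind of ConfigMap excerpt Flannel uses to select its pod network range and the VXLAN backend. The 10.244.0.0/16 range is a common Flannel default, and the namespace varies by Flannel version; both are assumptions here, so adjust to match the cluster's pod CIDR:

```yaml
# Excerpt of a Flannel configuration: the pod network CIDR and the
# VXLAN backend that encapsulates pod-to-pod traffic between nodes.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel   # older releases used kube-system
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```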

Organizations that plan to deploy a private cloud, with all the availability and scalability features of public cloud, will want a robust data center network. There are a number of specialized Kubernetes virtual networking options available, such as Apstra, Big Switch and Nuage, that have features designed for cloud data centers.

Users with a major commitment to a single public cloud provider should consider using the Kubernetes virtual networking tool provided by, or designed for, their cloud provider. Amazon, Microsoft, Google and IBM all offer Kubernetes virtual networking services compatible with hybrid cloud configurations.

Most enterprises with large-scale Kubernetes deployments in hybrid cloud should strongly consider a Kubernetes ecosystem product from providers such as IBM, Google or VMware -- and it should include a service mesh. These products typically provide support for multi-cloud deployments, as well as the data center and private clouds, and they unify all pieces of Kubernetes deployments and administration -- including networking.
