Dig into Kubernetes pod deployment for optimal container use
Improve your Kubernetes deployment strategy with balanced application hosting and sound pod-to-nodes mapping. Learn the ins and outs of resource pools and Kubernetes communication.
While it's tempting to think of Kubernetes infrastructure in terms of pods per node and nodes per cluster, that's a narrow view of how to plan a deployment.
Instead, admins will be better served thinking of Kubernetes pods as application abstractions, nodes as resource abstractions and clusters as a unification of the two.
Kubernetes infrastructure components
Kubernetes container deployments are assigned to pods, which represent abstractions of the necessary hosting resources for the containers they hold. Kubernetes pods are assigned to nodes, which provide the actual hosting resources, such as physical or virtual machines. Clusters are collections of nodes that work as a single unit to run a Kubernetes pod deployment and the applications they support.
An application is a collection of one or more software components. Containerization often encapsulates these components into related but independent containers, so a working deployment places each tightly coupled set of components in a shared pod. That pod is then scheduled onto a Kubernetes node -- the physical or virtual machine that provides the hosting resources. The number of pods per node isn't a decision made at the operations level; it's dictated by the application's architecture and resource demands.
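As a sketch of this co-location, consider a pod that groups two tightly coupled components -- a web container and a log-forwarding sidecar. The names and images here are illustrative, not from any real deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app            # hypothetical application name
  labels:
    app: web-app
spec:
  containers:
  - name: web              # main application component
    image: example/web:1.0 # illustrative image
    ports:
    - containerPort: 8080
  - name: log-forwarder    # tightly coupled sidecar; shares the pod's network namespace
    image: example/log-forwarder:1.0
```

Because both containers live in one pod, Kubernetes always schedules them to the same node, where they share localhost networking and can share volumes.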
By distributing application components in containers, application designers gain the flexibility to scale features up and down independently. However, too many distributed app components increase network latency and complexity, which reduces availability.
An early part of a Kubernetes deployment strategy should be to determine the number of application pieces that can scale under load. This number is highly variable and based on the specific nature of the application, not how it's deployed.
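For the pieces that can scale under load, Kubernetes can adjust replica counts automatically. A minimal HorizontalPodAutoscaler sketch, assuming a Deployment named `web-app` and a working metrics pipeline (both assumptions, not givens):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app      # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10      # ceiling set by the application's architecture, not ops convenience
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU use passes 70%
```

The min and max replica bounds make the scaling range an explicit design decision, which is exactly the number this planning step is meant to produce.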
Map a Kubernetes pod deployment to nodes
Some capacity planning is required to map pods to nodes, but it's not a straightforward, one-to-one relationship.
Think of pods as collections of containers, and assign closely related containers to the same pod. Nodes then host the containers that make up each pod.
A Kubernetes pod deployment usually spans multiple nodes in a cluster, so don't try to count how many pods will fit in a node. Kubernetes assigns the containers in a pod to specific nodes within the associated cluster based on scheduling rules. When a given node lacks the resources to run a pod's containers, Kubernetes assigns the pod to another node in the cluster.
Kubernetes scheduling requires that the user define resource requirements for each container in a pod. The more precise these definitions, the more likely Kubernetes is to schedule containers onto nodes optimally for an efficient, high-performance application deployment.
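Resource requirements are expressed per container as requests and limits; the scheduler places a pod only on a node with enough unreserved capacity to satisfy every container's requests. A sketch with illustrative values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: example/app:1.0   # illustrative image
    resources:
      requests:              # what the scheduler reserves on the chosen node
        cpu: "250m"          # a quarter of a CPU core
        memory: "256Mi"
      limits:                # hard ceiling enforced at runtime
        cpu: "500m"
        memory: "512Mi"
```

Requests drive scheduling decisions; limits cap runtime consumption. Defining both for every container is what gives the scheduler enough detail to place pods well.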
There's no general rule for how many containers a node can host; it depends on the node's resources, as well as how the container resources are defined. Kubernetes users, on average, run between 10 and 20 containers per node, but that number won't necessarily apply to your deployment.
The best approach to map containers to pods and pods to nodes is to enable Kubernetes cluster monitoring and assess actual resource usage, then adjust the declared resource parameters as needed. Admins can overstate -- or understate -- the requirements for any resource class to steer scheduling if needed.
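Overstating a request is one way to spread pods out: the scheduler reserves the declared amount of node capacity whether or not the container uses it. A fragment illustrating the idea, with made-up numbers:

```yaml
# Hypothetical tuning after monitoring: observed memory usage is ~400Mi,
# but the overstated request reserves 1Gi per replica, so fewer of these
# pods can be packed onto any single node.
resources:
  requests:
    memory: "1Gi"
    cpu: "500m"
```

Understating has the opposite effect -- denser packing -- at the risk of contention, so revisit these values as monitoring data accumulates.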
Optimize cluster size
Admins have more control over cluster size than they do over mapping pods to nodes, but this process is more complicated.
For node management, the Kubernetes control plane must connect to the kubelet agent on each node in the cluster. This control plane traffic must be reliable and fast enough to carry deployment management commands, even during a massive failure. As a result, clusters often consist of local concentrations of resources.
The optimal cluster size depends on the resource pool and the extent to which specific applications require dedicated resources. The maximum size of a Kubernetes cluster is usually limited to a single data center or public cloud availability zone, and the minimum size is typically made up of the nodes necessary to deploy a single application.
However, as hybrid cloud adoption broadens the use of Kubernetes, deployments increasingly take a federation approach, in which a higher-level Kubernetes element manages lower-level clusters or domains. Define a cluster as the scope within which you expect to scale or redeploy under normal operations.
On average, Kubernetes users build clusters with anywhere from a dozen to a thousand nodes. Smaller clusters enable more precise application hosting control, but they also decrease hosting efficiency by dividing the resource pool. Don't create clusters that are smaller than the sum of the necessary resources to host a related set of applications.
For deployments that require a large number of clusters -- or clusters hosted on the public cloud, which benefit from a more agile version of the Kubernetes control plane due to the cloud's elastic nature -- consider a cluster management tool for Kubernetes, such as those offered by Rancher or NetApp's StackPoint.
Resilience and scalability don't depend on any particular ratio of pods to nodes or nodes to clusters. They depend on an appropriately sized resource pool, a sound control plane and monitoring tools that reveal whether nodes are overloaded or underutilized. Kubernetes enables organizations to enforce their own rules and policies, but setting them is, as it has always been, the organization's responsibility.