To manage Kubernetes clusters successfully, you must make the most of your resources by setting proper CPU and memory requests. Choosing the right Kubernetes resource requests and limits is critical to the performance, cost-effectiveness and reliability of this open source platform.
That said, rightsizing Kubernetes workloads is not a one-and-done process. You must find the right values to establish requirements, and it's not always easy to get started. To save time and money, practitioners must understand best practices around rightsizing workloads and how to optimize the entire process with Kubernetes capacity planning.
Find the right resource requests and limits
There are two separate types of resource configurations for containers in Kubernetes: requests and limits. Resource limits cap what a container can consume: a container that exceeds its CPU limit is throttled, while one that exceeds its memory limit is terminated with an out-of-memory (OOM) kill. Resource requests determine the amount of CPU or memory a Kubernetes container is guaranteed, and the scheduler uses them to decide where a pod can run.
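In a pod spec, these values are declared per container under `resources`. A minimal sketch — the pod name, container name and values below are illustrative, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app          # hypothetical pod name
spec:
  containers:
  - name: web             # hypothetical container name
    image: nginx:1.25
    resources:
      requests:           # what the scheduler reserves for this container
        cpu: "250m"       # a quarter of a CPU core
        memory: "128Mi"
      limits:             # hard caps: CPU beyond this is throttled,
        cpu: "500m"       # memory beyond this triggers an OOM kill
        memory: "256Mi"
```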
Overprovisioning in Kubernetes happens when values are set too high, leaving node capacity reserved but idle. Underprovisioning -- setting values too low -- leads to throttling, evictions and poor application performance. Resource requirements must be set correctly to ensure the right amount of resources is available to the cluster and that the cluster has the proper number of nodes. Observing an application's behavior at runtime is one of the best ways to establish effective requests and limits.
Paying close attention to resource limits helps reduce resource overcommitment and ensures application deployments have what they need to run. Regardless of how much memory an application actually uses, the Kubernetes scheduler places a pod based on its requests, not its observed consumption.
If a container specifies a resource limit but no request, Kubernetes defaults the memory and CPU requests to the limit values. Although this is a conservative fallback, setting requests and limits intentionally gives you control over clusters and helps avoid issues such as overprovisioning, pod eviction, out-of-memory kills and CPU starvation.
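For example, a container spec that declares only limits (names and values illustrative) is treated as if the requests were set to the same values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limits-only       # hypothetical pod name
spec:
  containers:
  - name: app             # hypothetical container name
    image: busybox:1.36
    resources:
      limits:
        cpu: "500m"
        memory: "256Mi"
      # No requests block: Kubernetes defaults requests.cpu to 500m
      # and requests.memory to 256Mi -- the limit values.
```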
Why not just rely on the Kubernetes scheduler's defaults for resource allocation and optimization? When engineers and DevOps teams depend on the scheduler alone, they miss opportunities to bin pack nodes more efficiently. Learning to set Kubernetes requests and limits accurately can save the business considerable money and consolidate containerized workloads.
Rightsizing instances and workloads in Kubernetes can optimize your compute power and maximize the business value of your investment, both in terms of the time spent adopting Kubernetes and the financial costs of containerization. By applying rightsizing best practices, businesses can repurpose freed capacity for additional workloads and minimize data center energy consumption and carbon emissions.
There is a significant challenge, however: Practitioners must use the right metrics, such as disk and network consumption, when making capacity planning decisions about Kubernetes workloads. Node sizing sits on a spectrum: many smaller nodes at one end, fewer larger ones at the other.
It might seem sensible to choose from the middle, but this still doesn't solve the resource problem. Although Kubernetes' autoscaling capabilities can add or remove resources based on demand to provide some basic optimization, underprovisioning is still an issue.
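Kubernetes' built-in autoscaling includes the Horizontal Pod Autoscaler, which adds or removes pod replicas based on observed utilization measured against the requests you set. A sketch targeting 70% average CPU utilization -- the deployment name, replica bounds and threshold are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # percent of the pod's CPU *request*
```

Note that because utilization is computed relative to requests, autoscaling works best after requests have been rightsized -- an inflated request makes utilization look artificially low.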
Because rightsizing and metrics collection both happen at the container level, the cluster autoscaler launches, bin packs and retires nodes to match the rightsized resource requests. With this enhanced visibility into cluster use, you can rightsize workloads while the autoscaler handles the underlying infrastructure.
Tips for Kubernetes capacity planning
While it might be tempting to rightsize the most expensive applications and environments first, the quickest results come from smaller, less complex systems that are easy to identify and are commonly overprovisioned. This low-hanging fruit provides a good opportunity for initial testing.
Best practices for Kubernetes capacity planning include the following:
- Overprovision with generous limits or requests on the first deployment.
- Use several smaller pods instead of a few larger ones to achieve higher availability for applications.
- Avoid resource exhaustion, overload and slow deployment by minimizing the number of pods running at once.
- Periodically perform corrective actions and review past resource use because measuring capacity use over time helps avoid rampant resource consumption.
- Ensure performance does not suffer by testing workload performance on rightsized instances.
- Create a feedback loop so developers can communicate clear capacity requirements.
- Revisit this checklist on a regular basis.
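One way to apply the first tip -- generous initial values -- without editing every manifest is a namespace-level LimitRange, which fills in defaults for containers that omit requests or limits. A sketch, with the namespace and all values as illustrative assumptions:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources    # hypothetical name
  namespace: staging         # hypothetical namespace
spec:
  limits:
  - type: Container
    defaultRequest:          # applied when a container omits requests
      cpu: "500m"
      memory: "256Mi"
    default:                 # applied when a container omits limits
      cpu: "1"
      memory: "512Mi"
```

As workloads are rightsized, these defaults can be tightened so new deployments start from more realistic values.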
Even with established resource settings, regularly monitoring containerized workloads is necessary. Find out what tools DevOps teams need to improve visibility, and keep a close eye on configuration and overall optimization, as these are critical measures to maintain security and efficiency.