
Kubernetes

What is Kubernetes?

Kubernetes, also referred to as K8s, is an open source platform used to manage Linux containers across private, public and hybrid cloud environments. Businesses can also use Kubernetes to manage microservice architectures. Containers and Kubernetes are deployable on most cloud providers.

Application developers, IT system administrators and DevOps engineers use Kubernetes to automatically deploy, scale, maintain, schedule and operate multiple application containers across clusters of nodes. Containers run on top of a common shared operating system (OS) on host machines but are isolated from each other unless a user chooses to connect them.

How does Kubernetes infrastructure work?

Here's a quick dive into Kubernetes container management, its components and how it works:

Pods consist of one or more containers co-located on a host machine, and the containers in a pod can share resources. Kubernetes finds a machine that has enough free compute capacity for a given pod and launches the associated containers. Because each pod receives a unique IP address, applications can use ports without risk of conflict.
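
To make this concrete, here is a minimal sketch of a pod definition with two containers that share the pod's network; the names and images are illustrative, not from this article:

  apiVersion: v1
  kind: Pod
  metadata:
    name: web-pod              # hypothetical name
  spec:
    containers:
    - name: web                # main application container
      image: nginx:1.21        # example image
      ports:
      - containerPort: 80      # exposed on the pod's unique IP
    - name: log-agent          # sidecar sharing the pod's network and storage
      image: busybox:1.36      # example image
      command: ["sh", "-c", "tail -f /dev/null"]

Because both containers live in the same pod, they share one IP address and can reach each other over localhost.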

A node agent, called the kubelet, manages the pods, their containers and their images. The kubelet also automatically restarts a container if it fails. Alternatively, pods can be managed manually through the Kubernetes APIs.
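
For instance, a liveness probe tells the kubelet how to detect a failed container so it can restart it. A minimal sketch, with an assumed health endpoint and illustrative thresholds:

  apiVersion: v1
  kind: Pod
  metadata:
    name: probed-pod           # hypothetical name
  spec:
    containers:
    - name: app
      image: nginx:1.21        # example image
      livenessProbe:
        httpGet:
          path: /              # assumed health-check endpoint
          port: 80
        initialDelaySeconds: 5 # wait before the first probe
        periodSeconds: 10      # on repeated failures, the kubelet restarts the container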

A Kubernetes ReplicationController manages groups of pods, using a reconciliation loop to push toward a desired cluster state and ensure that the requested number of pods run to the user's specifications. It creates new pods if a node fails and can be used to manage, replicate and scale up existing pods.

The ReplicationController scales containers horizontally, running more or fewer containers as the overall application's computing needs fluctuate. In other cases, a Job controller can manage batch work, or a DaemonSet controller can ensure that a single pod runs on each machine in a set.
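
As a sketch, a ReplicationController declares a desired replica count and a label selector; the names are illustrative:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: web-rc               # hypothetical name
  spec:
    replicas: 3                # desired state: three pods at all times
    selector:
      app: web                 # manage every pod carrying this label
    template:                  # pod template used to create replacements
      metadata:
        labels:
          app: web
      spec:
        containers:
        - name: web
          image: nginx:1.21    # example image

In current practice, a Deployment, which manages ReplicaSets, is the usual way to get this same reconciliation behavior.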

The basic structure of a Kubernetes cluster: the master creates and schedules pods; nodes host one or more pods; and each pod can encapsulate one or more containers.

Other Kubernetes infrastructure elements and their primary functions include:

Master node. The master node runs the Kubernetes API server and controls the cluster. It serves as part of the control plane and manages communications and workloads across the cluster.

Nodes. A node, also known as a minion, is a worker machine in Kubernetes. It can be either a physical machine or a virtual machine (VM). Nodes have the services necessary to run pods and receive management instructions from master components. Services found on nodes include the container runtime (such as Docker), kube-proxy and the kubelet.

Security. Kubernetes security is broken into four layers: cloud (or data center), cluster, container and code, with each layer building on the protections of the layer outside it.

Telemetry. An abstraction called a "service" is an automatically configured load balancer that runs across the cluster. "Labels" are key/value pairs used for service discovery; a label tags a group of containers and links them together.
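
As a sketch, a service uses a label selector to group pods and load-balance traffic across them; the names are illustrative:

  apiVersion: v1
  kind: Service
  metadata:
    name: web-svc          # hypothetical name
  spec:
    selector:
      app: web             # routes to every pod labeled app=web
    ports:
    - port: 80             # port the service exposes
      targetPort: 80       # container port the traffic reaches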

Networking. Kubernetes is all about sharing machines between applications. As each pod gets its own IP address, this creates a clean, backward-compatible model. Pods can be treated like VMs in terms of port allocation, naming, service discovery, load balancing, application configuration and migration.

Registry. Kubernetes connects directly to container registries such as Amazon Elastic Container Registry (Amazon ECR). Any user in the cluster who can create pods can run pods that use any image in the registry.
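
Pulling from a private registry generally follows the same pattern across providers: store credentials in a Secret of type kubernetes.io/dockerconfigjson and reference it from the pod. A hedged sketch with a placeholder image path (on AWS-managed nodes, IAM roles typically grant ECR pull access without an explicit Secret):

  apiVersion: v1
  kind: Pod
  metadata:
    name: private-image-pod    # hypothetical name
  spec:
    imagePullSecrets:
    - name: registry-creds     # assumed pre-created dockerconfigjson Secret
    containers:
    - name: app
      image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest  # illustrative registry path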

What is Kubernetes used for?

Enterprises primarily use Kubernetes to manage and federate containers, as well as to manage passwords, tokens, SSH keys and other sensitive information. But enterprises find Kubernetes useful in other cases as well:

Enhance service discovery. Businesses can use Kubernetes to automatically detect containerized services and make them discoverable to applications across a network.

Manage hybrid cloud and multi-cloud. Kubernetes can help businesses extend on-premises workloads into the cloud, and across multiple clouds. Hosting nodes in multiple clouds and availability zones or regions increases resiliency and gives a business the flexibility to choose different service configuration options.

Expand PaaS options. Kubernetes can support serverless workloads, which could eventually give rise to new types of platform as a service (PaaS) options, from improved scalability and reliability to more granular billing and lower costs.

Wrangle data-intensive workloads. Google Cloud's Dataproc for Kubernetes service, released to early testing in late 2019, lets IT teams run Apache Spark jobs, a common engine for large-scale data analytics applications.

Extend edge computing. Organizations that already run Kubernetes in their data centers and clouds can use it to extend those capabilities out to edge computing environments. This could involve small server farms outside a traditional data center (or in the cloud) or an industrial IoT model. Edge computing and IoT components may be tightly coupled with application components in the data center, so Kubernetes can help maintain, deploy and manage them.

Benefits of Kubernetes

Kubernetes enables users to schedule, run and monitor containers, typically in clustered configurations, and automate related operational tasks. These include:

  • Deployment. Set and modify preferred states for container deployment. Users can create new container instances, migrate existing ones to them and remove the old ones.
  • Monitoring. Continuously check container health, restart failed containers and remove unresponsive ones.
  • Load balancing. Perform load balancing to distribute traffic across multiple container instances.
  • Storage. Handle varied storage types for container data, from local storage to cloud resources.
  • Optimization. Add a level of intelligence to container deployments, such as resource optimization -- identify which nodes are available and which resources are required for containers, and automatically fit containers onto those nodes.
  • Security. Manage passwords, tokens, SSH keys and other sensitive information (see the sketch after this list).
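
A sketch that combines two of these capabilities: resource requests the scheduler uses to fit pods onto nodes, and a Secret injected as an environment variable. All names and values are illustrative:

  apiVersion: v1
  kind: Secret
  metadata:
    name: db-credentials       # hypothetical name
  type: Opaque
  stringData:
    password: example-only     # never commit real secrets to manifests
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: web
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
        - name: web
          image: nginx:1.21    # example image
          resources:
            requests:          # scheduler places the pod on a node with this much free capacity
              cpu: 250m
              memory: 128Mi
            limits:            # hard ceiling enforced at runtime
              cpu: 500m
              memory: 256Mi
          env:
          - name: DB_PASSWORD
            valueFrom:
              secretKeyRef:    # injects the Secret without baking it into the image
                name: db-credentials
                key: password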

Challenges of using Kubernetes

Kubernetes often requires changes to roles and responsibilities within an existing IT department as organizations decide which deployment model to use: public cloud or on-premises servers. Larger organizations face different challenges than smaller ones, and these vary with staff size, scalability requirements and infrastructure.

  • Difficult DIY. Some enterprises desire the flexibility to run open source Kubernetes themselves, if they have the skilled staff and resources to support it. Many others will choose a package of services from the broader Kubernetes ecosystem to help simplify its deployment and management for IT teams.
  • Load scaling. Pieces of an application in containers may scale differently (or not at all) under load -- this is a function of the application, not the method of container deployment. Organizations must factor in how to balance pods and nodes.
  • Distributed complexity. Distributing application components in containers enables flexibility to scale features up and down -- but too many distributed app components increases complexity, and can impact network latency and reduce availability.
  • Monitoring and observability. As organizations expand container deployment and orchestration for more workloads in production, it becomes harder to know what's going on behind the scenes. This creates a heightened need to better monitor various layers of the Kubernetes stack, and the entire platform, for performance and security.
  • Nuanced security. Deploying containers into production environments adds many layers of security and compliance requirements: vulnerability analysis on code, multifactor authentication, simultaneous handling of multiple stateless configuration requests, and more. Proper configuration and access controls are crucial, especially as adoption widens. Kubernetes also now has a bug bounty program to reward those who find security vulnerabilities in the core Kubernetes platform.
Kubernetes security is a full-stack affair: attackers can gain control of anything from a single container to an entire cluster.

Common Kubernetes terms

Here are basic terms to help grasp how Kubernetes and its deployment work:

  • Cluster. The foundation of the Kubernetes engine: a set of machines on which containerized applications are managed and run.
  • Node. Worker machines that make up clusters.
  • Pod. Groups of containers that are deployed together on the same host machine.
  • Replication Controller. An abstraction used to manage pod lifecycles.
  • Selector. A matching system used for finding and classifying specific resources.
  • Label. Key/value pairs used to filter, organize and perform mass operations on a set of resources.
  • Annotation. A key/value pair similar to a label but with a much larger data capacity, used to attach non-identifying metadata to resources.
  • Ingress. An application programming interface (API) object that controls external access to services in a cluster, usually HTTP. It offers name-based virtual hosting, load balancing and SSL/TLS termination (see the sketch after this list).
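
A minimal Ingress sketch, assuming an ingress controller is already installed in the cluster; the host and service names are illustrative:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: web-ingress          # hypothetical name
  spec:
    rules:
    - host: app.example.com    # name-based virtual hosting
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: web-svc    # the Service that receives the traffic
              port:
                number: 80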

Who are Kubernetes competitors?

There are other options for companies seeking to schedule and orchestrate containers:

  • Kubernetes vs. Docker. Docker Swarm, a standalone container orchestration engine for Docker containers, offers native clustering capabilities with a lower barrier to entry and fewer commands than Kubernetes. Swarm users are encouraged to use Docker infrastructure but are not locked into it. In practice, the two technologies complement each other: Docker creates, runs and manages containers on a single host, while Kubernetes automates provisioning, networking, load balancing, security and scaling of those containers across nodes from a single control plane.

    Mirantis acquired the Docker Enterprise business in late 2019 and initially intended to focus on Kubernetes, but later pledged to support and expand the enterprise version of Docker Swarm.
  • Kubernetes vs. Mesos. Apache Mesos, an open source cluster manager, emphasizes running containers alongside other workloads and, like Kubernetes, supports pods. It integrates easily with machine learning and big data tools such as Cassandra, Kafka and Spark. Mesosphere DC/OS, a commercialized version of Mesos maintained by D2iQ, has partnered with major vendors such as Hewlett Packard Enterprise, Microsoft and Dell EMC. Mesos remains available upstream, but D2iQ (formerly Mesosphere) now focuses primarily on Kubernetes support.

    Mesosphere predates widespread interest in containerization and is therefore less focused on running containers. Kubernetes was built from the start as a system to build, manage and run distributed applications, and it has more built-in capabilities for replication and service discovery than Mesosphere. Both provide container federation.
  • Kubernetes vs. Jenkins. Jenkins, also open source, is a continuous integration server that offers easy installation, easy configuration, change set support and internal hosting capabilities. Kubernetes, as a container orchestration tool, is comparatively lightweight, simple and accessible, and it is built for a multi-cloud world, whether public or private.

Overall, Kubernetes is arguably the most developed of the three systems in many situations -- it was designed from its inception as an environment to build distributed applications in containers. It can be adopted as the upstream, open source version or as a proprietary, supported distribution.

Kubernetes support and enterprise product ecosystem

As an open source project, Kubernetes underpins several proprietary distributions and managed services from cloud vendors.

Red Hat OpenShift, now part of IBM, is a container application platform for enterprises based on Kubernetes and Docker. The offering targets fast application development, easier deployment and automation, while also supporting container storage and multi-tenancy.

CoreOS Tectonic was a Kubernetes-based container orchestration platform with enterprise-level features, such as stable operations, access management and governance; its capabilities were folded into OpenShift after Red Hat acquired CoreOS.

Other examples of Kubernetes distributions for production use include Rancher from Rancher Labs; the Canonical Distribution from Ubuntu; and public cloud-based tie-ins, such as Amazon Elastic Kubernetes Service, Azure Kubernetes Service and Google Kubernetes Engine (GKE). Mirantis offers another open source product ecosystem based on Kubernetes, aimed at the internet of things (IoT); it is billed as managing IQRF networks and gateways for IoT applications such as smart cities.

A brief overview of the Kubernetes ecosystem.

What is the history of Kubernetes?

In the past, organizations ran applications on physical servers with no way to define resource boundaries, which led to resource allocation issues. Virtualization addressed this by allowing multiple virtual machines to run simultaneously on a single server. Applications are isolated between VMs and gain a measure of security, because one application's information cannot be freely accessed by another.

Containers are similar to VMs but have relaxed isolation properties. Like a VM, a container has its own file system and share of CPU, memory and process space. Containers can be created, deployed and integrated quickly across diverse environments.

Kubernetes was created by Google, which open sourced the project in 2014 and released version 1.0 in 2015; it was inspired by Borg, the company's internal cluster management system. Since then, Kubernetes has attracted major contributors from various corners of the container industry. The Cloud Native Computing Foundation (CNCF) hosts Kubernetes, which in 2018 became the foundation's first project to graduate.

Kubernetes is open source, so anyone can contribute to the project through one or more of its special interest groups. Top corporations that commit code to the project include Red Hat, Rackspace and IBM. IT vendors have developed support and integrations for the platform, while community members fill remaining gaps in vendor integrations with open source tools.

Kubernetes adopters range from cloud-based document management service Box to telecom giant Comcast and financial services conglomerate Fidelity Investments, as well as enterprises such as SAP's Concur Technologies and startups like Barkly Protects.

What is the future for Kubernetes?

Kubernetes updates in 2019 (versions 1.14 through 1.16) added or improved several areas to further support stability and production deployment. These include:

  • support for Windows Server hosts and Windows-based Kubernetes nodes;
  • extensibility and cluster lifecycle management;
  • volume management and metrics; and
  • custom resource definitions.

Since then, industry interest has shifted away from updates to the core Kubernetes platform and toward higher-level areas where enterprises can benefit from container orchestration and cloud-native applications. These include sensitive workloads that require multi-tenant security, more fluid management of stateful applications such as databases, and GitOps: version-controlled, automated releases of applications and software-defined infrastructure. For example, version 1.20, released in December 2020, delivered volume snapshots: point-in-time copies of a volume, requested through the API, from which to provision a new volume or restore an existing one to a prior state. Snapshots are key functionality for many stateful workloads, such as database operations.
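
As a sketch, a volume snapshot is requested declaratively like any other API object, assuming the cluster has a CSI driver and a VolumeSnapshotClass installed; the names are illustrative:

  apiVersion: snapshot.storage.k8s.io/v1
  kind: VolumeSnapshot
  metadata:
    name: db-snapshot                        # hypothetical name
  spec:
    volumeSnapshotClassName: csi-snapclass   # assumed class provided by the CSI driver
    source:
      persistentVolumeClaimName: db-data     # the PVC to copy at this point in time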

This growing operational complexity has already spawned markets for third-party monitoring and management tools, served by startups (some fostered through the CNCF) as well as experienced vendors such as D2iQ. At the same time, the ecosystem still consists of dozens of Kubernetes distributions and vendors, a field that is likely to narrow in the future.

Learn more about Kubernetes

Ready to dig deeper into Kubernetes? Here are some tips and best practices for its use throughout an organization.

DevOps teams can use Kubernetes to set policies for automation, scalability and app resiliency, and quickly roll out code changes. Metrics used to guide a Kubernetes implementation can also play into an AIOps strategy. However, Kubernetes adds complexity and requires skilled staff to work properly. Learn more about how Kubernetes helps augment DevOps practices.

Performance testing is critical for any software that goes to production to ensure it is highly available, scalable and stable. For an application deployed via a Kubernetes cluster, test to ensure that the cluster scales to meet changes in request volumes. Follow this tutorial to run Kubernetes performance tests in your data center or in the cloud.

How HCI systems adjust to accommodate Kubernetes clusters: the difference between a traditional hyper-converged infrastructure node and one based on Kubernetes open source container management software.

A microservices-based architecture breaks up apps and workloads into independent and specific functions and tasks -- and IT teams can use Kubernetes to manage microservices, too. This walkthrough shows how to deploy microservices via Docker containers and manage and scale them with Kubernetes.

For organizations that use or are considering hyper-converged infrastructure (HCI), containers and Kubernetes enable them to run traditional and modern applications within the same environment. Review how HCI vendors, including Dell EMC/VMware, Nutanix, Cisco and others, now optimize their HCI configurations for Kubernetes.

This was last updated in April 2021
