As containers proliferate in the enterprise, IT teams seek new toolsets that can manage and orchestrate containerized applications. And for the majority of organizations, that search begins and ends with Kubernetes.
Kubernetes has become the de facto container orchestration system for enterprise IT shops. In a 2018 survey from the Cloud Native Computing Foundation -- the organization that hosts the Kubernetes open source project -- 58% of the 2,400 respondents said their company runs Kubernetes in production, while 42% said they are evaluating the technology for future use.
Given the widespread use of, or at least interest in, Kubernetes, IT admins should make it a priority to learn the technology.
This brief video on "Kubernetes explained" does just as its name suggests: It breaks down the main components of the container management system to make them accessible to those who are new to the technology, or simply need a refresher. It walks through a brief history of Kubernetes, its primary features and limitations, its architecture and the main ways an organization can acquire the technology in the IT market today.
While this "Kubernetes explained" video reviews the core native functionality of the container management platform, IT admins can also consider Kubernetes add-on components: a cluster DNS to resolve service names, a dashboard that provides a web-based Kubernetes management interface, and resource monitoring and logging tools that gather and report on cluster metrics and events.
To learn more about Kubernetes, and the role the technology plays in enterprise container deployments, check out the video above. As another resource, reference the "Kubernetes explained" video transcript here:
Transcript - Kubernetes explained in 5 minutes
Container technology brings speed and flexibility to enterprise IT. But organizations need a way to automate container deployment and manage a container's lifecycle.
Hello, I'm Steve Bigelow, senior technology editor with TechTarget, and today we're going to talk about the Kubernetes container management platform, its main features and limitations, and its core components.
Kubernetes was introduced in 2014 by a team of developers at Google. Later, Google donated Kubernetes to the Cloud Native Computing Foundation as an open source project.
To understand Kubernetes, it's important to first understand containers. Similar to virtual machines, containers virtualize a computer's resources, and then provision those resources into instances that can run software services and applications. But containers are smaller than virtual machines, and they're typically ephemeral, meaning they often exist only for short durations.
This makes it impractical to create, start, organize, destroy and monitor containers manually at any real scale. So Kubernetes and similar tools bring order to the chaos by providing a platform to schedule and run containers, while automating related operational tasks.
The core capabilities of Kubernetes are container provisioning and management. Administrators can use the platform to create and monitor containers across clusters. Kubernetes can automatically restart failed containers, replace or remove unresponsive ones, roll out updates, and continuously check container health.
Kubernetes can also use DNS names or IP addresses to control how containers are made available. Its management capabilities include load balancing, which distributes high traffic volumes across multiple container instances. Kubernetes also handles storage, allowing admins to use varied storage types, from local storage to cloud resources, for container data.
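As an illustration of that networking model, the sketch below shows a minimal Service manifest; the names and ports are hypothetical, chosen only for the example:

```yaml
# Exposes any pods labeled "app: web" under a stable cluster-DNS
# name ("web-svc"), load-balancing traffic across matching pods.
apiVersion: v1
kind: Service
metadata:
  name: web-svc        # hypothetical Service name
spec:
  selector:
    app: web           # route traffic to pods carrying this label
  ports:
    - port: 80         # port the Service listens on
      targetPort: 8080 # port the containers actually serve
```

Pods come and go, but the Service's DNS name and virtual IP stay stable, which is how Kubernetes decouples clients from individual container instances.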
Other worthy features include rollouts and rollbacks. Kubernetes can set and modify the preferred state for a container deployment. This allows admins to create new container instances in a desired state, and then migrate existing containers to the new instances, while removing the old ones.
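That desired-state model is usually expressed through a Deployment. A minimal sketch, with a hypothetical image name:

```yaml
# Declares a desired state: three replicas of a web container.
# Changing spec.template (e.g., the image tag) triggers a rolling
# update -- new pods come up before old ones are removed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.1   # bump the tag to roll out a new version
```

If an update misbehaves, `kubectl rollout undo deployment/web` reverts the Deployment to its previous revision.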
Kubernetes can also lend intelligence to container deployments and operations. For example, administrators can tell Kubernetes which nodes are available and which resources are required for containers, and the platform will automatically fit containers onto those nodes to optimize resource use.
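Those resource requirements are declared per container. In this sketch (image name hypothetical), the requests tell the scheduler how much capacity the pod needs, and the limits cap what it may consume:

```yaml
# The scheduler uses "requests" to fit this pod onto a node with
# enough free CPU and memory; "limits" cap actual consumption.
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: worker
      image: example.com/worker:1.0   # hypothetical image
      resources:
        requests:
          cpu: "250m"      # a quarter of a CPU core
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```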
Finally, organizations can use Kubernetes to manage passwords, tokens, SSH keys and other sensitive information.
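Kubernetes models this sensitive data as a Secret object, stored apart from pod specs and container images. A minimal sketch, with hypothetical names and values:

```yaml
# Keeps credentials out of pod specs and images; pods reference
# the Secret by name at runtime (as env vars or mounted files).
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials   # hypothetical name
type: Opaque
stringData:              # stored base64-encoded under "data" by the API server
  username: appuser
  password: s3cr3t
```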
Now, for all of its capabilities, Kubernetes does have a few limitations.
First, Kubernetes is not a software build tool -- it manages the container instances that hold software once it's built, but it doesn't compile code or run CI/CD pipelines. Kubernetes works much later in the process, at deployment and operations.
Also, Kubernetes doesn't provide middleware or application services, such as databases or message queues. Countless such services can be deployed in containers managed by Kubernetes, and accessed by other applications running on Kubernetes, but those services are not native to Kubernetes itself.
Kubernetes is a modular master/node system. A "node" is a set of IT resources, such as a physical or virtual machine, on which one or more containers run. Each node hosts one or more Kubernetes pods; a pod is a group of one or more containers, and the smallest unit of deployment in Kubernetes. The "master" is a machine in the Kubernetes cluster responsible for control decisions and for responding to cluster events.
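A pod can be sketched with a manifest like the one below; the sidecar image name is hypothetical. Its containers are always scheduled together on one node and share a network namespace (and, optionally, volumes):

```yaml
# Two containers grouped in one pod: they land on the same node,
# share localhost networking, and live and die together.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  containers:
    - name: web
      image: nginx:1.25
    - name: log-shipper
      image: example.com/log-shipper:1.0   # hypothetical sidecar container
```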
The heart of the Kubernetes master system is the component called kube-controller-manager. This component runs controllers that watch nodes, maintain the correct number of pod replicas, connect services to pods, and manage service accounts and API access tokens.
A scheduler called kube-scheduler decides which pods should run on a given Kubernetes node. The scheduler makes its decisions based on resource requirements, data locations, network traffic load, affinity and anti-affinity rules, as well as hardware, software and policy limits.
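Administrators express those affinity rules directly in a pod spec. In this sketch, the `disktype=ssd` node label is hypothetical -- it would have to be applied to nodes by an admin beforehand:

```yaml
# Restricts scheduling so this pod only lands on nodes that
# carry the (hypothetical) label disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
  containers:
    - name: db
      image: postgres:16
```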
The Kubernetes API is served by the component called kube-apiserver, which acts as the front end of the master: administrators, cluster components and external tools all interact with the cluster through it.
To make a master/node system work, several components must run on each Kubernetes node to manage the pods and maintain the runtime environment.
The kubelet component is an agent on each cluster node that ensures the containers described in each pod specification are running, while also checking for normal container operation and health.
A packet filtering network proxy called kube-proxy also runs on each node to enforce network rules and manage communication between pods and the network.
A container runtime is also needed on each node to actually operate the containers -- for example, Docker, or a runtime such as containerd or CRI-O that implements the Kubernetes Container Runtime Interface (CRI).
As an open source project, Kubernetes can be downloaded and used for free. Enterprises can also choose a vendor-supported distribution of Kubernetes, or use Kubernetes as a service through public cloud providers.
I hope that you've learned a bit about Kubernetes and its place in enterprise computing. I'm Steve Bigelow, senior technology editor at TechTarget. To learn more about Kubernetes and IT operations, visit searchITOperations.com. Thanks for joining me.