Common Kubernetes terminology you should know

What are pods and nodes? How do namespaces differ from volumes? This list of common Kubernetes terms can give you a basic understanding of the container management platform.

Kubernetes has become the de facto standard for organizations to manage their Linux containers. It works across multiple cloud environments, and you can use it to manage microservices and deploy applications.

This list of ubiquitous Kubernetes terminology can prepare you to work with Kubernetes and help you better understand how this popular container management platform functions.

Node. A Kubernetes node is a worker machine -- physical or virtual -- that provides the resources to run one or more containers. Each node runs a container runtime such as containerd or Docker, along with kube-proxy and kubelet -- services that help create the runtime environment and support Kubernetes pods.

A Node Controller manages the node throughout its lifecycle. It maintains the list of nodes and the machines and resources available to them, and it can delete unhealthy nodes or evict pods from nodes that become unreachable. You can use the kubectl command-line tool to run commands against nodes and other cluster resources.
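
Nodes are themselves objects in the Kubernetes API. The trimmed sketch below shows roughly what a Node object looks like; the name, labels and capacity values are illustrative, and in practice kubelet registers the node and fills in these fields rather than you authoring them. You can inspect the real thing with kubectl describe node.

apiVersion: v1
kind: Node
metadata:
  name: worker-1                     # hypothetical node name
  labels:
    kubernetes.io/hostname: worker-1
    kubernetes.io/os: linux
status:
  capacity:                          # resources kubelet reports for scheduling
    cpu: "4"
    memory: 16Gi
    pods: "110"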

Cluster. A cluster is a group of servers or computing resources that behave as a single system. For the purposes of Kubernetes, a cluster usually means the set of nodes you use to manage and run your containerized applications.

A Kubernetes cluster is made up of a control plane node -- the primary -- and a number of worker nodes. The primary node controls the state of the entire cluster and issues all task assignments for the cluster, including scheduling, maintenance and updates.

Pod. A pod is the smallest unit you can deploy in a Kubernetes environment. A pod holds one or more containers working in tandem that share networking and storage resources on the host node where the pod runs.

You can create pods yourself or let a Kubernetes controller create and manage them for you. A pod exists only as long as the application containers it hosts: it runs on the node where it was scheduled until it completes, fails or is deleted. When a node shuts down, the pods attached to it are deleted automatically, and controller-managed pods are recreated on other nodes.
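
As a concrete illustration, below is a minimal sketch of a single-container pod manifest; the name, labels and image are placeholders rather than anything this article prescribes. You would create it with kubectl apply -f pod.yaml.

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod              # hypothetical name
  labels:
    app: hello
spec:
  containers:
  - name: web                  # one container; a pod can hold several
    image: nginx:1.25          # placeholder image
    ports:
    - containerPort: 80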

Kubelet. Kubelet is the agent that runs on each Kubernetes node and manages that node's pods. It registers the node with the API server, and it ensures all containers in a pod are running and healthy. It reports the health of its host to the control plane, and it conveys information to and from the API server. When the control plane requires something from a node, kubelet executes the action.
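
Kubelet is also the component that runs the health checks you declare on a container. The hedged sketch below assumes an application that answers HTTP requests on a /healthz path; if the probe fails repeatedly, kubelet restarts the container.

apiVersion: v1
kind: Pod
metadata:
  name: probed-pod                  # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25               # placeholder image
    livenessProbe:                  # kubelet calls this endpoint periodically
      httpGet:
        path: /healthz              # assumed health endpoint
        port: 80
      initialDelaySeconds: 5        # wait before the first check
      periodSeconds: 10             # check every 10 seconds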

Kube-proxy. Kube-proxy facilitates networking services for a Kubernetes environment. It handles networking communications both inside and outside of a Kubernetes cluster, and maintains network rules on nodes. It uses your OS's packet filtering layer when available, and when it can't use the packet filtering layer, it forwards network traffic itself.
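
The network rules kube-proxy maintains are what make a Service reachable. In the minimal sketch below -- names and labels are placeholders -- the ClusterIP Service gets a virtual IP, and kube-proxy programs each node so traffic sent to that IP is forwarded to the pods matching the selector.

apiVersion: v1
kind: Service
metadata:
  name: hello-svc              # hypothetical name
spec:
  type: ClusterIP              # virtual IP reachable from inside the cluster
  selector:
    app: hello                 # traffic is forwarded to pods with this label
  ports:
  - port: 80                   # port the Service exposes
    targetPort: 80             # port the pods listen on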

Namespace. Namespaces partition a Kubernetes cluster into isolated logical units so IT teams can better organize resources. Kubernetes automatically creates four namespaces: default, for resources without an assigned namespace; kube-system, for objects the Kubernetes system itself creates; kube-public, for resources that should be readable across the cluster; and kube-node-lease, for the lease objects that track node heartbeats. You can create as many namespaces as you need, and many IT teams use them to logically isolate development, test and production environments from one another.
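
Creating your own namespace takes a one-object manifest. The sketch below assumes a hypothetical dev environment; once it exists, any namespaced resource can reference it in its metadata.namespace field or be created with kubectl apply -n dev.

apiVersion: v1
kind: Namespace
metadata:
  name: dev                     # hypothetical environment name
  labels:
    environment: development    # optional label for organizing namespaces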

Etcd. Etcd is the key-value store that serves as Kubernetes' primary data store. It holds all configuration data and information about the state of a given cluster, and it replicates that state across its members. You can deploy etcd either as pods on the primary node or as an external cluster.

Volume. A Kubernetes volume is a directory containing data accessible to the containers in a given pod. Volumes provide a method for connecting containers and pods -- which only exist as long as you use them -- to data that needs to outlast them. A volume lives as long as its pod, so its data survives restarts of the containers inside the pod. When you delete a pod, an ephemeral volume such as emptyDir is destroyed with it, but data kept in persistent storage, such as a persistent volume, outlives the pods and containers that use it.

Kubernetes supports about 20 different varieties of volumes, including emptyDir volumes, local volumes and specialty platform-specific volumes.
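
A minimal sketch of a pod using an emptyDir volume -- names and paths are placeholders -- shows the pattern: the volume is declared at the pod level and mounted into the container, and its contents survive container restarts but are removed when the pod is deleted.

apiVersion: v1
kind: Pod
metadata:
  name: cache-pod                   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25               # placeholder image
    volumeMounts:
    - name: scratch
      mountPath: /var/cache/app     # where the container sees the volume
  volumes:
  - name: scratch
    emptyDir: {}                    # created empty with the pod, deleted with the pod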

Kubernetes scheduler. The Kubernetes scheduler decides which node each pod runs on, balancing performance, capacity and availability of resources across a given Kubernetes environment. It matches each pod you create to a node with a suitable set of resources and distributes copies of pods across different nodes to increase availability. It upholds affinity and anti-affinity rules and quality-of-service settings.

The scheduler makes its decision in two phases. FitPredicate policies enforce required rules and filter out nodes that cannot run a given pod, and PriorityFunction policies then rank the remaining nodes by how well they fit, so the pod lands on the best match.
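
Affinity rules and placement requirements are declared on the pod itself. The hedged sketch below combines a nodeSelector -- a hard requirement the filtering phase enforces -- with a soft node affinity preference the ranking phase weighs; the disktype and zone labels are assumptions about how your nodes are labeled.

apiVersion: v1
kind: Pod
metadata:
  name: placed-pod                         # hypothetical name
spec:
  nodeSelector:
    disktype: ssd                          # hard requirement: only nodes with this label fit
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:   # soft preference used for ranking
      - weight: 1
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - us-east-1a                   # assumed zone label value
  containers:
  - name: app
    image: nginx:1.25                      # placeholder image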

Ingress. Ingress is not itself a load balancer, but it performs load balancing functions for a Kubernetes environment. An Ingress object defines rules that control external access to services and how traffic is routed to them. An ingress controller reads those rules and configures a load balancer or reverse proxy to direct traffic accordingly, which enables you to expose multiple back-end services via the same IP address.
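
A hedged sketch of an Ingress with two path-based rules appears below; the hostname, service names and ingress class are assumptions, and an ingress controller such as ingress-nginx has to be installed in the cluster for the rules to take effect. Both back-end services share one external IP address.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                  # hypothetical name
spec:
  ingressClassName: nginx            # assumes the ingress-nginx controller is installed
  rules:
  - host: example.com                # placeholder hostname
    http:
      paths:
      - path: /shop
        pathType: Prefix
        backend:
          service:
            name: shop-svc           # hypothetical back-end service
            port:
              number: 80
      - path: /blog
        pathType: Prefix
        backend:
          service:
            name: blog-svc           # hypothetical back-end service
            port:
              number: 80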

GitOps. GitOps is a Kubernetes-centric paradigm for developers and IT admins to use Git to manage clusters and applications. It calls for you to apply Git to IT operations, and to use it to provision infrastructure and deploy software.

You can use GitOps to enable continuous delivery and continuous deployment for Kubernetes. This means you can build, test and deploy software at a fast rate without the need for a separate deployment system. It also provides you with a single framework for controlling infrastructure, tracking changes through version control and rolling out configuration changes.
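
GitOps tooling is not part of Kubernetes itself, but most tools follow the same pattern: an object in the cluster points at a Git repository, and a controller keeps the cluster in sync with what the repository declares. The sketch below uses Argo CD's Application object as one example; the repository URL, path and names are hypothetical, and Flux is a common alternative.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app                    # hypothetical application name
  namespace: argocd                 # namespace where Argo CD runs
spec:
  project: default
  source:
    repoURL: https://github.com/example/demo-config.git   # hypothetical Git repository
    targetRevision: main
    path: k8s/                      # directory of manifests to sync
  destination:
    server: https://kubernetes.default.svc                # the cluster Argo CD runs in
    namespace: demo                 # hypothetical target namespace
  syncPolicy:
    automated:
      prune: true                   # delete resources removed from Git
      selfHeal: true                # revert drift back to what Git declares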

Next Steps

Learn how to bootstrap Kubernetes clusters with kubeadm

