Rising use of Kubernetes in production brings new IT demands

Kubernetes as an enterprise IT service is still in its infancy, but the market has reached a turning point from developer-maintained to IT-provided Kubernetes deployments.

Emerging technologies must demonstrate value before they become mainstream in enterprise IT. PCs made this transition in the 1980s, then virtualization in the early 2000s. Today, Docker containers and Kubernetes continue their journey into mainstream IT functions.

Kubernetes clusters have increasingly become a core IT infrastructure service, rather than a platform that developers manage inside VMs. As part of this shift, an IT operations team needs better tools to deploy and manage Kubernetes in production.

Understand Kubernetes vs. Docker containers

Docker containers enable application developers to package software for delivery to testing, and then to the operations team for production deployment. The operations team then has the challenge of running Docker container applications at scale in production.

Kubernetes is a tool to run and manage groups of Docker containers. While Docker focuses on application development and packaging, Kubernetes ensures those applications run at scale. Development teams will still build and package applications as Docker containers, and will likely describe how those containers should run in Kubernetes manifest files. Operations teams will deploy and manage the Kubernetes clusters where the containers run.
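
To make that division of labor concrete, here is a minimal sketch that uses the official Python client for Kubernetes to create a Deployment, which asks the cluster to keep several replicas of a developer-built container image running. The image name, namespace and replica count are placeholders, not details from the article.

```python
# Minimal sketch using the official Python client (pip install kubernetes).
# The image name, namespace and replica count are illustrative placeholders.
from kubernetes import client, config


def deploy_app():
    # Load cluster credentials from the local kubeconfig (e.g., ~/.kube/config).
    config.load_kube_config()

    container = client.V1Container(
        name="web",
        image="registry.example.com/web:1.0",  # image built and pushed by the dev team
        ports=[client.V1ContainerPort(container_port=8080)],
    )

    # A Deployment asks Kubernetes to keep three replicas of the container running.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)


if __name__ == "__main__":
    deploy_app()
```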

Kubernetes market evolves

When Google released Kubernetes into the open source community in 2014, the container management system was viewed largely as experimental and not ready for production use. There is no wizard in open source Kubernetes that guides users through setup. One of the best-known Kubernetes deployment guides is "Kubernetes the Hard Way" from Kelsey Hightower, a developer advocate at Google -- but anything that needs to be deployed "the hard way" is not ready for enterprise IT deployment.

The last few years have seen a rise in vendors that offer automated Kubernetes setups -- Platform9 and Red Hat OpenShift were among the first. More recent market developments include the NetApp Kubernetes platform and VMware's native Kubernetes support in vSphere. Major public cloud providers, including Amazon Web Services, Microsoft and Google, also offer managed Kubernetes services.

Organizations look to Red Hat OpenShift

Part of my work involves teaching AWS training courses on both architecture and development. About half of the developers in those courses who build container-based applications run them on Red Hat OpenShift, a packaged and extended Kubernetes platform. Infrastructure personnel also use OpenShift for containerized applications in production. I have very few conversations about deploying Kubernetes into VMs or bare metal servers; almost everyone wants a simple service that deploys Kubernetes for them.

These offerings simplify Kubernetes deployment and use -- and, as a result, have propelled the tool's transition into an enterprise-ready service that IT can offer to development and application teams.

Challenges remain with Kubernetes in production

While market advancements have simplified Kubernetes implementation and production use, IT operations teams still face several challenges with the container management system -- namely, monitoring and troubleshooting, and data management.

Monitoring and troubleshooting. A standard task for IT operations teams is to monitor application availability and performance, and then resolve any issues that arise. Kubernetes adds another layer of abstraction that requires additional monitoring to understand overall application health. Existing enterprise IT monitoring tools need to expand their coverage to include Kubernetes as a platform, and enable IT operations teams to monitor Kubernetes in production as well as they monitor hypervisors.
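
As a rough illustration of that extra layer, the sketch below (again using the Python client, with an illustrative restart threshold) walks every pod in a cluster and flags any that is not running or keeps restarting -- the kind of signal an operations team would feed into its existing monitoring tools.

```python
# Sketch: list every pod in the cluster and flag unhealthy ones.
# Uses the official Python client; the restart threshold and output
# format are illustrative only.
from kubernetes import client, config


def report_unhealthy_pods():
    config.load_kube_config()
    core = client.CoreV1Api()

    for pod in core.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase  # Pending, Running, Succeeded, Failed, Unknown
        restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))

        if phase != "Running" or restarts > 5:
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
                  f"phase={phase}, restarts={restarts}")


if __name__ == "__main__":
    report_unhealthy_pods()
```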

Data storage and management. Docker containers and Kubernetes address the need to package and run applications, but there is still the issue of data. Many applications inside Docker and Kubernetes require file and database servers to hold large amounts of data. High volumes of valuable data take time to copy or move, which limits application mobility.

Docker containers were initially expected to be transient and ephemeral, with no persistent data. Now, enterprises can have stateful storage attached to containers as volumes. However, volumes can be even more difficult to copy or move than file and database servers. Data mobility has always been an IT operations challenge, which neither containers nor Kubernetes eliminate.
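
As a sketch of what that stateful storage looks like in practice, the example below (Python client; the claim name, size, image and mount path are assumptions, not details from the article) creates a PersistentVolumeClaim and mounts it into a database container as a volume. The data then outlives any individual pod, but it remains tied to the storage behind that claim, which is exactly why it is hard to move.

```python
# Sketch: request persistent storage and attach it to a container as a volume.
# Claim name, size, image and mount path are illustrative assumptions.
from kubernetes import client, config


def create_stateful_pod():
    config.load_kube_config()
    core = client.CoreV1Api()

    # Ask the cluster's storage layer for 10 GiB of persistent space.
    claim = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="db-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )
    core.create_namespaced_persistent_volume_claim(namespace="default", body=claim)

    # Mount the claim into a database container; the data outlives the pod.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="db"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(
                name="db",
                image="postgres:15",
                volume_mounts=[client.V1VolumeMount(
                    name="data", mount_path="/var/lib/postgresql/data")],
            )],
            volumes=[client.V1Volume(
                name="data",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="db-data"),
            )],
        ),
    )
    core.create_namespaced_pod(namespace="default", body=pod)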

Sprinkle in some service mesh

A service mesh is an emerging tool for microservice applications that use Docker and Kubernetes. It enables the various microservices to discover one another and restricts communication between them to allowed paths. Not every application on Kubernetes will require a service mesh, but the technology has the potential to become a standard component of a Kubernetes platform -- and will further support Kubernetes' transition into a core component of IT infrastructure.
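
The article does not name a specific mesh, but as one concrete illustration, the sketch below assumes Istio and applies an AuthorizationPolicy through the Kubernetes custom objects API so that only one service is allowed to call another. The namespace, service and policy names are hypothetical.

```python
# Sketch assuming Istio as the service mesh: only the "orders" service
# account may call the "payments" service. Namespace, labels and names
# are illustrative.
from kubernetes import client, config


def restrict_payments_callers():
    config.load_kube_config()
    custom = client.CustomObjectsApi()

    policy = {
        "apiVersion": "security.istio.io/v1beta1",
        "kind": "AuthorizationPolicy",
        "metadata": {"name": "payments-allow-orders", "namespace": "shop"},
        "spec": {
            # Apply the rule to pods labeled as the payments service.
            "selector": {"matchLabels": {"app": "payments"}},
            "action": "ALLOW",
            # Only workloads running as the orders service account may call it.
            "rules": [{
                "from": [{"source": {
                    "principals": ["cluster.local/ns/shop/sa/orders"]
                }}]
            }],
        },
    }

    custom.create_namespaced_custom_object(
        group="security.istio.io",
        version="v1beta1",
        namespace="shop",
        plural="authorizationpolicies",
        body=policy,
    )
```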
