Guest Post

Kubernetes clusters multiply like Tribbles -- but why?

Just as the crew of the U.S.S. Enterprise faced a rapidly compounding Tribble problem, Kubernetes admins face a similar problem with clusters. What do you do now?

While Kubernetes enables workloads to share clusters, Kubernetes adopters are instead deploying independent clusters and creating sprawl. Why? When is multi-cluster the right strategy? What are the costs and benefits? And how can you prepare for the resulting complexity -- before it eats your lunch?

There's no question that Kubernetes has hit the mainstream. A recent Evaluator Group survey of hundreds of enterprises revealed that 51% of respondents use Kubernetes in production, and another 17% are testing it.

The survey -- along with in-depth interviews with a handful of respondents -- also shows that multi-cluster Kubernetes adoption is coming on strong, even if that adds cost and complexity.

Multi-cluster Kubernetes: necessary but challenging

The simplest, lowest-cost approach to deploying microservices applications in Kubernetes is to consolidate all namespaces on a single cluster. Instead, customers appear to be rapidly expanding to multiple clusters: 70% are running multiple clusters today, with 39% operating six clusters or more -- and some operating more than 100 -- and 55% expect to be operating six or more clusters within a year. Clusters are multiplying like the proverbial Tribbles on the U.S.S. Enterprise.

Interviews with customers revealed that a combination of technical, organizational and business factors feeds this growth. Sharing a cluster across namespaces can simplify operations and improve resource utilization, but it does not provide the security and performance isolation that customers, coming from more strongly isolated VM architectures, expect. These customers therefore segregate sensitive applications into separate clusters. Some customers also believe that isolation lowers maintenance risk, enabling them to update different applications and clusters independently over time.
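
To make the tradeoff concrete, here is a minimal sketch of the guardrails a shared cluster can offer a tenant namespace -- a ResourceQuota to cap consumption and a NetworkPolicy to block cross-namespace traffic. The names and limits are hypothetical. Even with these controls, workloads still share nodes and a control plane, which is the isolation gap that pushes some customers toward separate clusters.

apiVersion: v1
kind: Namespace
metadata:
  name: payments           # hypothetical team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: payments-quota
  namespace: payments
spec:
  hard:
    requests.cpu: "8"      # cap aggregate CPU requests for the namespace
    requests.memory: 16Gi  # cap aggregate memory requests
    pods: "50"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: payments
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}      # allow traffic only from pods in this namespace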

The top reason cited by survey respondents for Kubernetes adoption was to create a modern application architecture. Many customers enable their DevOps teams to build and select their own operating architecture, which leads to a model wherein each new application creates a new cluster. Customers interviewed anonymously advised that, while this model is highly productive for development, it can lead to suboptimal compute resource configuration if the development and platform teams do not work together on the deployment plan. One experienced operations executive noted that before his company formalized this collaboration, developers were constantly deploying on a cookie-cutter cloud architecture, which led to both cost and performance problems.

With the benefit of consistent information sharing, developers became more knowledgeable about the effects of their choices, and operations could better prepare for the acquisition and management of the necessary resources and services. The productivity of both teams and application performance improved significantly. Some container management tool and platform providers are starting to offer governance tools -- VMware Tanzu Application Platform, for example -- to ease this collaboration.

Some applications must be multi-cluster to meet the needs of the business. A regional telecommunications and entertainment company, which deployed its first production application on a single Elastic Kubernetes Service cluster in AWS, is now deploying a second major application with semi-mobile edge servers to address large volumes of local data. Each edge site is its own cluster, managed by a SUSE Rancher installation. The customer expects to operate hundreds of these clusters within a short period of time. This deployment has pushed the customer to adopt two completely different container management strategies, as the new use case is not a good match for its traditional AWS-based model.

So multi-cluster is necessary, but it presents challenges.

Kubernetes is a complex environment. The orchestrator does not stand alone; it must be supported by observability tools for monitoring and analysis, image and application lifecycle management, and governance. Because each cluster must be managed individually, multi-cluster deployments consume more operational management resources.

Half of the surveyed customers are already using or have decided to use global observability and fleet management tools, which allow for single-pane-of-glass operational management of the multi-cluster topology. Interviewed customers also cited an increased need for automation tools to reduce the risk of operator error, and to increase productivity in more complex environments. Customers planning to scale their production Kubernetes architecture should consider these investments.
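
As one illustration of what fleet management looks like in practice, the sketch below uses the GitRepo resource from Rancher's Fleet project to push a single application's manifests from a management cluster to every registered cluster carrying a given label. The repository URL and labels are hypothetical.

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: edge-app
  namespace: fleet-default
spec:
  repo: https://github.com/example/edge-app-config  # hypothetical config repo
  branch: main
  paths:
  - manifests                # directory of Kubernetes manifests to deploy
  targets:
  - name: edge-sites
    clusterSelector:
      matchLabels:
        env: edge            # deploy to every cluster labeled env: edge

A Git-driven model like this also supplies some of the automation interviewees called for: a change is reviewed once and rolled out to the whole fleet, rather than applied cluster by cluster by hand.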

Open source and commercial multi-cluster options

The good news: Customers have a range of options to manage multi-cluster environments.

For customers who want a purely open source approach, the CNCF community continues to foster teams making multi-cluster improvements to observability and management tools, such as the Kubernetes Cluster API. Customers who want a commercially supported approach can select either self-managed container management offerings -- such as Red Hat OpenShift Platform Plus, SUSE Rancher or D2IQ -- or a container management service from a cloud service provider or from a Kubernetes specialist such as Mirantis.
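
To give a flavor of the open source route, here is a hedged sketch of the Cluster API's declarative model: a Cluster object that references provider-specific control plane and infrastructure resources, so new clusters can be stamped out and upgraded through ordinary Kubernetes tooling. The names and the AWS provider choice are illustrative.

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: edge-site-01         # hypothetical cluster name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:           # declarative, provider-managed control plane
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: edge-site-01-control-plane
  infrastructureRef:         # provider-specific infrastructure (AWS here)
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: edge-site-01

Because each cluster is just another object in a management cluster, creating the hundredth edge cluster is the same operation as creating the first.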

Customers beginning their Kubernetes journey should consider the following:

  • Is multi-cluster in your future? Kubernetes customer experience suggests this is likely for technical, organizational or business reasons. This will require an investment in global monitoring, management, governance and, possibly, automation -- sooner than you think. It also affects your talent requirements. Have you budgeted and planned for this?
  • Are development, platform and operations teams collaborating for success? Do they have the tools they need to ensure that deployments are cost-effective and performant?
  • Can your container management tools and platforms flexibly extend to meet your future needs? The topology of your organization's early production applications might not match its future deployments. It will be easier and more productive if your container management solution, whether open source or a commercially supported platform or service, can handle both current and future needs.

With proper global monitoring, management and governance, multi-cluster Kubernetes will support the business's current and future needs -- without eating your lunch.

Next Steps

Strategies for Kubernetes multi-cluster management

Build a multi-cloud Kubernetes cluster step by step
