Enterprises are watching the development of the Kubernetes Cluster API project, which they hope will evolve into a declarative multi-cloud deployment standard for container infrastructure.
With a declarative API, developers describe the desired outcome and the system handles the rest. Today, however, Kubernetes users must wire up such deployment APIs separately for each cloud provider and on-premises IT environment, which makes it difficult to spin up multiple clusters in a cohesive, consistent way, especially in multi-cloud environments. Existing Kubernetes deployment tools may also offer so many configuration options that it's easy for end users to overcomplicate installations.
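The declarative model is already familiar from core Kubernetes objects. A minimal Deployment manifest, for example, states only the desired end state -- three running replicas -- and the control plane reconciles the cluster to match:

```yaml
# A standard Kubernetes Deployment: the user declares the desired state
# (three replicas of nginx) and controllers converge the cluster to it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.19
```

Cluster API aims to extend that same model from workloads to the clusters themselves.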
Enterprises that have taken a declarative approach -- often paired with immutable infrastructure -- at other layers of the IT stack as they adopt DevOps want to enforce the same simple, repeatable standards for Kubernetes clusters through a standard declarative API. Some IT shops built their own APIs for this purpose and struggled, and they say the community effort around Kubernetes Cluster API has better potential to achieve those goals than their individual projects.
One such company, German IT services provider Giant Swarm, created its own Kubernetes deployment API in 2017 to automate operations for more than 200 container clusters it manages for customers in multiple public clouds. It used a central Kubernetes management cluster fronted by the RESTful API to connect to Kubernetes Operators within each workload cluster. Eventually, though, Giant Swarm found that system too difficult to maintain as Kubernetes and cloud infrastructures continually changed.
"Managing an additional REST API is cumbersome, especially since users have to learn a new [interface]," said Marcel Müller, platform engineer at Giant Swarm, in an online presentation at a virtual IT conference held by API platform vendor Kong last month. "We had to restructure our API quite often, and sometimes we didn't have the resources or knowledge to make the right long-term [architectural] decisions."
Switching between cloud providers proved especially confusing and painful for users, since tooling is not transferable between them, Müller said.
"The conclusion we got to by early 2019 was that community collaboration would be really nice here," he said. "A Kubernetes [special interest group] would take care of leading this development and ensuring it's going in the correct direction -- thankfully, this had already happened because others faced similar issues and came to the same conclusion."
That special interest group (SIG), SIG-Cluster-Lifecycle, was formed in late 2017, and created Cluster API as a means to standardize Kubernetes deployments in multiple infrastructures. That project issued its first alpha release in March 2019, as Müller and his team grew frustrated with their internal project, and Giant Swarm began to track its progress as a potential replacement.
Cluster API installs Kubernetes across clouds using MachineSets, which are similar to the Kubernetes ReplicaSets Giant Swarm already uses. Users can also manage Cluster API through the familiar kubectl command line interface, rather than learning to use a separate RESTful API.
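For illustration, a Cluster API MachineSet manifest reads much like a ReplicaSet, except that each replica is a machine rather than a pod. The sketch below is hedged: the API group and field names have shifted between alpha releases, and the referenced infrastructure template kind depends on the cloud provider -- the AWS names here are assumptions:

```yaml
# Hedged sketch of a Cluster API MachineSet (alpha-era API; group and
# field names vary by release). Like a ReplicaSet, it declares a desired
# replica count -- but each replica is a machine, not a pod.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineSet
metadata:
  name: workload-cluster-nodes
spec:
  clusterName: workload-cluster      # the cluster these machines join
  replicas: 3
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: workload-cluster
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: workload-cluster
    spec:
      clusterName: workload-cluster
      version: v1.18.2               # desired Kubernetes version
      infrastructureRef:             # provider-specific machine template
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: AWSMachineTemplate     # assumption: AWS provider
        name: workload-cluster-nodes
```

Because these are ordinary Kubernetes resources, they are applied and inspected with commands such as kubectl apply and kubectl get machinesets, rather than through a separate REST API.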
Still, Cluster API remains in an early alpha phase, according to its GitHub page, and is therefore changing rapidly; as an experimental project, it isn't necessarily suited for production use yet. Giant Swarm will also need to transition to it gradually to ensure the stability of its Kubernetes environment, Müller said.
Cluster API bridges Kubernetes multi-cloud gap
Cluster API is an open source alternative to the centralized Kubernetes control planes offered in vendor products such as Red Hat OpenShift, Rancher and VMware Tanzu. Some enterprises may prefer to let a vendor tackle the API integration problem and handle support as well. Either way, the underlying problem is the same -- as enterprise deployments expand and mature, they need to control and automate multiple Kubernetes clusters in multi-cloud environments.
For some users, multiple clusters are necessary to keep workloads portable across multiple infrastructure providers; others prefer to manage multiple clusters rather than deal with challenges that can emerge in Kubernetes networking and multi-tenant security at large scale. The core Kubernetes framework does not address this.
"[Users] need a 'meta control plane' because one doesn't just run a single Kubernetes cluster," said John Mitchell, an independent digital transformation consultant in San Francisco. "You end up needing to run multiple [clusters] for various reasons, so you need to be able to control and automate that."
Before vendor products and Cluster API emerged, many early container adopters created their own tools similar to Giant Swarm's internal API. In Mitchell's previous role at SAP Ariba, the company created a project called Cobalt to build, deploy and operate application code on bare metal, AWS, Google Cloud and Kubernetes.
Mitchell isn't yet convinced that Cluster API will be the winning approach for the rest of the industry, but it's at least in the running.
"Somebody in the Kubernetes ecosystem will muddle their way to something that mostly works," he said. "It might be Cluster API."
SAP's Concur Technologies subsidiary, meanwhile, created Scipian to watch for changes in Kubernetes custom resource definitions (CRDs) made as apps are updated. Scipian then launches Terraform jobs to automatically create, update and destroy Kubernetes infrastructure in response to those changes, so that Concur ops staff don't have to manage those tasks manually. Scipian's Terraform modules work well, but Cluster API might be a simpler mechanism once it's integrated into the tool, said Dale Ragan, principal software design engineer at the expense management SaaS provider based in Bellevue, Wash.
"Terraform is very amenable to whatever you need it to do," Ragan said. "But it can be almost too flexible for somebody without in-depth knowledge around infrastructure -- you can create a network, for example, but did you create it in a secure way?"
With Cluster API, Ragan's team may be able to enforce Kubernetes deployment standards more easily, without requiring users to have a background in the underlying toolset.
"We created a Terraform controller so we can run existing modules using kubectl [with Cluster API]," Ragan said. "As we progress further, we're going to use CRDs to replace those modules … as a way to create infrastructure in 'T-shirt sizes' instead of talking about [technical details]."
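The "T-shirt size" idea can be pictured as a custom resource whose spec carries little more than a size label, with a controller translating that label into concrete infrastructure parameters. The schema below is purely hypothetical -- the article does not describe Scipian's actual CRDs -- and every name in it is an assumption:

```yaml
# Hypothetical illustration only: Scipian's real CRD schemas are not
# shown in this article. A user requests a cluster by size; a controller
# maps the size to node counts, instance types and network settings.
apiVersion: example.scipian.io/v1alpha1   # assumed group/version
kind: ClusterRequest                      # assumed kind
metadata:
  name: team-a-dev
spec:
  size: medium          # abstracts away instance type, node count, etc.
  region: us-west-2
```

The point of such a scheme is that users state intent at the size level, while the secure-by-default details -- the network configuration Ragan mentions, for instance -- are enforced once, in the controller.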