SAN DIEGO -- The death of Kubernetes Helm Tiller in version 3 was the talk of the cloud-native world here at KubeCon + CloudNativeCon North America 2019 this week, as the change promises better security and stability for a utility that underpins several other popular microservices management and GitOps tools.
Kubernetes Helm is a package manager used to deploy apps to the container orchestration platform. It's widely used to deploy enterprise apps to containers through CI/CD pipelines, including those built on GitOps and progressive delivery tools. It's also a key component for installing and updating the custom resource definitions (CRDs) that underpin the Istio service mesh in upstream environments.
Helm Tiller was a core component of the software in its initial releases, which used a client-server architecture in which Tiller was the server. Helm Tiller acted as an intermediary between users and the Kubernetes API server, and handled role-based access control (RBAC) and the rendering of Helm charts for deployment to the cluster. With the first stable release of Helm version 3 on Nov. 13, however, Tiller was removed entirely, and Helm version 3 now communicates directly with the Kubernetes API server.
Such was the antipathy for Helm Tiller among users that when maintainers proclaimed the component's death from the KubeCon keynote stage here this week, it drew enthusiastic cheers.
"At the first Helm Summit in 2018, there was quite a lot of input from the community, especially around, 'Can we get rid of Tiller?'" said Martin Hickey, a senior software engineer at IBM and a core maintainer of Helm, in a presentation on Helm version 3 here. "[Now there's] no more Tiller, and the universe is safe again."
Helm Tiller had security and stability issues
IT pros who used previous versions of Helm said the client-server setup between Helm clients and Tiller was buggy and unstable, which made it even more difficult to install already complex tools such as Istio service mesh for upstream users.
"Version 3 offers new consistency in the way it handles CRDs, which had weird dependency issues that we ran into with Istio charts," said Aaron Christensen, principal software engineer at SPS Commerce, a communications network for supply chain and logistics businesses in Minneapolis. "It doesn't automatically solve the problem, but if the Istio team makes use of version 3, it could really simplify deployments."
Helm Tiller was designed before Kubernetes had its own RBAC features, but once these were added to the core project, Tiller also became a cause for security concerns among enterprises. From a security perspective, Tiller had cluster-wide access and could potentially be used for privilege escalation attacks if not properly secured.
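The cluster-wide access in question often came from a common Helm 2-era quick-start shortcut: binding Tiller's service account to the built-in cluster-admin role. The manifest below is an illustrative sketch of that pattern, not a recommended configuration; the names (tiller, kube-system) follow widely used convention but are assumptions here.

```yaml
# Illustrative Helm 2-era setup: binding Tiller's service account to
# cluster-admin gave the Tiller pod cluster-wide access -- the privilege
# escalation risk described above if the Tiller endpoint was left open.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```

With Tiller gone in Helm version 3, no such long-lived, highly privileged server component exists; the Helm client acts with the permissions of the user's own kubeconfig credentials.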
It was possible to lock down Helm Tiller in version 2 -- heavily regulated firms such as Fidelity Investments were able to use it in production with a combination of homegrown tools and GitOps utilities from Weaveworks. But the complexity of that task and Helm Tiller stability problems meant some Kubernetes shops stayed away from Helm altogether until now, which led to other problems with rolling out apps on container clusters.
"Helm would issue false errors to our CI/CD pipelines, and say a deployment failed when it didn't, or it would time out connecting to the Kubernetes API server, which made the deployment pipeline fail," said Carlos Traitel, senior DevOps engineer at Primerica, a financial services firm in Duluth, Ga.
Primerica tried to substitute kube-deploy, a different open source utility, for Helm, but also ran into management complexity with it. Primerica engineers plan to re-evaluate Helm version 3 as soon as possible. The new version uses a three-way merge process for updates, which compares the previously deployed state, the changes users want to apply and the actual live state of the cluster, and could potentially eliminate many common errors during the Helm chart update process.
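The three-way merge idea can be sketched in a few lines of Python. This is a toy model of the concept only, not Helm's actual implementation, which operates on Kubernetes manifests rather than flat dictionaries:

```python
def three_way_merge(old: dict, new: dict, live: dict) -> dict:
    """Illustrative three-way merge: apply the chart's desired changes
    (old -> new) on top of the cluster's live state, preserving fields
    that were changed out-of-band and that the chart does not touch."""
    merged = dict(live)  # start from what is actually running
    for key in set(old) | set(new):
        if key not in new:
            # Field removed from the chart: remove it from the result
            merged.pop(key, None)
        elif new.get(key) != old.get(key):
            # The chart changed this field: the new desired value wins
            merged[key] = new[key]
        # Otherwise the chart didn't change the field; keep the live
        # value, which may include edits made directly on the cluster
    return merged

# Example: replicas was edited live (2 -> 5); the chart only bumps the image.
old = {"image": "app:1.0", "replicas": 2}
new = {"image": "app:1.1", "replicas": 2}
live = {"image": "app:1.0", "replicas": 5}
print(three_way_merge(old, new, live))  # {'image': 'app:1.1', 'replicas': 5}
```

A two-way comparison against only the last-applied manifest would have reverted the live replica count to 2; factoring in the live state is what lets an upgrade avoid clobbering changes made outside Helm.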
Despite its difficulties, Helm version 2 was a crucial element of Kubernetes management, SPS's Christensen said.
"It worked way more [often] than it didn't -- we wouldn't go back and use something else," he said. "It helps keep 20-plus resources consistent across our clusters … and we were also able to implement our own automated rollbacks based on Helm."