

Container cluster management tools ease cloud operations

Container ops is full of hurdles, even in the cloud. Managed Kubernetes services from the major providers offer a wealth of features that ease the process.

Organizations that use containers as an execution environment quickly discover that, under the hype and promise, there are many details and nuances required to make containers an effective development and operational platform.

Containers need an extensive ecosystem of related software and services to be a viable option for efficient, automated development and deployment. Users face a pivotal choice of container cluster management and workload orchestration systems, along with ancillary decisions about the container registry, service discovery, monitoring and authentication systems, and CI/CD automation software. Each of these tools fits into an overall container lifecycle that must be consistent, repeatable and reliable.

Container cluster management and orchestration tools dispatch, scale, restart and decommission cluster nodes and container workloads. This element of the container ecosystem is the most fundamental piece still open to choice. Kubernetes has emerged as the de facto standard -- and it does boast a fairly complete raft of features and options -- but it is difficult for novices and small-scale IT shops to set up, much less master. Cloud providers have responded to this gap with managed services that insulate users from some or all of the infrastructure and software management involved.

Kubernetes from a cloud provider

Early container cluster management services helped simplify the container deployment and management problem but still required users to manage the underlying compute instances for the cluster. Amazon Elastic Container Service (ECS) is a good example of this setup, with a proprietary orchestrator that lacked many of Kubernetes' features. As the container cluster management landscape filled in, cloud providers rolled out managed Kubernetes environments. Examples include Amazon Elastic Container Service for Kubernetes (EKS), Azure Kubernetes Service (AKS) and Google Kubernetes Engine (GKE). Compute instances serve as the underlying cluster nodes and can be sized and scaled out as needed. While these services are a significant improvement over DIY container cluster management, they still require a level of infrastructure management that can be onerous.
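As a sketch of how little setup a managed Kubernetes service requires, the open source eksctl tool can stand up an EKS cluster in a single command. This is a minimal illustration, assuming eksctl is installed and AWS credentials are configured; the cluster name, region and node settings are placeholders:

```shell
# Create a managed Kubernetes cluster on EKS (hypothetical names and sizes).
# EKS provisions and operates the control plane; the worker nodes below are
# EC2 instances the user still sizes and scales.
eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --nodes 3 \
  --node-type m5.large
```

Note that the command still creates EC2 worker nodes -- the infrastructure management the article describes as remaining with the user.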

Managed container instances address this overhead. Azure and AWS launched services in 2017 that eliminate the final hurdle in container management, the infrastructure, and effectively turn containers into an on-demand runtime engine that conceptually sits somewhere between stand-alone compute instances and serverless functions. Azure Container Instances (ACI) makes containers as easy to deploy as VMs, with the startup speed and microbilling of a serverless function.
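As a sketch of that simplicity, a single Azure CLI command starts a container on ACI with no VM or cluster to manage beforehand. The resource group and container names are placeholders for illustration:

```shell
# Run a public nginx image on Azure Container Instances (hypothetical names).
# Azure provisions the runtime on demand and bills per second while the
# container runs.
az container create \
  --resource-group demo-rg \
  --name demo-web \
  --image nginx \
  --cpu 1 \
  --memory 1.5 \
  --ip-address Public
```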


AWS Fargate is a container service without infrastructure management and a good option for organizations that don't need the bells and whistles, and accompanying complexity, of Kubernetes container cluster management. Although ACI and Fargate appear similar, they are quite different under the covers: ACI is more of an Azure Functions-like sandbox, while Fargate is a simplified version of ECS without instance management.

ACI and Fargate present a dilemma for container users: What if you do need all the features and customization options of Kubernetes but would like the convenience and low overhead of a managed instance? Is it possible to have your cake and eat it too? One option is to rely on the cloud providers to develop more interplay between container services, and another is to vary the tools in play at different points in the container development and deployment lifecycle.

Container instances and managed Kubernetes services in tandem

Container packaging is a solved problem, since both the instance and Kubernetes cluster environments support the same image format and Docker runtime. In the future, users will be able to seamlessly migrate workloads from one to another. While Fargate does not support Kubernetes yet, AWS has promised to integrate the two. Once that occurs, a container user could launch a container on Fargate using EKS. In the meantime, it is relatively easy to migrate ECS containers to Fargate, which supports existing ECS configurations, APIs and integration with Amazon Virtual Private Cloud and Identity and Access Management.
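Because Fargate reuses the ECS task model, moving a workload can be as simple as rerunning an existing task definition with the Fargate launch type. A hedged sketch with the AWS CLI -- the task definition, subnet and security group IDs are placeholders:

```shell
# Launch an existing ECS task definition on Fargate instead of EC2-backed
# ECS (hypothetical task, subnet and security group IDs). Fargate requires
# the awsvpc network mode, hence the VPC network configuration.
aws ecs run-task \
  --cluster default \
  --launch-type FARGATE \
  --task-definition my-app:1 \
  --network-configuration \
    "awsvpcConfiguration={subnets=[subnet-0123],securityGroups=[sg-0123],assignPublicIp=ENABLED}"
```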

Amazon container services
Figure 1. This simple chart shows how AWS Fargate, Amazon EC2, Amazon ECS and Amazon EKS relate to each other in terms of container services.

The integration roadmap is more advanced with Azure, where ACI already has a Kubernetes plug-in: a virtual kubelet node agent that turns an ACI instance into a Node in a Kubernetes cluster. Using the virtual kubelet, a customer can schedule Kubernetes container workloads, in the form of Pods, on an ACI instance instead of another Kubernetes Node. Users can combine ACI and Kubernetes in a relatively simple process: A so-called unlimited Node -- i.e., the easily scalable ACI instances -- gets added to an existing Kubernetes cluster, which can reside on premises or in the cloud using AKS, or conceivably even on an AWS managed instance.
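In practice, steering a workload onto the ACI-backed virtual node comes down to ordinary Kubernetes scheduling hints. This is a hedged sketch of a Pod spec, assuming the ACI connector has registered a virtual node and uses the label and toleration conventions shown; check the plug-in's documentation for the exact values:

```yaml
# Pod scheduled onto the ACI virtual node rather than a regular cluster node.
# The nodeSelector label and toleration key below are illustrative of the
# virtual kubelet's conventions and may differ by connector version.
apiVersion: v1
kind: Pod
metadata:
  name: aci-demo
spec:
  containers:
  - name: web
    image: nginx
  nodeSelector:
    type: virtual-kubelet
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
```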

Pick and choose container services

Developers can use Fargate for creative projects, such as Ruby on Rails applications. They can also use Fargate with Lambda to run persistent Docker containers. For example, an admin can extract thumbnails from video files with Amazon Simple Storage Service, AWS Lambda and AWS Fargate.

Services such as ACI and Fargate deploy workloads that don't require the sophisticated management and orchestration controls of Kubernetes or ECS. GKE instantiates a Kubernetes cluster with a single command -- gcloud container clusters create k0 -- and automatically sizes clusters based on resource utilization, ideal for cloud users who need Kubernetes' rich feature set with minimal infrastructure management overhead.
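A hedged sketch of a fuller form of that GKE command with cluster autoscaling turned on -- the node counts are illustrative, not recommendations:

```shell
# Create a GKE cluster with the cluster autoscaler enabled, so Google adds
# and removes nodes based on workload resource demands (illustrative values).
gcloud container clusters create k0 \
  --num-nodes 3 \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 8
```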

However, depending on the workload, managed infrastructure services could be considerably more expensive than a dedicated Elastic Compute Cloud (EC2) or Azure instance cluster. A two-tier web app that uses containers for both tiers might require two EC2 autoscaling groups of one to eight instances to power ECS clusters. A comparable configuration on Fargate costs almost two and a half times as much as running dedicated clusters. The cost of Fargate is directly proportional to the number of virtual CPUs and GB of memory the application consumes, according to a comparison study.
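To see where a multiple like that comes from, here is a back-of-the-envelope comparison. All prices are assumptions for illustration -- Fargate's launch-era US East list prices and an on-demand m5.large rate -- and should be checked against current rate cards:

```shell
# Hourly cost of a 2 vCPU / 8 GB workload on Fargate vs. a comparable
# m5.large EC2 instance (assumed prices: $0.0506/vCPU-hr and $0.0127/GB-hr
# for Fargate; $0.096/hr for on-demand m5.large).
fargate_hourly=$(awk 'BEGIN { printf "%.4f", 2 * 0.0506 + 8 * 0.0127 }')
ec2_hourly=0.096
ratio=$(awk -v f="$fargate_hourly" -v e="$ec2_hourly" 'BEGIN { printf "%.2f", f / e }')
echo "Fargate: \$${fargate_hourly}/hr vs. EC2: \$${ec2_hourly}/hr (${ratio}x)"
```

Under these assumed prices, Fargate works out to roughly 2.1 times the EC2 cost before any utilization or management-overhead adjustments, which is in the same ballpark as the comparison study's figure.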

As Microsoft's work on ACI's Kubernetes plug-in demonstrates, over time, it will be easier to mix managed container instances, such as Fargate and ACI, with dedicated Kubernetes clusters running on EKS and AKS. Such heterogeneous container environments give DevOps architects the flexibility to balance container cluster management and orchestration features with infrastructure overhead.

Partition the lifecycle

Integration of managed instances and Kubernetes cluster management services remains the holy grail of container deployments. In the meantime, it's entirely possible -- and advisable -- to use them at different points in the container lifecycle.

There are undoubtedly as many variants on the container lifecycle as there are container developers, but there's a simple one, courtesy of IBM:

  1. Acquire content using prebuilt Docker images and custom code.
  2. Build custom code either from scratch or around an existing package.
  3. Deliver the application to a container runtime system; this is also the stage to add testing and integration.
  4. Deploy to production systems, and automate updates. This stage will include redundant environments for production and beta/canary code, i.e., blue/green deployments.
  5. Run, orchestrate and scale the application.
  6. Debug and fix problems to maintain the application.

Container instances with automatically provisioned infrastructure, such as ACI and Fargate, are ideal platforms for steps 1 through 3 -- and, possibly, steps 4 through 6 for applications with light workloads or that are infrequently used. Larger workloads or those with highly variable capacity demands that require distributed autoscaling should use a Kubernetes cluster for steps 4 through 6.

Although AWS doesn't position it this way, AWS Fargate is an alternative execution environment, in the same way Kubernetes and ECS provide a choice in container orchestration software.
