Organizations reap the benefits of container management systems only if they select the right product, or products, for their deployment. Vendor evaluation for a business's container infrastructure can be a complex process, as IT buyers must consider many layers to container management technology.
Virtualized hosting is a dynamic space. Orchestration is perhaps even more dynamic, and a fusion of containers and orchestration is the most dynamic of all. Products change rapidly, and the market has moved away from simple container deployment models toward a middle ground of complexity and capability that combines Docker and Kubernetes. At the leading edge is the Kubernetes ecosystem, which includes networking, monitoring and management, and service discovery and connectivity.
Container commitments don't shrink; they grow in response to new technology capabilities and business requirements. Since it's wise to plan for the future, enterprises should focus their attention at that leading edge.
While reading these in-depth product descriptions, IT buyers should consider that evolution to an ecosystem. Changes in container technology and capabilities, and company recognition of container potential, mean an organization will likely need more capabilities than expected to run container deployments in scalable production environments. Once the business has a short list of container management software, it's time to explore vendors' online documentation and get details on licensing terms and conditions.
Container management systems can be divided into three primary categories. For companies that deploy containers on a public cloud, there are managed Kubernetes services from cloud providers. Primary options include Amazon Elastic Container Service, Amazon Elastic Container Service for Kubernetes, Microsoft Azure Kubernetes Service and Google Kubernetes Engine.
Alternatively, cloud customers can host Kubernetes tools of their choosing on IaaS instances. Many organizations seek Kubernetes ecosystem tools: a pre-integrated suite of container management capabilities updated by the vendor as Kubernetes matures. Ecosystems -- including Red Hat OpenShift, VMware Enterprise PKS, Cloud Foundry, IBM Kabanero and Google Anthos -- are common toolkits for an enterprise to build applications on containers and from microservices, on a CI/CD pipeline. These selections also work with cloud containers, preventing lock-in with one cloud provider's Kubernetes service.
Finally, certain container adopters should consider complete freedom from vendor lock-in, using open source Kubernetes. Small and medium-sized businesses, as well as enterprises with broad and complex container needs, may want to pick and customize foundation toolkits, and supplement them with additional offerings from their preferred sources. These container management toolkits could incorporate the diverse offerings of VMware Pivotal.
Container platform comparison tool
Editor's note: Evaluator Group, an IT analyst firm based in Boulder, Colo., recently released its 2022 EvaluScale Insights for Container Management Systems. This interactive product comparison tool aims to help users select a container management system, based on their unique business and technology requirements. Analysis is based on vendor interviews, reviews of user/administration guides and hands-on testing or client engagements. The first iteration of the EvaluScale Insights tool includes comparisons for container management services and container management platforms.
Amazon ECS, EKS: Container management through AWS
Amazon is the leader in public cloud computing, so any business that plans to use containerized applications should consider Amazon's container hosting capabilities, even if the organization doesn't currently use the public cloud or Amazon's services.
Amazon has two container options -- Elastic Container Service (ECS) and Elastic Container Service for Kubernetes (EKS) -- and both enable organizations to deploy containers with their choice of software on Elastic Compute Cloud (EC2) instances. For now, only ECS can use Amazon's Fargate technology, which enables container systems to work on virtual machines (VMs) without the user having to manage the underlying EC2 instances. Fargate simplifies container deployments on EC2, a task that many users find problematic compared with the use of containers on other public clouds.
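To make the Fargate model concrete, the sketch below builds the request parameters for launching a containerized task on ECS with the Fargate launch type, using the boto3 SDK's `run_task` call. The cluster name, task definition and subnet ID are hypothetical placeholders, and the actual API call is shown commented out because it requires AWS credentials:

```python
# Sketch: launching a containerized task on ECS with the Fargate launch type,
# so AWS manages the underlying capacity instead of the user managing EC2
# instances. Cluster, task definition and subnet IDs are hypothetical.
run_task_params = {
    "cluster": "demo-cluster",              # hypothetical cluster name
    "taskDefinition": "web-app:1",          # hypothetical task definition
    "count": 2,
    "launchType": "FARGATE",                # no EC2 instances to manage
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111"], # hypothetical subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
}

# With AWS credentials configured, the call would be:
#   import boto3
#   ecs = boto3.client("ecs")
#   response = ecs.run_task(**run_task_params)
```

The same task definition launched with `"launchType": "EC2"` would instead require the user to provision and manage the backing EC2 instances, which is the operational difference the article describes.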
Navigating the options here can be complicated, but for SMBs, Amazon EKS should be the default choice. Enterprises may want EKS as well, because it requires less on-site management. Enterprises that don't want a managed container service in the cloud, and will use Kubernetes on premises for orchestration, should focus on ECS on Fargate. Be cautious when looking at any Amazon container service simply to facilitate integration of other AWS features, as these might create lock-in and limit application portability between data center container hosts and the cloud. Amazon's container approach isn't open source itself, so verify that its current Kubernetes component is fully up to date.
Amazon provides specific guidance to integrate EKS with the Istio service mesh, so that an Istio framework for extensive microservice deployment and connection works on AWS as well as on other cloud providers and in a customer's data centers. An organization can therefore base its multi-cloud microservice commitment on Istio.
For hybrid cloud users, particularly those who want to keep cloud provider options open or treat public and private cloud(s) as a unified resource pool, EKS is the best strategy because it offers greater portability. VMware users should be aware that Amazon has a non-exclusive relationship with VMware that facilitates hybrid cloud integration. See the VMware Kubernetes ecosystem description below for more details.
Amazon ECS offers two charge models:
- Fargate: Organizations pay for the amount of virtual CPU and memory resources that containerized applications request on a per-second basis, with a one-minute minimum. Specific pricing varies by region.
- EC2: Pay for AWS resources created to store and run applications in EC2. After a free trial, there are four ways to pay:
- On-demand: This method is best for users who do not want to be tied down in a long-term commitment and want lower costs and flexibility. The organization pays for compute capacity per hour or per second, depending on which instances it runs.
- Reserved instances: This is a good option for users that need to reserve a specific capacity and have apps with steady-state usage. It provides up to a 75% discount compared with on-demand pricing. Reserved instances require a one- to three-year commitment.
- Spot instances: This is a good choice for users who have flexibility in their start and end times and who need large amounts of capacity quickly and at a lower price. Users can request spare computing capacity for up to 90% off the on-demand price.
- Dedicated hosts: The physical EC2 server can also accommodate the user's existing server-bound software license to reduce overall costs. This can be purchased on demand or as a reservation for up to 70% off the on-demand price.
Amazon EKS pricing is $0.20 per hour for each cluster created, plus any AWS resources created to run Kubernetes worker nodes. EKS also can run on Fargate, and on EC2, per pricing terms above.
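The pricing mechanics above can be sketched as a small cost estimator. The EKS cluster fee of $0.20 per hour and the one-minute Fargate billing minimum come from the figures cited above; the Fargate per-vCPU and per-GB rates below are illustrative assumptions only, since actual rates vary by region:

```python
# Rough cost sketch for the Amazon container pricing models described above.
# Fargate vCPU/memory rates are illustrative assumptions; real rates vary
# by region. The EKS cluster fee matches the figure cited in the article.

FARGATE_VCPU_PER_HOUR = 0.04   # assumed illustrative rate, USD
FARGATE_GB_PER_HOUR = 0.004    # assumed illustrative rate, USD
EKS_CLUSTER_PER_HOUR = 0.20    # per cluster created

def fargate_task_cost(vcpus, memory_gb, seconds):
    """Per-second billing with a one-minute minimum, per the Fargate model."""
    billable = max(seconds, 60)          # one-minute minimum charge
    hours = billable / 3600
    return (vcpus * FARGATE_VCPU_PER_HOUR
            + memory_gb * FARGATE_GB_PER_HOUR) * hours

def eks_cluster_fee(hours):
    """Flat EKS control-plane charge; worker node costs bill separately."""
    return EKS_CLUSTER_PER_HOUR * hours

# A 30-second task is still billed for the 60-second minimum:
assert fargate_task_cost(1, 2, 30) == fargate_task_cost(1, 2, 60)
```

The point of the sketch is the billing shape, not the rates: Fargate charges track requested resources per second, while the EKS fee accrues per cluster regardless of workload.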
Microsoft shifts to Azure Kubernetes Service, Azure Service Fabric
Microsoft originally offered two container-based options for application hosting in the cloud, for organizations to choose one based on the selection flow chart described earlier in this guide. For typical container use, meaning a Kubernetes-based approach, Azure Kubernetes Service (AKS) is a good option. For users interested in only basic container hosting in the public cloud, the other option was Azure Container Service (ACS). However, Microsoft will retire ACS support in January 2020, so users should migrate to AKS or plan to support their own ACS clusters.
Microsoft Azure is popular with SMBs that adopt Microsoft's Windows Server technology and the vendor's broad range of business applications. AKS is generally compatible with all those applications, but companies should check with a Microsoft representative to confirm specific compatibility.
Microsoft offers Azure Service Fabric, a distributed systems platform, for companies that use containers to support microservices-based deployment. Service Fabric works well for hybrid clouds, but it's a Microsoft-specific offering and thus isn't as open as service mesh products such as Istio or Linkerd. Still, it's a good choice for hybrid cloud users who plan to use Azure alone and need service mesh-style connectivity.
Like the container tools from Google and Amazon, Microsoft's container support integrates with other web services to support development of mobile apps, web front ends and other components. Azure Service Fabric as a component orchestration tool is particularly well-regarded by enterprises, and is a big asset for Microsoft's container management offering.
AKS is a free container service; users pay only for the VMs, associated storage and networking resources consumed. The AKS website includes a pricing calculator for the service.
Google Kubernetes Engine: Kubernetes geared toward hybrid cloud
Google invented Kubernetes, and many enterprises find Google's cloud services are highly competitive with both Amazon and Microsoft. One area of emphasis is to provide stateful resources to containers, which can be a challenge in other public clouds. As more enterprises adopt hybrid cloud container deployments, Google has augmented and positioned its container services to align with this opportunity.
Google Kubernetes Engine (GKE) is a managed Kubernetes service in which Google monitors the clusters' health. GKE appeals to container users who want public cloud support without managing the details, and it offers capabilities similar to Microsoft's AKS or Amazon's Fargate.
Google fully aligns GKE with the latest Kubernetes features, which is important for users who want a cloud-based Kubernetes service but also want to use Kubernetes on premises. In fact, for those who think of container requirements more in terms of Kubernetes clusters than Docker, Google might offer the best experience overall. SMBs that aren't locked into a cloud provider and want the easiest possible Kubernetes experience might find Google's implementation the best choice for their organization. Google's Anthos container ecosystem, detailed below, is especially promising for organizations that plan to use microservices on a large scale.
GKE uses Google Compute Engine (GCE) instances to run nodes in the cluster, and organizations are billed per instance according to GCE's pricing, until the nodes are deleted. Billing for Compute Engine resources is on a per-second basis, with one-minute minimum usage cost.
Google Anthos: Multilayer container management package
Google's most recent container offering is Anthos, the broadest Kubernetes ecosystem offering from any public cloud provider -- and a bold attempt to broaden Google's appeal to enterprise cloud computing prospects.
Anthos, strictly speaking, is a Kubernetes ecosystem that federates or combines Kubernetes clusters operating in any cloud or data center and unites them to create a common resource pool and provide deployment, redeployment, scaling and load-balancing based on a common set of policies across all hosting options. Google Anthos combines basic containers, Kubernetes orchestration, the Istio service mesh and Knative serverless computing, plus monitoring and management systems, and supports all public clouds and hybrid cloud configurations. For microservice applications, Anthos is perhaps the most comprehensive product available, but customers can duplicate its capabilities with the addition of a separate service mesh or cloud-native tools to other cloud-based managed Kubernetes services.
For a company with the goal of container management in the fullest sense, Anthos is a multilayer package that includes application operations, security, platform operations and infrastructure. This centralizes cluster management for Kubernetes, so users can control hybrid and multi-cloud deployments more efficiently, and retain compatibility with Kubernetes-based cloud hosting.
Anthos pricing varies widely across instance types and regions. Anthos is available as a monthly subscription at a list price of $10,000 per month per 100 virtual CPUs (vCPUs), with a minimum one-year commitment. It is sold in blocks of 100 vCPUs of schedulable compute capacity, independent of the underlying infrastructure. The subscription fee covers "included usage" for several components that are priced independently but considered part of Anthos. Currently, for each 100 vCPU block, these include the following:
- Stackdriver Logging: 5,000 gibibytes per month
- Stackdriver Monitoring: 375 mebibytes per month
- Stackdriver Trace: 50 million spans per month
- Traffic Director: 300 endpoints (concurrent)
A Google Anthos monthly subscription does not include support, which Google requires. The company recommends enterprise-level support, which is $15,000 per month or a percentage of a company's total spend, whichever is greater.
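The Anthos figures above combine into a simple monthly estimate: subscription blocks round up to the next 100 vCPUs, and the required support charge is the greater of $15,000 or a percentage of spend. The percentage itself is not published in the figures above, so the 10% rate below is purely an illustrative assumption:

```python
import math

# Anthos monthly cost sketch, using the list prices cited above.
# SUPPORT_PCT is an illustrative assumption; Google's actual
# percentage-of-spend support rate is not stated in this guide.
ANTHOS_BLOCK_VCPUS = 100
ANTHOS_BLOCK_MONTHLY = 10_000    # USD list price per 100-vCPU block
SUPPORT_FLOOR_MONTHLY = 15_000   # enterprise support minimum
SUPPORT_PCT = 0.10               # assumed illustrative rate

def anthos_monthly_cost(vcpus):
    """Subscription sold in 100-vCPU blocks, plus required support."""
    blocks = math.ceil(vcpus / ANTHOS_BLOCK_VCPUS)
    subscription = blocks * ANTHOS_BLOCK_MONTHLY
    support = max(SUPPORT_FLOOR_MONTHLY, SUPPORT_PCT * subscription)
    return subscription + support

# 250 vCPUs round up to three blocks: $30,000 subscription + $15,000 support
assert anthos_monthly_cost(250) == 45_000
```

Note how the support floor dominates at small scale: deployments under roughly 1,500 vCPUs (at the assumed rate) pay the flat $15,000 regardless of subscription size.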
IBM Kabanero: Container management with CI/CD emphasis
Many organizations see a future in which applications are specifically designed to run in the cloud, not "migrated" to the cloud. With that in mind, IBM's Kubernetes ecosystem offering, Kabanero, aims to facilitate cloud-native development and deployment. In practical terms, IBM Kabanero supports microservice-based applications and CI/CD practices. Kabanero also supports competitive public cloud deployments of Kubernetes and container components.
Kabanero's distinction within the Kubernetes ecosystem is its CI/CD support, via Tekton, and developer support via CodeWind, but that doesn't mean it lacks the other elements of an ecosystem. Kabanero includes Istio and Knative, as well as some new open source projects to manage container images and the deployment and operations tasks associated with CI/CD.
Early users of Kabanero report that it simplifies container and microservice development and deployment, but many of the key elements of Kabanero, though open source, are still primarily IBM projects. It's also unclear how Kabanero and OpenShift will combine now that IBM has officially acquired Red Hat.
Kabanero is a 100% open source project.
Red Hat OpenShift: Container management in data centers or cloud
Red Hat is a well-known provider of open source tools, particularly Linux software, standardized into distributions that are all seamlessly integrated with each other and fully supported. Businesses large and small use Red Hat products, and those with data centers based on Red Hat Enterprise Linux should seriously consider Red Hat OpenShift, a suite based on Docker and Kubernetes. Its level of integration with the rest of Red Hat's products, especially tools for software project control, makes OpenShift unusually trouble-free for organizations that don't have a lot of Linux or container expertise. It also helps enterprises that develop for and in a Red Hat Linux environment. SMBs might find Red Hat and OpenShift a single-stop data center and cloud shop.
What makes OpenShift a Kubernetes ecosystem is that its latest Version 4 release is built around Red Hat's Kubernetes Operator framework: a custom controller that monitors and manages the state of Kubernetes pods, services and other resources. If something falls into an abnormal state, the Operator can remediate it to restore normal operations. Version 4 refocuses on the complete application lifecycle that's so important to enterprises in general, and CI/CD in particular.
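The Operator behavior described above reduces to a reconcile loop: compare each resource's observed state against its desired state and act on any drift. Below is a minimal, cluster-free Python sketch of that control loop; the resource names and states are hypothetical, not OpenShift's actual API:

```python
# Minimal sketch of the reconcile loop at the heart of the Operator pattern:
# observe each managed resource, compare it to the desired state, and
# schedule remediation for any drift. Names and states are hypothetical.

desired_state = {"web": "Running", "db": "Running"}

def reconcile(observed_state):
    """Return the remediation actions needed to restore the desired state."""
    actions = []
    for name, desired in desired_state.items():
        observed = observed_state.get(name, "Missing")
        if observed != desired:
            actions.append(f"restart {name} ({observed} -> {desired})")
    return actions

# The db pod has crashed, so the Operator schedules a remediation:
assert reconcile({"web": "Running", "db": "CrashLoopBackOff"}) == \
    ["restart db (CrashLoopBackOff -> Running)"]
```

A real Operator runs this loop continuously against the Kubernetes API and encodes application-specific knowledge (upgrade order, backup steps) in the remediation logic, which is what distinguishes it from generic controllers.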
OpenShift is available as OpenShift Container Platform for data center deployment, and in cloud-centric versions: OpenShift Dedicated for AWS and Google, and Azure Red Hat OpenShift for Microsoft Azure. The cloud-centric versions of OpenShift create hybrid cloud platforms in conjunction with the popular public cloud providers. Red Hat manages the OpenShift Dedicated service on AWS and Google, and co-manages the Azure version with Microsoft. In both cases, the cloud service integrates with on-premises OpenShift deployments through a virtual private cloud (VPC).
Red Hat does not include either a service mesh or serverless computing in its Kubernetes ecosystem. Enterprises that have, or plan, a major microservice commitment and use Red Hat at the server or on-premises container level should probably explore a combination of IBM Kabanero and OpenShift Container Platform, rather than OpenShift alone.
OpenShift offers three plans:
- Azure Red Hat OpenShift provides fully managed Red Hat OpenShift clusters on Microsoft Azure. Both Microsoft and Red Hat engineer, operate and support the service. A highly available, fully managed cluster, starting with four application nodes, starts at $16,000 per year. On-demand scaling with additional application nodes starts at $0.761 an hour. Prices do not include Azure Compute costs.
- OpenShift Dedicated is managed Kubernetes in a virtual private cloud on AWS. (An option for Google Cloud Platform was discontinued and will be reconfigured to accommodate OpenShift 4, according to Red Hat.) Users can deploy in cloud provider accounts owned by Red Hat: a single-availability cluster starts at $36,000 a year and a multiple-availability cluster starts at $81,000 a year. Users who bring their own cloud accounts, with existing cloud provider discounts and settings, pay less: single-availability clusters start at $16,000 per year, and multiple-availability clusters at $36,000 per year.
- With the OpenShift Container Platform, pricing is per-core, with premium or standard service-level agreement options, on any infrastructure, and is the same whether deployed on premises or in the public cloud. Specific pricing is based on deployment size and other factors.
VMware Enterprise PKS, a multi-cloud container management option
VMware has a fairly complicated set of container products, largely because multiple use cases and market missions drive the company's strategies. Organizations should think of its offerings in either cloud or vSphere areas. The former focuses on public cloud deployments through VMware Enterprise PKS. The latter, covered in the next section, adds container capabilities to vSphere hosting, which has evolved as a VM-centric data center model.
VMware Enterprise PKS basically expands on the Pivotal Container Service (PKS, described later in this roundup) with enhanced security and scalability features, and global support. It has evolved more because of competition with Red Hat than with other Kubernetes ecosystem vendors. Thus, its features match more closely with the cloud-centric OpenShift offerings than they do with the more development-centric ecosystem packages from Google or IBM. In addition, VMware has established a close, but not exclusive, relationship with Amazon, because Amazon lacks a direct presence in data centers and the private cloud space.
Enterprise PKS is primarily a multi-cloud option to integrate public cloud container deployments with vSphere data center deployments. For current VMware vSphere users, it's a strong product and a logical strategy, and Enterprise PKS also integrates with other VMware offerings, such as the Harbor container registry. It does not currently provide service mesh or serverless features, but customers could obtain these through integration with other services.
One of the strongest VMware tools provided with Enterprise PKS is VMware's virtual network or SDN offering, NSX. Kubernetes has an integration hook to introduce virtual networking, and often includes several basic virtual network tools, but NSX is a much broader virtual network capability with significant benefits in scale and performance. In addition, VMware is integrating SD-WAN capability, via its VeloCloud acquisition, into NSX, which would deliver a complete data-center-to-cloud scope. NSX integration is most important for enterprises with a large number of data centers and plans to offer container hosting.
VMware's entire container approach is likely to evolve given the recent and rapid changes in the Kubernetes ecosystem space -- including VMware's acquisition of Pivotal. Check with the company for the latest product and pricing information.
Cloud Foundry: Container support through runtime platforms
In many ways, Cloud Foundry (CF) is a different sort of Kubernetes ecosystem. It could be seen as an adjunct to most of the other Kubernetes ecosystems cited in this guide, and many of the companies that offer Kubernetes ecosystems are Cloud Foundry partners.
Cloud Foundry is less Kubernetes-centric or even container-centric than it is an integrated set of runtime platforms based on open source tools. The Cloud Foundry Container Runtime provides the integrated Kubernetes container support that's the primary subject of this guide. The Cloud Foundry Application Runtime provides language- and cloud-independent application development and deployment; it supports containers, but without the granularity and ecosystemic control of Kubernetes.
The CF Application Runtime is an independent container ecosystem, based on a limited form of orchestration. In terms of Kubernetes ecosystem completeness, Cloud Foundry builds on its own runtimes via the Open Service Broker API and the Foundry service integration library. Through this, Cloud Foundry integrates with service mesh technologies such as Istio or serverless deployment tools such as Knative.
The Application Runtime is suitable for much of today's container use and is compatible with both data center and multi-cloud environments. Companies that want a cloud and container transformation and have done little to date should consider Cloud Foundry as an element in their container future. The Application Runtime is more developer-centric, so other tools are better for complex container applications.
The CF Container Runtime offers its own benefits as a Kubernetes ecosystem. It includes CF BOSH, a fairly comprehensive lifecycle management tool to handle deployment and redeployment, scaling, VM healing, rolling upgrades, etc., all through Kubernetes. The full Application Runtime support is also available with the container runtime, so the pair is a strong choice for users as they start out with container development.
Some users may have concerns with the Cloud Foundry approach, even with the Container Runtime, because the Cloud Foundry ecosystem's long-term direction is unclear. So far, it's possible to integrate both service mesh and serverless capabilities, for example, but how far this will be taken is an open question.
While Cloud Foundry is an open source foundation, it's also part of the Pivotal product line, positioned as a multi-cloud platform, and many users acquire the product in that form. VMware's acquisition of Pivotal could bring significant changes to Cloud Foundry. On the one hand, it could tighten the relationship among Cloud Foundry, the Pivotal Container Service and VMware's Kubernetes ecosystems, perhaps to supplement or replace the CF Container Runtime. It's also possible that VMware will continue to emphasize Kubernetes, with less stress on Cloud Foundry. Prospective users should monitor the VMware relationship and factor in any changes in direction before they make a decision on Cloud Foundry.
VMware shuffles deck with Pivotal Container Service, vSphere Integrated Containers
VMware, the foundation of the whole virtualization movement, is better known for its VM technology than for containers. Its initial container strategy was Photon, a Linux platform optimized for container hosting, and Photon is still an open source project. However, a partnership with Pivotal and Google became VMware's container focus.
Pivotal Container Service (PKS), like so many container suites available today, is built around Kubernetes. PKS is built in a partnership with Pivotal, independent of Pivotal's Cloud Foundry software. VMware's acquisition of Pivotal almost certainly will change VMware's position in container management to focus more on PKS elements, though the nomenclature may change.
PKS also runs under vSphere, and like vSphere Integrated Containers (VIC), it has deep connections into the vSphere/software-defined data center (SDDC) framework to facilitate management of mixed environments. Its strongest point may be its deep integration with NSX-T, VMware's software-defined networking architecture, which facilitates configuration and reconfiguration of complex applications.
To further confuse things, VMware also offers VIC, which is an Integrated Container Engine that's largely Docker-compatible at the UI and API levels. VIC's deep integration with vSphere makes containers a more natural part of the VMware world, which is defined by VMware's SDDC framework. SDDC provides a level of infrastructure abstraction and enables both container and VM hosting on the same infrastructure.
Selection of container management tools among the VMware options can be complicated. Generally, view VIC as a container augmentation to vSphere, primarily for data center application migration. Most enterprises should select PKS as the mainstream approach, well-suited for hybrid and multi-cloud or microservice deployment. VMware also offers Pivotal Cloud Foundry, which is often recommended for cloud-native development and extensive microservice use.
The VIC software bundle includes the basic software (the Integrated Container Engine) as well as VMware's open source projects -- Harbor for container registry support and Admiral, a scalable container management tool. PKS includes Harbor, but not Admiral.
Pivotal sells PKS as a subscription license, typically for one- or three-year terms, in two models:
- Core-based: Based on the number of cores deployed for PKS.
- Pod-based: Based on the number of pods run on PKS-created Kubernetes clusters.
VIC is available to all vSphere 6.0 and above Enterprise Plus customers, and requires no separate license subscription.
Kubernetes: Open source container orchestration, bundled or stand-alone
First developed by Google, Kubernetes is an open source orchestration tool, now under the Cloud Native Computing Foundation, that makes it easier to manage application lifecycles for containerized applications. It has exploded in popularity, to the point where it has eclipsed Docker as the most recognized software for containers. However, organizations still require container hosting (usually Docker, more rarely CoreOS rkt) to use Kubernetes. Because Kubernetes is usually bundled with basic container hosting software, it is often best to obtain Kubernetes and the container software together from one source.
Kubernetes offers strong support for clusters of hosting points, an abstraction that helps companies reliably deploy complex, multicomponent applications. The Kubernetes project provides an online guide to find the best source for the Kubernetes runtime. That utility explains why Kubernetes is included in the great majority of container management software bundles available, as well as in proprietary, closed source offerings from some suppliers, including Amazon.
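The multicomponent deployments mentioned above are expressed declaratively in Kubernetes: a Deployment object tells the cluster how many replicas of a workload to keep running, and the orchestrator maintains that count. The sketch below builds a minimal Deployment manifest as a Python dict; the app name and image are hypothetical placeholders, while the `apiVersion`/`kind`/`spec` structure follows the standard Kubernetes API:

```python
# Minimal Kubernetes Deployment manifest, expressed as a Python dict
# (equivalent to the YAML typically fed to kubectl). The app name and
# container image are hypothetical; the structure is standard Kubernetes.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web-app"},
    "spec": {
        "replicas": 3,  # Kubernetes keeps three copies running in the cluster
        "selector": {"matchLabels": {"app": "web-app"}},
        "template": {
            "metadata": {"labels": {"app": "web-app"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "example/web-app:1.0"}
                ]
            },
        },
    },
}
```

Every managed Kubernetes service and ecosystem discussed in this guide ultimately consumes manifests of this shape, which is why workloads described this way remain portable across EKS, AKS, GKE and on-premises clusters.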
If container plans suggest an eventual need for Kubernetes orchestration, it would still be smart to get both Kubernetes and basic container software from one source, as a bundle. To orchestrate hybrid or public cloud containers, consider getting Kubernetes and container hosting from, or with the advice of, the selected public cloud provider(s). Unless experts in open source software are on staff, avoid compiling Kubernetes directly from source code. If that task is undertaken, be sure to compile the container hosting choice, too, in the same environment.
Kubernetes itself is open source, but is the basis of many paid container management software packages.
Editor's note: With extensive research into container management software, TechTarget editors focused this series of articles on vendors that provided the following functionalities: orchestration, container networking and hybrid cloud portability. We've featured vendors that either offer leading-edge, unique technology or hold significant market share or interest from enterprises. Our research included Gartner and TechTarget surveys.