
Assess five ways to deploy AWS containers to production

Enterprises can choose from a range of AWS container deployment options. To make the right choice, they should first consider how much control they want over the infrastructure.

Containers have evolved from a niche developer tool to a mainstream enterprise deployment platform. And along with that evolution, IT and DevOps teams now face an embarrassment of riches in terms of options to run containerized applications on the cloud.

AWS offers more choices than most public cloud providers: a fully managed container service, container cluster orchestrators, container-ready PaaS stacks and IaaS resources on which you can define your own container environment. These AWS container deployment options let cloud architects balance simplicity and ease of use against control and customization.

AWS containers à la carte

Developers can choose from the following options for AWS containers, listed in order of increasing complexity:

  1. AWS Fargate. With this container-instances-as-a-service option, AWS manages the underlying physical and virtual infrastructure.
  2. AWS Elastic Beanstalk. A PaaS-like orchestration service that can automate the deployment and management of application infrastructure. Elastic Beanstalk supports Docker containers and container clusters.
  3. Amazon Elastic Container Service (ECS). A managed container orchestration service that uses pools of EC2 instances to create a container cluster. Amazon ECS automates container management, scaling and workload scheduling.
  4. Amazon Elastic Container Service for Kubernetes (EKS). A managed service that replaces ECS' proprietary orchestration software with open source Kubernetes.
  5. Amazon EC2 with self-managed orchestration software. A fully DIY option in which the user installs and manages a container orchestration platform of their choice on raw EC2 instances. EC2 supports Kubernetes, Docker Swarm, Apache Mesos and HashiCorp Nomad.

With Fargate and Beanstalk, AWS manages the deployment of EC2 instances needed to run a Docker-compatible container runtime -- now standardized by the Open Container Initiative -- along with cluster scaling and workload orchestration.
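As a sketch of what AWS manages on your behalf, here is what a minimal Fargate-compatible ECS task definition might look like. The family name, image and sizing values are illustrative assumptions, not details from this article; Fargate requires the awsvpc network mode and task-level CPU/memory settings.

```json
{
  "family": "demo-web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
```

With Fargate, everything below this definition -- the EC2 instances, the container runtime, the cluster scaling -- is AWS' problem, not yours.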

ECS was the first managed container orchestration service on AWS. It automates container management, scaling and workload scheduling, but it does not automate the operation or scaling of the underlying EC2 resources; users must still run, monitor, restart and update the cluster of EC2 instances that hosts their containers.

EKS works similarly to ECS, but it uses open source, cross-platform Kubernetes orchestration software to manage container clusters. EKS is similar to Azure Kubernetes Service, but unlike both the Microsoft service and ECS, EKS automatically distributes the Kubernetes masters -- schedulers and controllers -- across three availability zones (AZs) for cluster resiliency. Thus, if a master scheduler or an entire AZ goes offline, the rest of the cluster keeps running and applications don't fail.

The final option, container orchestration software that runs on top of EC2, is conceptually equivalent to an on-premises container cluster. The user deploys, configures and operates cluster nodes, management and control nodes, orchestration software and other components of a container ecosystem, such as an image registry. In this scenario, AWS simply provides the raw virtual infrastructure through the AWS Management Console or AWS Command Line Interface.

The convenience-control trade-off

When you opt for managed instances, such as Fargate, you essentially get a SaaS-like product for AWS container deployment. As with any SaaS product, you cede control over some implementation details in exchange for ease of use.

However, Fargate isn't a dead end that locks users into a limited platform. Instead, it works as a pluggable module with ECS' or EKS' more advanced orchestration software. Fargate uses the same APIs as ECS, which makes it relatively easy to migrate workloads between AWS-managed containers and user-managed container hosts. Likewise, if you want to change container control planes from ECS to EKS, Amazon says it will be possible to incorporate Fargate instances into a Kubernetes pod via EKS alongside containers hosted on EC2.
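Because Fargate reuses the ECS APIs, moving a workload between launch types largely comes down to a single parameter. A hedged sketch of a run-task request body illustrates this; the cluster name, task definition and subnet ID below are placeholders:

```json
{
  "cluster": "demo-cluster",
  "taskDefinition": "demo-web-app",
  "launchType": "FARGATE",
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": ["subnet-0abc123"],
      "assignPublicIp": "ENABLED"
    }
  }
}
```

Passed to the CLI via `aws ecs run-task --cli-input-json file://run-task.json`, a request like this targets AWS-managed Fargate capacity; changing launchType to EC2 schedules the same task onto self-managed container instances instead.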

Managed orchestration services, like ECS and EKS, eliminate much of the complexity and administrative overhead needed to install and operate a cluster manager like Kubernetes. IT professionals regularly complain about the complexity and steep learning curve of Kubernetes. While the Kubernetes project has excellent documentation for cluster installation on AWS, it's still a multistep process that doesn't integrate with the AWS Management Console. Why go to that trouble, unless you need to migrate an existing on-premises configuration to the cloud?
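By contrast, standing up a managed EKS cluster can be driven from a short declarative spec for the community eksctl tool. This is a minimal sketch; the cluster name, region and node sizing are illustrative assumptions:

```yaml
# Illustrative eksctl cluster spec, applied with:
#   eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
nodeGroups:
  - name: workers          # worker nodes only; EKS runs the masters
    instanceType: m5.large
    desiredCapacity: 3
```

The Kubernetes control plane itself never appears in the spec -- AWS provisions and operates it, which is exactly the overhead the managed service removes.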

Likewise, a technically sophisticated container shop might have the need and resources to develop and deploy a custom container manager. Netflix, for example, built its Titus container management platform, citing the need to scale massively -- on the order of 200,000 clusters per day -- and to add features while maintaining tight integration with native Amazon cloud services. But realistically, only hyperscale cloud operators and massive online services need these capabilities; for the vast majority of enterprises, a custom manager is overkill.

Which option best fits your needs?

According to Nathan Peck, a developer advocate for container services at AWS, the following pointers can help you evaluate your options for AWS containers:

  • When you choose a container control plane, ECS provides the best experience for applications that will use other AWS tools and services, such as databases, data analytics platforms or artificial intelligence.
  • EKS is the better option for workloads that might run in multiple locations, whether on premises or in another public cloud.
  • When you select a container execution environment, self-managed EC2 instances provide the most flexibility and control over sizing, configuration and cost, such as the ability to use On-Demand, Reserved or Spot Instances.
  • Fargate is the most convenient option for those who want to minimize management overhead and take the quickest path to container deployment.

Regardless of whether you use other AWS application or data services, ECS deployments generally require other AWS tools, such as load balancers, CloudWatch, AWS Identity and Access Management security controls, and private networking via Virtual Private Cloud. You can and should automate these services with CloudFormation resource templates, which you can adapt from snippets provided in AWS documentation.
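As a sketch of how those supporting services tie together, a CloudFormation template might wire an ECS service to a load balancer like this. The resource names are placeholders, and the referenced Cluster, TaskDef and WebTargetGroup resources are assumed to be defined elsewhere in the same template:

```yaml
# Fragment of a CloudFormation template: an ECS service attached to a
# load balancer target group. Cluster, TaskDef and WebTargetGroup are
# placeholder logical IDs for resources declared elsewhere.
Resources:
  WebService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster
      TaskDefinition: !Ref TaskDef
      DesiredCount: 2
      LoadBalancers:
        - ContainerName: web
          ContainerPort: 80
          TargetGroupArn: !Ref WebTargetGroup
```

Declaring the load balancer, IAM roles and VPC networking in the same template keeps the whole deployment reproducible, which is why templating these services beats wiring them up by hand in the console.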

It's hard to make a case for the DIY approach to container management on AWS unless you have existing on-premises infrastructure, the expertise to manage it and an investment in another product, like Docker Enterprise. Even experienced Kubernetes users will likely find it easier to let AWS handle the orchestration layer and port existing configurations and customizations over to EKS -- a fairly simple project, since EKS is based on open source Kubernetes code.

There's also a strong business case to containerize applications that run on stand-alone instances, and a managed service like ECS significantly reduces the learning curve. One company, Mapbox, switched from stand-alone EC2 instances to ECS-managed containers and cut its AWS bill in half. It doubled resource utilization, gained the ability to use EC2 Spot Fleets for some workloads and centralized EC2 cost management under a container umbrella.
