What is Amazon EKS?
Amazon Elastic Kubernetes Service (EKS) is a managed service that runs the Kubernetes container orchestration platform on the AWS cloud.
The EKS service automatically provisions, manages and scales the Kubernetes control plane across AWS infrastructure. With Amazon EKS, an enterprise can use Kubernetes without having to install, operate or manage the container orchestration software itself.
How does Amazon EKS work?
Amazon EKS operates using the familiar Kubernetes platform, including a control plane and worker nodes.
The control plane runs on master nodes distributed across three availability zones in a load-balanced, high-availability (HA) configuration. Master nodes are managed by AWS and provide all the functionality needed to run Kubernetes -- including access to the Kubernetes and EKS APIs.
Worker nodes are created by EKS users as Amazon EC2 instances. They are responsible for hosting the pods of containers that make up container-based applications. Nodes are typically organized into node groups, and several node groups can coexist within a single Kubernetes cluster.
Worker nodes communicate with the control plane through the cluster's Kubernetes API endpoint, with role-based access control (RBAC) and Amazon VPC networking restricting traffic and maintaining security between the control plane and user-run nodes.
Amazon EKS can be deployed in several ways. First, a user can operate one EKS cluster for each application. Second, one cluster can be used to support multiple applications. The latter approach requires the use of Identity and Access Management (IAM) security and Kubernetes namespaces to isolate applications within the cluster.
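As an illustration of the shared-cluster approach, each application can be given its own Kubernetes namespace using ordinary manifests. The namespace names below are placeholders, not prescribed values:

```yaml
# Hypothetical example: two namespaces isolating separate applications
# within one shared EKS cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: app-frontend
---
apiVersion: v1
kind: Namespace
metadata:
  name: app-backend
```

Applying a file like this with `kubectl apply -f`, then scoping RBAC roles and network policies to each namespace, keeps the applications isolated while they share one control plane.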
Amazon EKS components and features
A developer or administrator that uses EKS will provision the worker nodes and link them to Amazon EKS endpoints. AWS then handles all management tasks for the Kubernetes control plane, including upgrades, patches and security configurations. AWS also scales API servers and back-end persistent layers through EKS. The service integrates with multiple native Amazon services, such as Elastic Load Balancing, AWS IAM, Amazon Virtual Private Cloud (VPC), AWS PrivateLink and AWS CloudTrail.
A user must create an IAM role, VPC and security group for their clusters. A single cluster can run multiple applications. Different VPCs should be used for each cluster for improved network isolation.
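The cluster IAM role, for example, needs a trust policy that allows the EKS service to assume it. A minimal sketch of that trust policy, using the standard EKS service principal, looks like the following:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The AWS managed policy `AmazonEKSClusterPolicy` is then attached to the role so the control plane can manage cluster resources on the user's behalf.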
Kubernetes uses pods, or groups of one or more containers, as its basic unit for deploying and scaling workloads. Amazon EKS automatically replicates the control plane, including the master schedulers, across three availability zones in each AWS region for higher availability. It also scans for unhealthy control plane nodes and automatically replaces them.
Amazon EKS relies on many open source tools, including Kubernetes and Docker. This means that a user can move pods to non-AWS environments without application code changes.
Amazon EKS provides a broad array of features that focus on several important operational areas including the following:
- Cluster management. EKS offers a managed control plane, managed node groups, a hosted Kubernetes console, extensive AWS service integrations and support for a large library of Kubernetes add-ons.
- Network management. EKS handles networking and security through support for IPv6, service discovery, IAM authentication and compliance with an array of regulatory requirements (such as SOC, PCI, IRAP, HITRUST and others).
- Load balancing. EKS supports load balancing through the Application Load Balancer, Network Load Balancer and Classic Load Balancer.
- Serverless computing. EKS allows for serverless computing through AWS Fargate.
- Logging. EKS uses AWS CloudTrail and Amazon CloudWatch for logging and analysis of the EKS environment.
- Updating. EKS supports in-place updates to the latest Kubernetes version without significant disruption to existing clusters or applications. Users can select the desired Kubernetes version through the AWS SDK, AWS CLI or AWS Management Console.
- Eksctl. The eksctl tool can simplify the creation and control of EKS clusters.
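As a sketch of how eksctl is typically used, a cluster can be described declaratively in a configuration file. The cluster name, region and node group sizing below are illustrative placeholders:

```yaml
# Minimal illustrative eksctl cluster definition.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
managedNodeGroups:
  - name: workers          # a single managed node group of EC2 workers
    instanceType: m5.large
    desiredCapacity: 3
```

Running `eksctl create cluster -f cluster.yaml` against a file like this creates the VPC, IAM roles, control plane and node group in one step.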
EKS vs. ECS and others
Amazon EKS is an offshoot of Amazon Elastic Container Service (ECS), originally called EC2 Container Service, which was one of the first managed container services. Amazon ECS has a proprietary orchestration layer that, compared with EKS, makes it easier to integrate with other AWS offerings. But it is more difficult to move containerized applications off Amazon's cloud.
Amazon EKS provides AWS Fargate support, which completely offloads management of the underlying compute to AWS. As a result, a developer has to tend only to the containers themselves; there is no need to provision, scale or patch any servers. EKS also operates through AWS Outposts, though in that configuration the Kubernetes control plane runs in the AWS cloud while the worker nodes run on the local Outposts hardware.
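As an illustration of the Fargate option, an eksctl cluster definition can declare a Fargate profile so that pods in a selected namespace run without any user-managed EC2 instances. The profile and namespace names here are placeholders:

```yaml
# Illustrative fragment of an eksctl ClusterConfig: pods scheduled into the
# matched namespace run on AWS Fargate rather than on EC2 worker nodes.
fargateProfiles:
  - name: fp-serverless
    selectors:
      - namespace: serverless-apps
```

EKS then launches each matching pod on Fargate-managed capacity, billed by the vCPU and memory the pod consumes.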
Amazon EKS joins several other Kubernetes-based container services already on the market, including Microsoft Azure Kubernetes Service, Google Kubernetes Engine, Red Hat OpenShift, VMware Tanzu and Docker Enterprise Edition. Plugins, scripts and cluster configurations can generally be moved across these platforms because they all rely on the same upstream Kubernetes orchestration layer.
EKS offers accelerated, x86-based Amazon EKS optimized AMIs for GPU and inference instance families such as G3, G4, Inf and P. More traditional families, such as A, C, HPC, M and T, use the standard x86 and Arm optimized AMIs.
AWS charges $0.10 per hour for each Amazon EKS user-created cluster, as of July 2022. An enterprise user is also responsible for any charges for resources used by the cluster, including compute and storage. Users can opt to save cluster costs by using a single cluster to run multiple applications. However, there may be additional charges that vary depending on EKS deployment options. The following are some examples:
- If EC2 is used for worker nodes, users will pay for all EC2 instances and storage utilized by Kubernetes worker nodes. EC2 costs can be reduced through Spot Instances, Reserved Instances and Savings Plans.
- If AWS Fargate is used, costs are based on the vCPU and memory resources consumed by each pod for the duration of its execution.
- If users operate through AWS Outposts, the EKS control plane is deployed to the AWS cloud (not Outposts) and users pay the same $0.10 per hour for each EKS cluster.
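The control-plane charge alone makes the consolidation argument easy to quantify. The sketch below uses the article's July 2022 rate of $0.10 per cluster-hour and a common 730-hour month; resource costs such as EC2, Fargate and storage are additional and not modeled here:

```python
# Rough monthly estimate of the EKS control-plane charge only.
HOURLY_CLUSTER_RATE = 0.10  # USD per cluster per hour (July 2022 rate)
HOURS_PER_MONTH = 730       # approximation: 8,760 hours per year / 12

def monthly_control_plane_cost(num_clusters: int) -> float:
    """Control-plane cost in USD for the given number of EKS clusters."""
    return num_clusters * HOURLY_CLUSTER_RATE * HOURS_PER_MONTH

# Ten single-application clusters vs. one shared multi-application cluster:
print(monthly_control_plane_cost(10))  # 730.0 USD/month
print(monthly_control_plane_cost(1))   # 73.0 USD/month
```

Consolidating ten applications onto one cluster cuts the control-plane portion of the bill tenfold, which is why the shared-cluster pattern is attractive despite its extra isolation work.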
Getting started with Amazon EKS
Container management can be a complex and time-consuming task, but there are several best practices that can help to improve EKS cluster security and resilience. Some examples include the following:
- Enable cluster logging. Be sure to enable cluster logging for all EKS clusters. This will help developers and administrators diagnose problems and audit cluster behavior.
- Restrict incoming traffic. Use EKS security groups to restrict incoming traffic to specific ports (such as TCP port 443).
- Disable public access. Configure EKS cluster endpoint access to prevent public access, so the Kubernetes API endpoint is reachable only from within the VPC. This will reduce the potential for malicious activity against EKS clusters.
- Build worker nodes for reliability. Use managed nodes as part of an EKS-managed Auto Scaling Group that spans multiple availability zones to ensure a failure in one node does not fully disable an application.
- Consider your options. Consider using Karpenter rather than Amazon EC2 Auto Scaling groups or the Kubernetes Cluster Autoscaler to dynamically adjust cluster compute capacity.
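Several of these practices can be expressed directly in an eksctl cluster definition. The fragment below is an illustrative sketch -- the name, region, sizes and zones are placeholders -- combining a private API endpoint, full control plane logging and a managed node group spread across three availability zones:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: resilient-cluster
  region: us-east-1
vpc:
  clusterEndpoints:
    publicAccess: false    # disable public access to the API endpoint
    privateAccess: true    # reach the endpoint only from within the VPC
cloudWatch:
  clusterLogging:
    # Ship all five control plane log types to CloudWatch Logs.
    enableTypes: ["api", "audit", "authenticator", "controllerManager", "scheduler"]
managedNodeGroups:
  - name: workers
    instanceType: m5.large
    minSize: 3
    desiredCapacity: 3
    maxSize: 6
    # Spread workers so one zone failure cannot take down the application.
    availabilityZones: ["us-east-1a", "us-east-1b", "us-east-1c"]
```

Capturing these settings in configuration rather than console clicks also makes the hardening repeatable across clusters.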