
How to properly prep a container infrastructure

To deploy containers in production, an organization must put in place the appropriate infrastructure. Here are the specific steps to take to begin that process.

Seasoned IT pros are painfully familiar with the scenario: Upper management becomes enamored with a new, possibly over-hyped technology and makes a top-down decision that the organization needs an implementation strategy -- stat. The latest of these emergency drills is for container infrastructure.

For some, the edict comes down from the top: "We need to start using containers, so do whatever it takes and lay the groundwork." For others, the push comes from developers telling IT, "Hey, we're building all our apps and development processes around containers, so give us something to run them on. In the meantime, we're using AWS instances."

Whatever the source, IT teams face the daunting task of developing a container strategy that includes many elements, from the selection of a technology and services platform; to integration with operational processes, management systems and networks; to the design of security controls.

So let's clear the fog. IT teams need to understand the various components of a container ecosystem, along with important operational factors and organizational-governance considerations when building a production-worthy enterprise container platform. Consider this a primer to channel your thinking and direct future research into the design and implementation details.

Step 1: Plan the container ecosystem and design

The first step in building anything, whether a skyscraper or IT infrastructure, is to understand the scope: the components entailed and the users' requirements. For container infrastructure, that means knowing the application characteristics, use cases and developer needs.

Also relevant will be the relationships to existing infrastructure, namely the bare-metal systems, VM platforms and software environments (OS, other systems) an organization already has in place. Proper planning also requires an understanding of the various pieces of a container ecosystem, since the application runtime environment for which it is named is only one small part of a production container platform.

[Figure: Comparison of virtual machines and containers]

In particular, a container system's design and implementation strategy should cover the following areas:

Underlying hardware infrastructure. Perhaps the first decision IT must make is to choose the hardware environment for running containers and whether it will be on premises, on one or more cloud services, or both. Each of these compels several other decisions, including the following:

  • whether to host containers on bare-metal systems or VMs;
  • the choice of an OS, such as CoreOS, Ubuntu Core or VMware Photon; a VM platform, such as vSphere Integrated Containers; or Windows Server containers, with or without Hyper-V isolation; and
  • system specifications and cluster sizing for the near-term workload and expected long-term growth (see the sizing sketch after this list).
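
Cluster sizing lends itself to a quick back-of-envelope check before any detailed capacity planning. The Python sketch below is illustrative only; the node size, utilization target and growth factor are all assumptions to replace with your own figures.

```python
# Back-of-envelope sizing: how many worker nodes does a workload need?
# Every figure below is an assumption, not a recommendation.
import math

def nodes_needed(total_cpu_cores: float, total_mem_gib: float,
                 node_cpu: float = 16, node_mem_gib: float = 64,
                 utilization_target: float = 0.7,    # leave 30% headroom
                 growth_factor: float = 1.5) -> int:  # expected growth
    """Size the cluster for the larger of its CPU or memory demand."""
    cpu_nodes = total_cpu_cores * growth_factor / (node_cpu * utilization_target)
    mem_nodes = total_mem_gib * growth_factor / (node_mem_gib * utilization_target)
    return max(math.ceil(cpu_nodes), math.ceil(mem_nodes), 3)  # 3-node floor for HA

# Example: 200 containers averaging 0.5 cores and 1 GiB of memory each.
print(nodes_needed(total_cpu_cores=100, total_mem_gib=200))  # -> 14
```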

Cloud container services. IaaS platforms, particularly those from AWS, Azure and Google Cloud, are popular hosts for enterprise container workloads since each makes it simple, fast and convenient to provision the system and storage resources required for a container cluster. All three offer container cluster management and orchestration services based on Kubernetes, the de facto standard for container workload management, scheduling and cluster scaling.

Both AWS and Azure offer a form of serverless container instance in Fargate and Azure Container Instances (ACI), respectively. These provide a container runtime environment without the need to provision a VM instance. Indeed, users can provision Fargate or ACI instances as capacity in a Kubernetes cluster managed by Amazon Elastic Kubernetes Service (EKS) or Azure Kubernetes Service (AKS). Because these easily provisioned, fully managed container services eliminate so much operational overhead, the cloud is a favored platform on which to begin experimenting with containerized applications, container network and storage designs, and governance.
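
To make the serverless model concrete, here is a minimal boto3 sketch that launches a single container task on Fargate with no VM to provision. The cluster name, task definition and subnet ID are placeholders; it assumes those resources already exist and that AWS credentials are configured.

```python
# Sketch: run one container task on AWS Fargate -- no VM provisioning.
# The cluster, task definition and subnet below are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.run_task(
    cluster="demo-cluster",            # existing ECS cluster
    launchType="FARGATE",
    taskDefinition="demo-task:1",      # registered task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```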

Why application containerization?

Containers can run on bare-metal systems, cloud instances and VMs. Containerization virtualizes at the OS level, which obviates the need for each app to have its own VM: Multiple applications run on one host and share the same OS kernel. Because of this sharing, containerized applications use fewer resources, which in turn lowers an organization's IT overhead.
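
That kernel sharing is easy to observe. The sketch below, which assumes a local Docker daemon and network access to pull the public alpine image, shows two separate containers reporting the same kernel version as their host.

```python
# Sketch: containers virtualize at the OS level, so every container
# on a host reports the host's own kernel. Assumes a running local
# Docker daemon and access to pull the alpine image.
import platform
import docker

client = docker.from_env()
for label in ("first", "second"):
    output = client.containers.run("alpine", "uname -r", remove=True)
    print(f"{label} container kernel: {output.decode().strip()}")

print(f"host kernel: {platform.release()}")  # same version on Linux hosts
```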

Because they are lightweight, containers boost application performance. Other benefits include a predictable environment in which developers can create applications, and speedy scaling.

Proponents tout the ability of containers to enable portability from one platform to another, though that idea is seen by some as more theoretical than realistic.

Virtual network software (software-defined networking, L3 overlays) and container networking. Like VMs, containers share the physical network interfaces of their host system via an internal software bridge. Things get more complicated in a cluster, where an orchestrator such as Kubernetes assigns a virtual IP to each pod -- a group of containers deployed together for a particular workload. The situation is more difficult still when pods must scale across multiple server clusters. Those situations generally require packets to be routed over a software overlay network, such as those built with Cisco Application Centric Infrastructure, VMware NSX, Nuage Virtual Cloud Services or various open source alternatives.
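
The pod-level addressing is easy to inspect. The sketch below uses the official Kubernetes Python client to print each pod's orchestrator-assigned virtual IP next to the IP of its host node; it assumes a reachable cluster and a local kubeconfig.

```python
# Sketch: list each pod's virtual IP alongside its node's IP.
# Assumes kubectl-style access to a cluster via ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
          f"pod IP {pod.status.pod_ip}, node IP {pod.status.host_ip}")
```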

Storage comes into play because containers are inherently stateless. As a result, long-running applications, and those that must scale horizontally across nodes, need a way to save persistent data. As with networking, there are many ways to provide persistent storage, including Kubernetes plugin interfaces such as the Container Storage Interface (CSI) that integrate external storage systems with the orchestration stack. Container networking and persistent storage are areas of active open source and product development, and they remain some of the trickiest infrastructure problems IT must solve when building production container infrastructure.
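
As one illustration of what that integration looks like from the orchestrator's side, the sketch below creates a PersistentVolumeClaim through the Kubernetes Python client. The claim name, namespace and storage class are placeholders; the storage class must map to whatever CSI driver your storage backend provides.

```python
# Sketch: claim 10 GiB of persistent storage through Kubernetes.
# The "standard" storage class is a placeholder; a CSI driver for
# your backend must supply whatever class you name here.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "standard",
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```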

Infrastructure configuration and automation software. Many organizations programmatically automate infrastructure configuration and deployment using Ansible, Chef, Mesosphere, Puppet, Terraform or other software. [Editor's note: Mesosphere relaunched as D2iQ in November 2019.] These tools can be adapted to container infrastructure to automate cluster creation, and they can be integrated with Kubernetes so that, as pods scale out, new nodes come up with a standard configuration.
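
That automation can be scripted end to end. The sketch below drives Terraform non-interactively from Python; it assumes the terraform binary is on the PATH and that the working directory already holds a configuration (a main.tf) describing the cluster.

```python
# Sketch: drive Terraform non-interactively to stand up cluster
# infrastructure. Assumes terraform is installed and the current
# directory contains the cluster's configuration files.
import subprocess

def terraform(*args: str) -> None:
    """Run a terraform subcommand and fail loudly on error."""
    subprocess.run(["terraform", *args], check=True)

terraform("init", "-input=false")
terraform("plan", "-out=tfplan", "-input=false")
terraform("apply", "-input=false", "tfplan")
```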


Container cluster management and orchestration software. While Kubernetes has become the standard for container orchestration software, on-premises container deployments must be ready to integrate with the management stacks many organizations already have in place, including VMware vSphere and vCenter or Microsoft System Center Configuration Manager.

Storage for persistent data and images, plus container registry software. A critical part of the production container ecosystem is a container image registry. A registry should include version management, image metadata and an API to automate image retrieval and deployment. The three major cloud platforms offer registry services, and various products are available for on-premises deployments as well.
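
Automated retrieval from such a registry might look like the following sketch, which uses the Docker SDK for Python; the registry host, repository, tag and credentials are all hypothetical.

```python
# Sketch: authenticate to a private registry and pull a versioned
# image. Host, repository, tag and credentials are placeholders.
import docker

client = docker.from_env()
client.login(username="ci-bot", password="example-token",
             registry="registry.example.com")

image = client.images.pull("registry.example.com/team/app", tag="1.4.2")
print(image.id, image.tags)
```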

Step 2: Weigh operational and integration considerations

Container infrastructure requires new administrative processes that are unique to these environments. On-premises container deployments require even more operational planning, since the organization runs the underlying infrastructure itself. When building container infrastructure, IT should plan and design for the following:

  • Multi-tenancy. This is important even for privately run clusters, since they will support users and applications from multiple departments.
  • User and application isolation, authentication and resource constraints. While containers provide some level of runtime isolation, organizations with tight security requirements, or that are concerned about potential new container breakout threats, should consider VM-based containers. Authentication should come via integration with existing user directories and be enforced through role-based access controls (RBAC); see the RBAC sketch after this list.
  • Usage and event logging, monitoring and alerts. You will need to ensure that container images and orchestration software log to existing event-monitoring systems. For cloud container services, ensure that containers use services such as AWS CloudWatch and CloudTrail. Some organizations will want to integrate usage data with billing systems to allow for chargeback.
  • Backup of container images and persistent data. This can typically be handled by existing backup systems.
  • License management for third-party software running as a container. Unfortunately, as with the early days of VMs, it can be difficult to comply with licensing terms. And there's no quick-and-dirty solution to this problem since the particulars vary according to the underlying software license models.
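
For the RBAC piece, the sketch below creates a namespaced read-only role and binds it to a directory group using the Kubernetes Python client. The role name, namespace and the "dev-team" group are placeholders for whatever your directory integration provides.

```python
# Sketch: a read-only Role plus a RoleBinding tying it to a group.
# The namespace, names and "dev-team" group are placeholders.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "team-a"},
    "rules": [{"apiGroups": [""], "resources": ["pods"],
               "verbs": ["get", "list", "watch"]}],
}
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": "team-a"},
    "subjects": [{"kind": "Group", "name": "dev-team",
                  "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "Role", "name": "pod-reader",
                "apiGroup": "rbac.authorization.k8s.io"},
}
rbac.create_namespaced_role(namespace="team-a", body=role)
rbac.create_namespaced_role_binding(namespace="team-a", body=binding)
```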

Design for security. IT teams must also design for security by using lean base, or minimalist, OSes -- such as CoreOS (now owned by Red Hat), Ubuntu Core and Project Atomic. Designed to host containers, these OSes have limited attack surfaces in base images. Some container registries also support signed images using certificates that can be delivered from an enterprise key-management system or cloud service such as Amazon's Key Management Service. RBAC and access-control lists should be used to limit access to the image registry, orchestration system and log data. IT must also adapt and extend its existing network- and event-based security measures to control network threats and detect anomalies.

Step 3: Address organizational, personnel and governance issues

Containers can be a source of stress because they encourage tighter integration between IT operations and application development teams. Organizations that have adopted a DevOps structure are well prepared for such an environment; those that haven't should consider incorporating some DevOps concepts into their container management processes.

Map out application deployment. One area where IT admins and developers need to cooperate is the application deployment process and system hierarchy. This is particularly important in deciding how containers will be used in development, test, integration and beta testing. For example, developers who have automated application builds and deployments with continuous integration and delivery tools will want separate container environments for the various versions of an application and will use methodologies such as blue-green, canary or rolling deployments for application updates. Similarly, IT must plan when and how to migrate existing applications to a containerized environment and whether container clusters will initially be used only for new applications.
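
One small, concrete example of that cooperation: a rolling update can be triggered programmatically by patching a deployment's container image, as in the sketch below using the Kubernetes Python client. The deployment name, namespace, container name and image tag are placeholders.

```python
# Sketch: trigger a rolling update by patching a Deployment's image.
# Kubernetes then replaces pods incrementally, per the deployment's
# rolling-update strategy. All names below are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "app", "image": "registry.example.com/team/app:1.4.3"}
]}}}}

apps.patch_namespaced_deployment(name="app", namespace="team-a", body=patch)
```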

Broaden the skills base. To build production container infrastructure, a staff member needs to understand the technology and possess the technical breadth to work in an environment that spans servers, networks, storage and development processes. Consequently, organizations must plan for a combination of new hires, ideally with container experience, and staff training for those being reassigned to a container program.

