What are containers (container-based virtualization or containerization)?
Containers are a type of software that packages and isolates applications for deployment. Containers share access to an operating system (OS) kernel, without the need for the separate virtual machines (VMs) traditionally used to isolate workloads.
Container technology has roots in partitioning, dating back to the 1960s, and chroot process isolation developed as part of Unix in the 1970s. Its modern form is expressed in application containerization, such as Docker, and system containerization, such as LXC (Linux Containers). Both of these container styles enable an IT team to abstract application code from the underlying infrastructure, simplifying version management and enabling portability across various deployment environments.
Container images include everything that executes at runtime on the OS, via a container engine. A containerized application can be composed of several container images. For example, a 3-tier application can consist of front-end web server, application server and database containers, each of which executes independently. Containers are inherently stateless and do not retain session information, although they can be used for stateful applications when paired with external storage. Multiple instances of a container image can run simultaneously, and new instances can replace failed ones without disrupting the application's operation. Developers use containers during development and testing, and increasingly, IT operations teams deploy containers in live production environments, where they can run on bare-metal servers, on VMs and in the cloud.
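As a sketch of how such a 3-tier application might be declared, a Docker Compose file can define one service per tier. The service and image names below are illustrative, not taken from any specific product:

```yaml
# docker-compose.yml -- hypothetical 3-tier application.
# Each tier runs from its own container image and can be
# restarted or replaced independently of the others.
services:
  web:
    image: nginx:1.25            # front-end web server tier
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    image: example/app:1.0       # application server (illustrative image name)
    environment:
      DB_HOST: db
    depends_on:
      - db
  db:
    image: postgres:16           # database tier
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # state is kept outside the container
volumes:
  db-data:
```

Because the database's data lives on a named volume rather than in the container's writable layer, the `db` container itself stays replaceable even though the application is stateful.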
How containers work
Containers hold the components necessary to run desired software. These components include files, environment variables, dependencies and libraries. The host OS constrains the container's access to physical resources, such as CPU, storage and memory, so a single container cannot consume all of a host's physical resources.
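With Docker, for example, those host-enforced limits can be expressed as flags on `docker run`; the image name here is illustrative:

```shell
# Cap the container at 512 MB of memory and one CPU core.
# The host kernel enforces these limits (via cgroups on Linux),
# so the container cannot exhaust the host's physical resources.
docker run --memory=512m --cpus=1.0 example/app:1.0
```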
Container image files are complete, static and executable versions of an application or service, and they differ from one technology to another. Docker images are made up of multiple layers, starting with a base image that includes all of the dependencies needed to execute code in a container. Each running container adds a readable and writable layer on top of the static, unchanging image layers. Because each container has its own container layer that customizes it, the underlying image layers can be saved and reused across multiple containers.

An Open Container Initiative (OCI) image is made up of a manifest, file system layers and a configuration. The OCI defines two specifications: a runtime specification and an image specification. The runtime specification describes how to run a file system bundle -- a directory containing all of the data needed to execute a container. The image specification defines the information needed to launch an application or service in an OCI container.
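The layering described above is visible in a Dockerfile, where each instruction produces a read-only image layer; the writable container layer is added only when the image runs. The application files named here are hypothetical:

```dockerfile
# Base image layer: minimal OS userland plus a language runtime.
FROM python:3.12-slim

# Dependency layer: cached and reused across rebuilds and containers
# as long as requirements.txt is unchanged.
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Application layer: only this layer is rebuilt when the code changes.
COPY . /app
WORKDIR /app

# Executed at runtime by the container engine.
CMD ["python", "main.py"]
```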
The container engine executes images, and many organizations use a container scheduler or orchestration technology, such as Kubernetes, to manage deployments. Containers are highly portable, because each image includes the dependencies needed to execute its code. For example, container users can execute the same image on an Amazon Web Services (AWS) cloud instance during testing, then on an on-premises Dell server for production, without changing the application code in the container.
Containers vs. VMs
Containers differ from server virtualization in that a virtualized architecture emulates a hardware system. Each VM runs an OS in an independent environment and presents to the application, via abstraction, a substitute for a physical machine. The hypervisor emulates hardware from pooled CPU, memory, storage and network resources, which can be shared by multiple instances of VMs.

VMs can require substantial resource overhead, such as memory, disk and network input/output (I/O), because each VM runs a full OS. This means VMs can be large and take longer to create than containers. Because containers share the OS kernel, a single OS instance can run many isolated containers. The OS supporting containers can also be smaller, with fewer features, than an OS for a VM or a physical application installation.
Application containers and system containers
Application containers, such as Docker, encapsulate the files, dependencies and libraries of an application to run on an OS. Application containers enable the user to create and run a separate container for multiple independent applications, or multiple services that constitute a single application. For example, an application container would be well-suited for a microservices application, where each service that makes up the application runs independently from one another.
System containers, such as LXC, are technologically similar to both application containers and to VMs. A system container can run an OS, similar to how an OS would run encapsulated on a VM. However, system containers don't emulate the hardware of a system -- instead, they operate similarly to application containers, and a user can install different libraries, languages and system databases.
Benefits of containers
Because containers share the same OS kernel as the host, containers can be more efficient than VMs, which require separate OS instances.
Containers have better portability than other application hosting technologies: They can move among any systems that share the host OS type without requiring code changes. Encapsulating the application's operating code in the container means there are no guest OS environment variables or library dependencies to manage.
Proponents of containerization point to gains in memory, CPU and storage efficiency as key benefits of this approach compared with traditional virtualization. Because containers do not carry the overhead of the separate OS instances that VMs require, it is possible to support many more containers on the same infrastructure. An average physical host might support dozens of VMs or hundreds of containers, although in actual operations, host, container and VM sizes vary widely with the demands of a specific application or applications.
A major factor in the interest in containers is that they remain consistent throughout the application lifecycle. This fosters an agile environment and facilitates new approaches, such as continuous integration (CI) and continuous delivery (CD). Containers also spin up faster than VMs, which is important for distributed applications.
Disadvantages of containers
A potential drawback of containerization is lack of isolation from the host OS. Because containers share a host OS, security threats have easier access to the entire system when compared with hypervisor-based virtualization. One approach to addressing this security concern has been to create containers from within an OS running on a VM. This approach ensures that, if a security breach occurs at the container level, the attacker can only gain access to that VM's OS, not other VMs or the physical host.
Another disadvantage of containerization is the lack of OS flexibility. In typical deployments, each container must share the host's OS kernel, whereas hypervisor instances have more flexibility. For example, a container created on a Linux-based host could not run an instance of the Windows Server OS or applications designed for Windows Server.
Monitoring visibility can be another issue. With up to hundreds or more containers running on a server, it may be difficult to see what is happening in each container.
Various technologies from container and other vendors, as well as open source projects, are available and under development to address the operational challenges of containers, including security tracking systems, monitoring systems based on log data, and orchestrators and schedulers that oversee operations.
Common container uses
Containers are frequently paired with microservices and the cloud but offer benefits to monolithic applications and on-premises data centers as well.
Containers are well-suited to microservices, as each service that makes up the application is packaged in an independently scalable container. For example, a microservices application can be composed of containerized services that generate alerts, log data, handle user identification and provide many other functions. Each service runs on the same OS while staying individually isolated, and each can scale up and down in response to demand. Cloud infrastructure is designed for this kind of elastic scaling.
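In Kubernetes terms, that per-service scaling is typically expressed as a Deployment whose replica count can be raised or lowered independently of the application's other services. The names and image below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alert-service            # one microservice of the larger application
spec:
  replicas: 3                    # scale this service alone, up or down
  selector:
    matchLabels:
      app: alert-service
  template:
    metadata:
      labels:
        app: alert-service
    spec:
      containers:
      - name: alert-service
        image: example/alert-service:1.0   # illustrative image name
        resources:
          limits:
            memory: "256Mi"      # per-container limits keep services isolated
            cpu: "250m"
```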
Traditional monolithic application architectures are designed so that all of a program's code ships as a single executable. Monolithic applications don't readily scale the way distributed applications do, but they can be containerized. For example, Docker's Modernize Traditional Applications (MTA) program helps users transition monolithic applications to Docker containers as is, with adaptations for better scaling, or via a full rebuild and rearchitecting.
Container tool and platform providers
Many vendors offer container platforms and container management tools, such as cloud services and orchestrators. Docker and Kubernetes are well-known product names in the container technology space, and the technologies underpin many other products.
Docker is an open source application container platform designed for Linux and, more recently, Windows, Apple and mainframe OSes. Docker utilizes Linux kernel resource isolation features, such as cgroups and namespaces, to create isolated containers. Docker is also the eponymous company formed to sell enterprise-supported container hosting and management products; in November 2019, the company sold the Docker Enterprise business to Mirantis.
Microsoft offers two containerization technologies: Hyper-V containers and Windows Server containers. Both types are created, maintained and operated similarly, as they utilize the same container images. However, they differ in their level of isolation. Isolation in Windows Server containers is achieved through namespaces, resource control and other techniques. Hyper-V containers provide isolation by running each container instance inside a lightweight VM, which makes the product more of a system container.
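On a Windows host running Docker, the choice between these two isolation modes is made per container with the `--isolation` flag; the same image works in either mode. This is a sketch, assuming a Windows Server Core base image:

```shell
# Process isolation (Windows Server container): shares the host kernel.
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2022 cmd
# Hyper-V isolation: the same image, run inside a lightweight utility VM.
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2022 cmd
```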
The open source container orchestrator Kubernetes, created by Google, has become the de facto standard for container orchestration. It organizes containers into pods that run on nodes, the hosting resources. Kubernetes automates the deployment, scaling, maintenance and other operation of application containers. A plethora of products are based on Kubernetes with added features and/or support, such as Rancher, Red Hat OpenShift and Platform9.
Besides Kubernetes, other container orchestration tools are also available, such as DC/OS from D2iQ (formerly Mesosphere) and Docker Swarm. Mirantis, which initially intended to focus on Kubernetes, now plans to support and improve the enterprise version of Swarm, while an open source community edition remains alive.
The major cloud vendors all offer diverse containers as a service (CaaS) products as well, including Amazon Elastic Container Service (ECS) and Elastic Kubernetes Service (EKS), AWS Fargate, Google Kubernetes Engine (GKE), Microsoft Azure Container Instances (ACI), Azure Kubernetes Service (AKS) and IBM Cloud Kubernetes Service, among many more. Containers can also be deployed on public or private cloud infrastructure without the use of dedicated container products from the cloud vendor.
Future of containers
Enterprises have gradually increased their production deployment of container software, beyond application development and testing. For most organizations, their focus has shifted to container orchestration, and especially Kubernetes, which most vendors now support. As organizations consolidate their processes and tool sets for IT operations, they want more granular control to monitor and secure what's inside containers.
Adoption of container software is expanding across various areas of IT, from security to networking to storage. Some organizations are planning how to deploy stateful applications, such as databases and machine learning apps, on containers and Kubernetes to ensure consistent management alongside stateless apps. Some early adopters of containers and container orchestration have begun to look beyond container functionality and orchestration, and to plan what container infrastructure enables them to do as part of broader Agile or DevOps transformations. Examples include developer self-service interfaces, built-in CI/CD, GitOps, service mesh, multi-cloud management, multi-tenant security, and serverless and event-driven computing. One potential trend is to extend the use of containers and Kubernetes out to the network edge, to remotely deploy and manage software in various locations and on a wide variety of devices. This still is an early trend, however.