
Five cons of container technology

Containers promise rapid scalability, flexibility and ease, but they're not right for every workload.

Containers are a legacy virtualization technology that has received a massive new infusion of interest sparked by the emergence of cloud computing, a corresponding shift in application development, and the availability of powerful new container frameworks like Docker. At Gartner's IT Operations Strategies and Solutions Summit 2015 in Orlando, Gartner VP and distinguished analyst Thomas Bittman delivered a session on containers. Bittman's session outlined a range of benefits to container technology, but also underscored a variety of important disadvantages. Let's examine each of those disadvantages and consider ways to address them.

1. Not right for all tasks

Bittman noted that containers provide versatility, but are certainly not a universal replacement for every existing virtual machine (VM) deployment. Just as some legacy applications were better suited to physical deployments in the early days of virtualization, some applications are not appropriate for container virtualization.

For example, containers are ideally suited to microservice-type application development -- an approach that allows more complex applications to be configured from basic building blocks, where each building block is deployed in a container and the constituent containers are linked together to form the cohesive application. The application's functionality can then be scaled by deploying more containers of the appropriate building blocks rather than entire new iterations of the full application.
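As a rough illustration of that scaling model, the sketch below uses the Docker SDK for Python to launch several containers of a single hypothetical building-block image. The image name, container names and labels are all placeholders, and the sketch assumes the `docker` Python package and a running local Docker daemon.

```python
# A rough sketch of scaling one microservice "building block" by launching
# additional containers of the same image. Assumes the Docker SDK for Python
# (pip install docker) and a running local Docker daemon. The image name,
# container names and labels are purely illustrative.
import docker

client = docker.from_env()

IMAGE = "example/orders-service:1.0"   # hypothetical building-block image
REPLICAS = 3                           # scale out by adding containers, not whole apps


def scale_building_block(image, replicas):
    """Launch `replicas` containers of a single building block."""
    launched = []
    for i in range(replicas):
        container = client.containers.run(
            image,
            detach=True,                                   # run in the background
            name=f"orders-service-{i}",                    # illustrative naming scheme
            labels={"app": "orders", "tier": "service"},   # tag for later cleanup
        )
        launched.append(container)
    return launched


if __name__ == "__main__":
    for c in scale_building_block(IMAGE, REPLICAS):
        print(c.name, c.short_id, c.status)
```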

By comparison, some applications simply need to be monolithic -- they're designed that way, and benefits like scalability and fast deployment don't readily apply. In these cases, containers just constrain the workload. The best approach is often to experiment and see which existing applications can benefit from containerization. New application development efforts are more likely to take advantage of containers from the start. Applications that cannot readily be containerized can still run as fully functional VMs atop a conventional hypervisor. One IT enterprise architect from a major insurance provider voiced that hesitation: "Containers are interesting, but our software team would need to do a lot of catch-up to really use containers right."

2. Grappling with dependencies

Common VMs are extremely self-contained, and each VM includes a unique operating system (OS), drivers and application components. VMs can also be migrated to any other system as long as a suitable hypervisor is available. By comparison, containers run on top of a physical OS, sharing much of the underlying OS kernel along with many libraries and binaries. Bittman explained that this places dependencies on containers that can limit portability between servers. For example, Linux containers under Docker cannot run on current versions of Windows Server.
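A quick way to see that dependency in practice is to compare an image's target OS with the Docker host's OS. The sketch below, which assumes the Docker SDK for Python and a reachable Docker daemon, shows one way to do that; the image name is only an example.

```python
# A minimal sketch of checking the kernel/OS dependency described above:
# a Linux container image cannot run on a Windows Docker host (and vice
# versa). Assumes the Docker SDK for Python and a reachable Docker daemon;
# the image name is only an example.
import docker

client = docker.from_env()

host = client.info()                          # facts about the Docker host
image = client.images.pull("nginx:alpine")    # any Linux-based image

image_os = image.attrs.get("Os")              # e.g. "linux"
image_arch = image.attrs.get("Architecture")  # e.g. "amd64"
host_os = host.get("OSType")                  # e.g. "linux" or "windows"

print(f"image targets {image_os}/{image_arch}; host kernel is {host_os}")
if image_os != host_os:
    print("Image and host use different kernel families; this image "
          "cannot run here without an intermediate VM.")
```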

The answer here is not so much a solution as a realization -- containers can be spun up and proliferated in seconds, and OSes are evolving to provide "micro OS" or "nano OS" variants that can provide extraordinary stability and extremely fast restarts. Containers are natively more available in these environments, and can still be migrated or evacuated as long as other servers are available in the data center.

These dependencies are easing as new OSes evolve. For example, Windows Server 2016 promises container support for Docker and native Hyper-V containers. There are also many container platforms to choose from beyond Docker, including LXC, Parallels Virtuozzo, Joyent, Canonical LXD, Spoon and others. It's entirely possible that VMware may enter the container fray at some point.

3. Weaker isolation

Hypervisor-based VMs provide a high level of isolation from one another because the system's hardware resources are all virtualized and presented to the VMs through the hypervisor. This means a bug, virus or intrusion could compromise one VM, but not carry over to other VMs.

Containers are weaker because they share an OS kernel and components and already have a deep level of authorization (usually root access in Linux environments) in order to run in the first place. As a consequence, flaws and attacks have a much greater potential to carry down into an underlying OS and over into other containers -- potentially propagating malicious activity far beyond the original event.

While container platforms are evolving to segregate OS rights and limit vulnerable security postures, Bittman explains that administrators can boost security now by running containers in a VM. For example, it's possible to set up a Linux VM on Hyper-V and install Docker containers on the Linux VM. Even if containers within the VM are compromised, the vulnerability will not extend outside of the VM -- limiting the scope of potential damage.
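Independent of the VM-wrapping approach Bittman describes, container privileges can also be tightened at launch time. The following sketch is an illustrative hardening example, not a recommendation from the session; it assumes the Docker SDK for Python, and the image, command and user IDs are placeholders.

```python
# An illustrative sketch of reducing a container's privileges at launch time,
# complementing (not replacing) the containers-inside-a-VM approach. Assumes
# the Docker SDK for Python; the image, command and user IDs are placeholders.
import docker

client = docker.from_env()

container = client.containers.run(
    "alpine:3.18",
    "sleep 300",             # throwaway workload for illustration
    detach=True,
    user="1000:1000",        # run as a non-root user inside the container
    cap_drop=["ALL"],        # drop every Linux capability the process doesn't need
    read_only=True,          # mount the container's root filesystem read-only
    pids_limit=100,          # cap process count to blunt fork bombs
    mem_limit="128m",        # bound memory so one container can't starve others
)
print(container.name, container.status)
```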

4. Potential for sprawl

Where VM lifecycle management is important for hypervisor-based environments, lifecycle management is absolutely critical for containers. Containers can be spun up and duplicated at an astonishing rate. This is an important benefit of containers, but it also makes it possible to consume a vast amount of computing resources without truly realizing it. That's not a problem if the application's constituent containers are spun down or deleted when they're no longer needed. But scaling up a containerized application and then forgetting to scale it back down later can impose significant (and unnecessary) cloud computing costs on the enterprise. Bittman noted that cloud providers love it -- they make money renting computing power -- and the onus is on users to watch how containers are deployed.
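Basic lifecycle hygiene can also be scripted. As a minimal sketch, assuming the Docker SDK for Python and a local daemon, the following lists containers that have already exited and then prunes them to reclaim their resources; real environments would pair this with proper lifecycle and cost-tracking tooling.

```python
# A minimal sketch of basic lifecycle hygiene: find containers that have
# already exited and reclaim their resources. Assumes the Docker SDK for
# Python and a local daemon; in practice this would run on a schedule
# alongside proper lifecycle and cost-tracking tooling.
import docker

client = docker.from_env()

# Anything no longer running is a sprawl candidate.
exited = client.containers.list(all=True, filters={"status": "exited"})
for c in exited:
    print(f"stopped container still consuming resources: {c.name} ({c.short_id})")

# Remove all stopped containers and report what was reclaimed.
result = client.containers.prune()
print("removed:", result.get("ContainersDeleted"))
print("bytes reclaimed:", result.get("SpaceReclaimed"))
```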

5. Limited tools

The kinds of tools needed to monitor and manage containers are still lacking in the industry. This is not a new phenomenon -- the early days of hypervisor-based virtualization were marked by a similar shortage of suitable tools. And just as capable VM monitoring and management tools are now readily available, new tools are starting to appear for container management. These include Kubernetes, Google's open source Docker management tool; DockerUI, which replaces Linux command line functions with a web-based front end; Logspout, which routes container logs to a central location; and so on.
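Until those tools mature, simple ad hoc monitoring is possible directly against the Docker API. The sketch below, assuming the Docker SDK for Python, grabs a single resource-usage snapshot and the last few log lines for each running container -- a stopgap rather than a substitute for tools like Kubernetes or Logspout.

```python
# A minimal ad hoc monitoring sketch using the Docker SDK for Python: one
# resource-usage snapshot plus the most recent log lines for every running
# container. A stopgap only; the stats payload shape can vary by platform.
import docker

client = docker.from_env()

for c in client.containers.list():                     # running containers only
    snapshot = c.stats(stream=False)                   # single stats sample (dict)
    mem = snapshot.get("memory_stats", {}).get("usage", 0)
    print(f"{c.name}: status={c.status}, memory={mem} bytes")
    print(c.logs(tail=5).decode(errors="replace"))     # last few log lines
```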

Bittman suggested that administrators can work around a shortage of appropriate container tools by using containers within VMs and utilizing VM tools for some monitoring and management functions. Since VM tools are more mature and plentiful, they may serve as suitable surrogates until container tools further mature.

Bittman is ultimately enthusiastic about containers, and is clear to note that containers promise fast, lightweight deployment for high density and scalability, native (non-virtualized) I/O for better performance, readily available development frameworks like Docker, and noteworthy sharing and collaboration tools like GitHub. But containers are not a ubiquitous solution for every virtualization task. Instead, they provide yet another tool in the virtualization toolbox -- one that often works well alongside traditional VMs.

Next Steps

Docker doesn't want to replace the VM

Enterprises slow to embrace Docker containers

A brief history of Docker's overnight success

Exploring container limitations in cloud development
