Explore the benefits of containers on bare metal vs. on VMs

Advances in container and cloud technologies have morphed the debate over container deployment on bare-metal servers vs. VMs, with strong pros and cons for each.

You know why you should consider containers. But do you know which type of infrastructure to deploy them on? Are containers on bare-metal servers a better choice than on VMs?

The answer, of course, depends on many variables. There are pros and cons to running containers on bare metal, as well as on VMs. The differences in the way container orchestrators, such as Kubernetes, work when hosted on bare metal as compared to VMs are also a factor to consider.

Bare metal vs. VMs

The debate over the advantages and disadvantages of bare-metal servers vs. virtualized hosting environments is not new. It has been on CTOs' minds since virtualization became widespread in data centers in the 2000s, long before anyone had heard of Docker containers, which debuted in 2013.

The main benefits of bare-metal servers include:

  • higher performance, because no system resources are wasted on hardware emulation;
  • full use of all machine resources, as none of them sit idle during high-demand periods; and
  • easier administration, because there are fewer hosts, network connections and disks to manage in the infrastructure.

VMs, on the other hand, offer the following advantages:

  • Applications can move between hosts easily, with the transfer of VM images from one server to another.
  • There is isolation among applications that run on different VMs. This structure provides some security benefits and can reduce management complexity.
  • A consistent software environment across infrastructure can be created when all apps are on the same type of VM, even if the underlying host servers are not homogenous.

But VMs also come with some drawbacks:

  • Server resources can go underutilized. For example, if you allocate storage space on a server host to create a VM disk image, that portion becomes unavailable for other purposes -- even if the VM to which the disk is attached does not use all of it.
  • VMs can't access physical hardware directly in most cases. For instance, the VM can't offload compute operations to a GPU on its host -- at least not easily -- because the VM is abstracted from an underlying host environment.
  • VMs generally don't perform as well as physical servers, due to the layer of abstraction between the application and the hardware.

Modern virtualization platforms can help admins work around these limitations. For example, an admin can create a dynamic disk image that expands as VM use increases to avoid locking up storage space on a host before a guest actually uses it. Pass-through features also provide VMs with direct access to physical hardware on a host. However, these hacks don't always work well. They are not supported on all types of hosts and guest OSes, and they create additional administrative burdens. If the apps you want to run require bare-metal access, then it's best to run those apps on a bare-metal server.
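To illustrate the dynamic disk workaround, here's a minimal sketch that creates a thin-provisioned qcow2 image on a KVM/QEMU host, so storage is consumed only as the guest writes data. It assumes the qemu-img command-line tool is installed; the path and size are illustrative.

import json
import subprocess

IMAGE_PATH = "/var/lib/libvirt/images/guest01.qcow2"  # hypothetical image path
VIRTUAL_SIZE = "100G"                                  # the size the guest will see

# qcow2 images allocate host storage lazily, unlike a fully preallocated raw image.
subprocess.run(
    ["qemu-img", "create", "-f", "qcow2", IMAGE_PATH, VIRTUAL_SIZE],
    check=True,
)

# Compare the virtual size (what the VM sees) with the actual space used on the host.
info = json.loads(
    subprocess.run(
        ["qemu-img", "info", "--output=json", IMAGE_PATH],
        check=True, capture_output=True, text=True,
    ).stdout
)
print(f"virtual: {info['virtual-size']} bytes, actual on host: {info['actual-size']} bytes")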

Or you could run your apps inside containers on bare metal to get the best of both worlds.

[Figure: The layers of VMs vs. containers -- containers are an OS-level isolation technology, while VMs isolate applications, each with a separate OS.]

Square the circle: Run containers on bare metal

Containers on bare-metal hosts get many of the advantages VMs offer, but without the drawbacks of virtualization:

  • Gain access to bare-metal hardware without relying on pass-through techniques, because app processes run on the same OS as the host server.
  • Make optimal use of system resources. Although you can set limits on how much compute, storage and networking containers can use, they generally don't require these resources to be dedicated to a single container. A host can, therefore, distribute use of shared system resources as needed (see the sketch after this list).
  • Get bare-metal performance for apps, because there is no hardware emulation layer separating them from a host server. (Author's note: Technically, containers don't offer exact bare-metal performance due to the overhead created by the container runtime, but this overhead is minimal compared to the emulation layers that VMs create.)
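To illustrate the point about resource limits above, here's a minimal sketch that starts a container with soft caps on memory and CPU using the Docker SDK for Python. The image name and limit values are illustrative, and the sketch assumes a local Docker Engine is running.

import docker

client = docker.from_env()  # connect to the local Docker Engine

container = client.containers.run(
    "nginx:alpine",          # illustrative image
    detach=True,
    mem_limit="256m",        # cap memory at 256 MiB
    nano_cpus=500_000_000,   # cap CPU at half of one core (units of 1e-9 CPUs)
)

# Headroom under these caps remains available to other containers on the host.
print(container.id, container.status)

These limits act as ceilings, not reservations: the host only hands out resources as the container actually uses them.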

In addition, by running containers on bare metal, you get the benefits that have traditionally been possible only with VMs:

  • Gain the ability to deploy apps inside portable environments that can move easily between host servers.
  • Get app isolation. Although containers don't provide the same level of isolation as VMs, they do enable admins to prevent apps from interacting with one another and to set strict limits on the privileges and resources available to each container (see the sketch after this list).
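To illustrate those isolation controls, here's a minimal sketch that launches a container with a deliberately restricted privilege profile, using the Docker SDK for Python. The image, command and specific restrictions are illustrative.

import docker

client = docker.from_env()

# Run a long-lived no-op process with a locked-down profile.
locked_down = client.containers.run(
    "alpine:3",            # illustrative image
    ["sleep", "300"],      # illustrative command
    detach=True,
    read_only=True,        # mount the container's root filesystem read-only
    cap_drop=["ALL"],      # drop every Linux capability the app doesn't need
    pids_limit=100,        # bound how many processes the container can spawn
)
print(locked_down.short_id)

This is weaker isolation than a VM boundary, as noted above, but it keeps the containerized app from touching the host or its neighbors in the most common ways.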

In short, run containers on bare metal to square the circle -- do what seems impossible. Reap all the benefits of bare-metal servers' performance and hardware accessibility, and take advantage of the portability and isolation features seen with VMs.

The downsides to containers on bare metal

There are reasons, however, IT organizations don't run containers on bare metal. Consider the following drawbacks of using bare-metal servers, rather than VMs, to host a container engine:

  • Physical server upgrades are difficult. To replace a bare-metal server, you must re-create the container environment from scratch on the new server. If the container environment were part of a VM image, you could simply move the image to the new host.
  • Most clouds require VMs. There are some bare-metal cloud hosts out there, such as Rackspace's OnMetal offering and the bare-metal instances available on Amazon Elastic Compute Cloud. But bare-metal servers in cloud computing environments usually cost significantly more -- if the cloud vendor offers them at all. Most public cloud providers only offer VMs, so if you want to use their platforms to run containers, you'll have to deploy into VMs.
  • Container platforms don't support all hardware and software configurations. These days, you can host almost any type of OS on a VM platform such as VMware or KVM, and you can run that virtualization platform on almost any kind of OS or server. Docker is more limited: hosted on bare metal, it runs only on Linux, certain Windows servers and IBM mainframes. For example, Docker runs natively only on bare-metal Windows servers running Windows Server 2016 or later; earlier versions require a VM on top of the Windows host if you want to use Docker.
  • Containers are OS-dependent. Linux containers run on Linux hosts; Windows containers run on Windows hosts. To host Docker containers for an app compiled for Linux, a bare-metal Windows server needs a Linux VM. However, there are technological developments in this space (see sidebar), and you can check which OS a Docker host exposes before deploying (see the sketch after this list).
  • Bare-metal servers don't offer built-in rollback features. Most virtualization platforms enable admins to take VM snapshots and roll back to that captured configuration at a later time. Containers are ephemeral by nature, so there is nothing to roll back to. You might be able to use rollback features built into the host OS or file system, but those often provide a less seamless experience. To take advantage of simple system rollback, host containers on a VM.
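Because of that OS dependency, it's worth confirming which OS a Docker host actually exposes before you try to deploy to it. Here's a minimal sketch using the Docker SDK for Python against the local daemon.

import docker

client = docker.from_env()
info = client.info()  # returns the same data as `docker info`

print("OS type:", info.get("OSType"))                  # e.g. 'linux' or 'windows'
print("Operating system:", info.get("OperatingSystem"))
print("Architecture:", info.get("Architecture"))

if info.get("OSType") != "linux":
    print("This daemon cannot run Linux containers natively.")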

Linux containers on Windows

Historically, there was not an easy, efficient way to run Linux containers on a Windows host. However, this is now easier to do using tools such as Linux Containers on Windows and Windows Subsystem for Linux in conjunction with Docker. These tools enable developers to run Linux and Windows containers side by side on a Windows host, which is advantageous when developing containerized apps for both Windows and Linux.

Keep in mind, however, that these tools are intended primarily for developers. They're not a way to deploy Linux and Windows containers side by side in production.

Container orchestrators on bare metal

In addition to considering the pros and cons of running containers themselves on bare metal, consider the implications of hosting a container orchestrator, such as Kubernetes, on bare metal.

Most container orchestrators are compatible with both bare metal and VM-based environments. Some, like HPE Container Platform, which is based on Kubernetes, even market their bare-metal compatibility as a selling point. However, there are certain orchestrators that don't support bare-metal deployments, such as Google Kubernetes Engine. Conversely, all of the major orchestrators support VMs; there is no orchestrator that requires bare metal. Thus, to host containers on bare metal, be careful to select an orchestrator that supports this approach.
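One way to survey what backs an existing cluster is to ask the Kubernetes API what each node reports about its OS and kernel. Here's a minimal sketch using the official Kubernetes Python client; it assumes a kubeconfig for the cluster is available locally.

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    sys_info = node.status.node_info
    labels = node.metadata.labels or {}
    print(
        node.metadata.name,
        sys_info.operating_system,   # 'linux' or 'windows'
        sys_info.os_image,           # distribution name and version
        sys_info.kernel_version,
        labels.get("node.kubernetes.io/instance-type", "n/a"),  # set by most cloud providers
    )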

In addition, weigh the pros and cons of running the orchestrator master and worker nodes on bare metal vs. VMs. In general, an orchestrator running on a bare-metal environment offers the same advantages as containers hosted on bare metal: There are no infrastructure resources wasted on abstraction layers, which leaves more resources available for apps.

On the other hand, hosting a container orchestrator on bare metal can pose some risks. For one, if you provision each bare-metal server as a single node -- which you must do if you want to run nodes directly on bare metal -- you risk more disruption to the cluster if a node goes down, because it takes all of the host server's resources with it. In contrast, a bare-metal server that is provisioned into several nodes, each running in its own VM, is less prone to total failure: If one node fails, the other nodes hosted on the same bare-metal server remain available.

Running each bare-metal server as a single physical node also gives you fewer nodes overall, which reduces your ability to spread pods or containers across the cluster to optimize availability and load balancing. Likewise, you might end up with more containers sharing a single node, which can increase noisy neighbor issues.
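One common mitigation is to tell the scheduler to keep replicas on distinct nodes, which only helps if the cluster actually has enough nodes to spread across. Here's a minimal sketch of a deployment with required pod anti-affinity, built with the official Kubernetes Python client; the names, labels and image are illustrative.

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

labels = {"app": "web"}  # hypothetical app label

# Require that no two replicas with this label land on the same node (hostname).
anti_affinity = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(match_labels=labels),
                topology_key="kubernetes.io/hostname",
            )
        ]
    )
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                affinity=anti_affinity,
                containers=[client.V1Container(name="web", image="nginx:alpine")],
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)

With only a handful of large bare-metal nodes, a rule like this quickly runs out of places to put replicas; with many smaller VM-backed nodes, the scheduler has more room to work with.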

In other words, deploying an orchestrator on bare metal is akin to putting all of your eggs in one basket -- or, at least, to spreading your eggs out among relatively few baskets -- if you think of your nodes as eggs.

Bare-metal orchestrator nodes are also subject to the same portability and OS-dependency limitations as bare-metal containers. A bare-metal node doesn't migrate to a new machine very easily, and bare-metal nodes only run if the host OS supports the orchestrator. As with Docker containers, all mainstream Linux distributions support Kubernetes, but Windows support for Kubernetes is much more limited. Only Windows Server 2019 is compatible with Kubernetes, and it can run only as a worker node; Kubernetes master nodes can only run on Linux.
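In a mixed-OS cluster, the usual way to keep Windows containers on Windows worker nodes (and Linux containers on Linux nodes) is a node selector on the well-known kubernetes.io/os label. Here's a minimal sketch with the official Kubernetes Python client; the pod name and image are illustrative.

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="iis-sample"),
    spec=client.V1PodSpec(
        node_selector={"kubernetes.io/os": "windows"},  # keep this pod off Linux nodes
        containers=[
            client.V1Container(
                name="iis",
                image="mcr.microsoft.com/windows/servercore/iis",  # illustrative Windows image
            )
        ],
    ),
)

v1.create_namespaced_pod(namespace="default", body=pod)

Without a selector like this, the scheduler could place a Windows container on a Linux node, where it cannot run.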
