Windows Server containers and Hyper-V containers explained
It's important to understand not only the differences between these two container types but also their specific requirements, deployment considerations and management practices.
A big draw of Windows Server 2016 is the addition of containers that provide similar capabilities to those from leading open source providers. This Microsoft platform actually offers two different types of containers: Windows Server containers and Hyper-V containers. Before you decide which option best meets your needs, take a look at these five quick tips so you have a better understanding of container architecture, deployment and performance management.
Windows Server containers vs. Hyper-V containers
Although Windows Server containers and Hyper-V containers do the same thing and are managed the same way, the level of isolation they provide is different. Windows Server containers share the underlying OS kernel, which makes them smaller than VMs because they don't each need a copy of the OS. Security can be a concern, however, because if one container is compromised, the OS and all of the other containers could be at risk.
Hyper-V containers and their dependencies reside in lightweight Hyper-V VMs, which provide an additional layer of isolation. Note that Hyper-V containers and full Hyper-V VMs have different use cases. Containers are typically used for microservices and stateless applications because they are disposable by design and, as such, don't store persistent data. Hyper-V VMs, typically equipped with virtual hard disks, are better suited to mission-critical applications.
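On a Windows Server 2016 host with Docker installed, the choice between the two container types comes down to the `--isolation` flag on `docker run`. A minimal sketch, assuming the standard Windows Server Core base image (the tag shown is illustrative):

```shell
# Windows Server container: shares the host OS kernel (process isolation).
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2016 cmd

# Hyper-V container: the same image, but wrapped in a lightweight utility VM
# for kernel-level isolation.
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2016 cmd
```

Because the isolation mode is a run-time choice, the same container image can be deployed either way without rebuilding it.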
The role of Docker on Windows Server
To package, deliver and manage Windows container images, you need to download and install Docker on Windows Server 2016. Docker Swarm, which Windows Server supports, provides orchestration features that help with cluster creation and workload scheduling. After you install Docker, you'll need to configure it for Windows, a process that includes securing connections to the Docker Engine and setting disk paths.
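That configuration lives in the Docker Engine's `daemon.json` file (on Windows, typically under `C:\ProgramData\docker\config\`). A minimal sketch covering the two settings mentioned above; the certificate paths and the `D:` data path are illustrative:

```json
{
  "hosts": ["npipe://", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "C:\\ProgramData\\docker\\certs.d\\ca.pem",
  "tlscert": "C:\\ProgramData\\docker\\certs.d\\server-cert.pem",
  "tlskey": "C:\\ProgramData\\docker\\certs.d\\server-key.pem",
  "data-root": "D:\\docker"
}
```

`hosts` keeps the local named pipe while exposing a TLS-secured remote endpoint, and `data-root` moves image and container storage to a dedicated disk.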
One key advantage of Docker on Windows is support for container image automation. You can use container images for continuous integration cycles because they're stored as code and can be quickly recreated when need be. You can also download and install a module to extend PowerShell to manage Docker Engine; just make sure you have the latest versions of both Windows and PowerShell before you do so.
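"Stored as code" means the image is defined by a Dockerfile that can be rebuilt on every integration cycle. A short sketch for a Windows Server Core-based image (the base tag, feature and paths are illustrative):

```dockerfile
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2016

# Install IIS inside the image.
RUN powershell -Command Install-WindowsFeature Web-Server

# Copy the application content produced by the build.
COPY site/ C:/inetpub/wwwroot/
```

Checking this file into source control alongside the application lets the CI pipeline recreate an identical image whenever the code changes.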
Meet Hyper-V container requirements
If you prefer to use Hyper-V containers, make sure the host runs Windows Server 2016 (full or Server Core installation) with the Hyper-V role enabled. There is also a list of minimum resource requirements necessary to run Hyper-V containers. First, you need at least 4 GB of memory for the host VM. You also need a processor with Intel VT-x support and at least two virtual processors for the host VM. Unfortunately, nested virtualization doesn't yet support AMD processors.
Although these requirements might not seem extensive, it's important to carefully consider resource allocation and the workloads you intend to run on Hyper-V containers before deployment. When it comes to container images, you have two different options: a Windows Server Core image and a Nano Server image.
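When the container host is itself a Hyper-V VM, the virtualization extensions have to be exposed to that VM before Hyper-V containers will run. A sketch using the standard Hyper-V cmdlets on the physical host; the VM name is illustrative, and the values reflect the minimums listed above:

```powershell
# Expose Intel VT-x to the container host VM (run on the physical host,
# with the VM powered off). "ContainerHostVM" is an illustrative name.
Set-VMProcessor -VMName "ContainerHostVM" -ExposeVirtualizationExtensions $true

# Meet the documented minimums: 4 GB of memory and two virtual processors.
Set-VMMemory -VMName "ContainerHostVM" -StartupBytes 4GB
Set-VMProcessor -VMName "ContainerHostVM" -Count 2
```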
OS components affect both container types
Portability is a key advantage of containers. Because an application and all its dependencies are packaged within the container, it should be easy to deploy on other platforms. However, several factors can limit this deployment flexibility. While containers share the underlying OS kernel, they do contain their own OS components, also known as the OS layer. If these components don't match the OS kernel running on the host, the container will most likely be blocked from starting.
The four-level version notation system Microsoft uses includes the major, minor, build and revision levels. Before Windows Server containers or Hyper-V containers will run on a Windows Server host, the major, minor and build levels must match, at minimum. The containers will still start if the revision level doesn't match, but they might not work properly.
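The matching rule above can be sketched as a simple comparison of the first three fields of the version string. The version numbers here are illustrative (10.0.14393.* corresponds to Windows Server 2016):

```shell
# Host and container image OS versions in major.minor.build.revision form.
host_ver="10.0.14393.2097"
image_ver="10.0.14393.0"

# Keep only the major, minor and build fields; the revision may differ.
host_build=$(echo "$host_ver" | cut -d. -f1-3)
image_build=$(echo "$image_ver" | cut -d. -f1-3)

if [ "$host_build" = "$image_build" ]; then
  echo "builds match: container can start (revision mismatch is tolerated)"
else
  echo "build mismatch: container is blocked"
fi
```

Here the revisions differ (2097 vs. 0), so the container still starts, but as the article notes, it might not work properly.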
Antimalware tools and container performance
Because of shared components, like those of the OS layer, antimalware tools can affect container performance. The components or layers are shared through the use of placeholders; when those placeholders are read, the reads are redirected to the underlying component. If the container modifies a component, the container replaces the placeholder with the modified one.
Antimalware tools aren't aware of this redirection and can't tell which components are placeholders and which have been modified, so the same components end up being scanned multiple times. Fortunately, there is a way to make antimalware tools aware of this activity. The tool's file system filter can attach a parameter to the create callback data and check the extra create parameter (ECP) redirection flags. The ECP will then indicate whether the file was opened from a remote (shared) layer or from the container's local layer.