Container security is an ever-growing segment of information security. IT professionals must scrutinize both the code running within containers and the host servers on which those containers run.
Docker host hardening is a valuable process for IT organizations to build secure containerization environments. Both containers and the underlying host OS must be secured in tandem to ensure full coverage. To harden a Docker host, follow these five important security measures.
Keep versions up to date
The first security-conscious action is to keep the host OS and Docker packages up to date. New software versions commonly fix newly disclosed Common Vulnerabilities and Exposures (CVEs). If your organization runs older versions of a container service, look at a larger patch management strategy to roll out updates to your container hosts.
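On an Ubuntu host, for example, the version check and upgrade might look like the following sketch; the package names assume Docker CE was installed via apt, so adjust for your distribution's package manager.

```shell
# Check the currently installed Docker version.
docker --version

# Refresh package lists, then upgrade the core Docker CE packages.
sudo apt-get update
sudo apt-get install --only-upgrade docker-ce docker-ce-cli containerd.io
```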
Download with discretion
Second on the list -- and, quite possibly, the easiest to implement -- is to use only trusted container images on the system. Whether you create a new container via a Dockerfile or pull a pre-configured container image from a repository, only download from a trusted location. Docker Hub is a major public repository of container images, but downloading and running unknown images is a security risk. Instead, use container images with the Verified Publisher or Official Images designation to ensure the software's source is a legitimate vendor or community-supported project.
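As a sketch, you can pull an Official Image, note its content digest, and pin future pulls to that exact digest rather than a movable tag; the image and tag here are illustrative.

```shell
# Pull the official Ubuntu image from Docker Hub.
docker pull ubuntu:22.04

# Show the image's content digest, which uniquely identifies its contents.
docker images --digests ubuntu

# Later pulls can reference that digest directly, e.g.:
# docker pull ubuntu@sha256:<digest-from-above>
```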
IT organizations can run their own container image registry as well. There are both commercial and open source options available for self-hosting a Docker image registry in cases where an IT organization wants more control over the update process of image pulls from an upstream registry. In the video below, we perform a basic registry creation, but there are third-party products that also include code review as part of their registry software.
While the above security measures are higher-level actions, the next few measures become more involved.
Limit account privileges
Run a Docker container under its own limited-access user account rather than as root. Create a user account within your Dockerfile, and use the USER instruction to specify this new account as a non-root user. This step ensures that the container doesn't run under a privileged account.
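A minimal Dockerfile along these lines might look like the following sketch; the group, user, and application names are illustrative.

```dockerfile
FROM ubuntu:22.04

# Create an unprivileged group and user for the application.
RUN groupadd app && \
    useradd --gid app --create-home app_user

# Give the new user ownership of the application directory.
COPY --chown=app_user:app ./app /app

# All later RUN instructions and the container's main process run as app_user.
USER app_user
CMD ["/app/start.sh"]
```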
By default, containers run as the root user of that container image. That level of privilege is acceptable in learning or lab environments, but if this particular container were in a live environment and succumbed to a security breach, the entire host would be at risk because of the root-level access to the host. This brings into play our next security measure: namespace isolation.
Docker enables IT admins to remap user namespaces with an option called userns-remap, which maps the user accounts inside a container, including root, to unprivileged accounts on the host OS. Specifying a non-root user in a Dockerfile affects only how the application runs within that one container; userns-remap applies across the entire Docker daemon, so even a process running as root inside a container holds only standard user permissions on the host.
The setup process is more involved for namespace isolation than for the Docker security hardening measures discussed above. Docker recommends enabling this feature on a fresh Docker installation, because existing images and containers are not visible once remapping takes effect. Running both containers and the Docker Engine services as non-root users improves security posture in the event of a breach.
Configure with care
The final aspect of Docker host hardening is preventing denial-of-service conditions -- intentional or accidental -- caused by resource overconsumption. Configure containers with explicit resource constraints -- for example, a fixed allotment of memory or CPU -- so that one misconfigured or attacked container cannot crash the other hosted containers or the overall host.
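At the command line, these constraints can be set with flags on 'docker run'; the image and container names here are illustrative.

```shell
# Cap the container at half a CPU and 512 MB of memory so it cannot
# starve the host or neighboring containers.
docker run -d --name web --cpus="0.5" --memory="512m" nginx:stable

# Confirm the memory limit appears in the MEM USAGE / LIMIT column.
docker stats --no-stream web
```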
Dave Pinkawa: Hello, and welcome to this video on hardening your Docker hosts.
The first and most important thing you can do to maintain the security of your Docker hosts is to keep them updated. At the Docker CLI, you can use the 'docker --version' command to get the version number, and then check the Docker Engine release notes page on the Docker website.
As you can see, some of the older versions of Docker do have some common vulnerabilities and exposures, CVEs, that afflict them. It is a good idea to get ahead of your patching and keep your hosts updated so that you are not exposed to any of those particular vulnerabilities.
In the case of most operating systems, you will have installed Docker through your package management solution, so you should make sure that each of those components is updated on a regular basis as well. This can be part of a larger patching strategy that you might have at your organization. But if it is a single host, you can go ahead and install the updates for each of those individual components at the same time. As you can see here, we use the Ubuntu 'apt' package manager to install any available updates for the three main packages that Docker CE relies on.
With our update successfully installed, we can once again run the 'docker --version' command and just confirm that our version is up to date. The next security-conscious choice for hardening our hosts is choosing to run only those images that we trust. It is common on Docker Hub to have verified publishers and official images from community package maintainers or organizations with commercial backing.
By using only those verified publisher-based images, you greatly reduce the risk of running malicious code underneath. So the next time you're going to deploy a solution via Docker Hub, just make sure that you trust the publisher. In the same vein of trusting the publisher on Docker Hub, you can run a Docker Registry local to your own environment. This could be on your workstation; this could be inside of your own data center. And there is a container image that will provide and do this for you.
In our Dockerfile, we are just pulling the latest version of the Docker Registry container image. We're exposing Port 5000 as well. And then we're going to bring this up with our Compose file, just so that we could set the parameter of 'restart: always' so that, even if our host goes down, this container will always attempt to come back up online.
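A Compose file along the lines described here might look like this sketch; the service name and file layout are assumptions, while the image, port, and restart policy are as discussed.

```yaml
# docker-compose.yml -- minimal sketch of a self-hosted registry
services:
  registry:
    image: registry:latest
    ports:
      - "5000:5000"   # expose the registry on port 5000
    restart: always   # come back up even after a host outage
```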
Here we're going to bring up our registry container using the 'docker-compose up' command; we're going to use '-d' to make sure that this comes up as a daemon in the background. Here, we can confirm that our registry container has come online successfully.
And now let's put our local registry to the test.
We're going to pull the Ubuntu image directly from Docker Hub to our workstation. And then we're going to initiate a push to our local registry using just a couple of different tags to specify a new location. So we're tagging that image that we just pulled off of Docker Hub; we're specifying the new location and the name that we want to push it to. And then we're actually going to use the 'docker push' command and specify which location we are pushing this image to. Here we see it is successfully getting pushed to our local registry -- that is awesome. This is a registry that we can then trust to be separate and not exposed to any public users or other modifications that we don't make to it.
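The tag-and-push sequence described above can be sketched as follows, assuming the local registry listens on port 5000; the image names are illustrative.

```shell
# Pull the image from Docker Hub.
docker pull ubuntu:latest

# Re-tag it with the local registry's address as the new location.
docker tag ubuntu:latest localhost:5000/ubuntu:latest

# Push the re-tagged image into the local registry.
docker push localhost:5000/ubuntu:latest
```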
And then, just to give it one final test, we're going to pull that image from our local registry, and, as we can see, the pull is successful. The image that we are using on this Docker host is the one that comes from the registry that we own. Our third tactic in hardening our Docker hosts is to change which user account is used within the container itself to run your app. By default, a container will always run the app as the root user. So to change this default behavior, we need to do a few things.
The first is going to be running several commands -- one of them to create a group inside of our container. We're then going to create a new user named 'app_user,' and we're going to add it to that 'app' group. And then we're going to assign the ownership of our 'app' folder within the container to the group and user that we just created. Finally, we're going to use the USER instruction and specify which user we want our container to run as. So in this instance, we're going to be running the container, and the contents within our container, as the 'app_user' user account.
So by using this USER instruction inside of our Dockerfile, we're specifying, for the container, which user we want to execute all of the 'RUN' commands, as well as the container itself. By running as a non-root user, in the event that there is a security breach of this container, the attacker gains not the rights of the root user, but only those of a lesser, standard user that can execute just this one application.
The next security measure that we're going to take for our Docker host is a much more involved process. It revolves around namespace isolation on our host. To implement this remapping feature, Docker provides two pieces that we're going to enable and leverage -- one of them being the 'dockremap' system user, and the other the 'userns-remap' daemon option.
The first step in this process is going to our /etc/default/docker configuration file. We're going to add a Docker option (DOCKER_OPTS) to include another configuration file for when Docker is run as a service. We're going to tell Docker that, when it's run as a service, it should also look at this daemon.json configuration file for some additional JSON-based options to be enabled.
Now, we're going to both create and edit that daemon.json file, and we're going to specify the 'userns-remap' functionality to be enabled. With our user namespace remapping feature enabled on Docker, our next step is to assign the subordinate user and group identification ranges for the 'dockremap' user that the Docker service is going to run as. Without these mappings, Docker won't know which user IDs it is allowed to use as a service.
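Sketched as commands, the daemon configuration and the resulting subordinate ID entries might look like this; the 'default' value tells Docker to create and use the 'dockremap' user, and the ID ranges vary by system.

```shell
# Enable user namespace remapping daemon-wide.
echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json

# Restart the daemon so the setting takes effect.
sudo systemctl restart docker

# Inspect the subordinate UID/GID ranges assigned to dockremap,
# e.g. dockremap:100000:65536
grep dockremap /etc/subuid /etc/subgid
```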
To ensure the change takes effect, we're just going to restart the service real quick. Then we can confirm that our subordinate user and group identification files have the 'dockremap' user entries we created above. Docker will now run as these non-root users when executing any of your containers.
And our last security measure for our Docker host is something to implement at runtime of your containers: resource limits, so that your containers don't accidentally consume more than they should. You can see these limits in effect in our Docker Compose file above, underneath the 'deploy' heading. Under resources, we have those limits set, the CPU being half of one CPU and the memory being a maximum of 512 megabytes. A quick way to confirm that these limits are, in fact, in place is the 'docker stats' command, where we can see, underneath the memory usage column, that there is a limit imposed on our memory consumption for this particular container.
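The limits described can be expressed in a Compose file roughly like this; the service and image names are illustrative.

```yaml
services:
  web:
    image: nginx:stable
    deploy:
      resources:
        limits:
          cpus: "0.5"    # half of one CPU
          memory: 512M   # hard memory cap
```

Note that 'deploy.resources' limits are honored in Swarm mode and by newer Docker Compose releases.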
I hope that this video has been informative on how you can harden your Docker container host. Thank you.