5 tips to help effectively manage container components

The layered approach is commonly used with Docker containers, but it's important to keep container components under control. Here are five crucial tips.

Every functioning container is built from a combination of discrete components, including a base image file and a series of one or more layer files loaded on top of the base image. This layered, or packaged, approach is commonly used in Docker containers, and it enables containers to share and reuse both the base image and each layer file. It's an elegant and efficient system that lets developers construct complex container instances on the fly with just enough components to support the desired functionality of the container.
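
For illustration, here's a minimal Dockerfile sketch -- the application directory and file names are placeholders -- in which the FROM instruction pulls in the shared base image and each subsequent instruction adds its own layer on top:

    # Base image layer -- shared by every image built FROM it
    FROM ubuntu:22.04

    # Package installation adds another layer on top of the base
    RUN apt-get update && apt-get install -y --no-install-recommends python3 \
        && rm -rf /var/lib/apt/lists/*

    # Application files form their own layer
    COPY app/ /opt/app/

    # The command to run when a container starts from this image
    CMD ["python3", "/opt/app/main.py"]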

However, this system requires careful attention to how those container components, including container images, are managed. Let's review some of the tools and practices that can help.

One application per container

Limit the number of parent processes that are allowed to run within any single container. Containers are intended to be ephemeral, so a container should live only as long as the application it contains. If a container houses several parent processes, its lifecycle is no longer tied to any single application, and it's easy to lose the efficiencies that containers provide. Instead, use separate containers and run each application component in its own container.
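
As a rough sketch of that separation -- the image name example/web-app and the password value are placeholders -- a web tier and its database can run as two single-purpose containers joined by a user-defined network, rather than being bundled into one container:

    # One process per container, connected over a shared network
    docker network create app-net
    docker run -d --name db --network app-net -e POSTGRES_PASSWORD=example postgres:16
    docker run -d --name web --network app-net -p 8080:8080 example/web-app:1.0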

Name images carefully

Software applications are generally released as container images, which include a base image and a number of added software layers that tweak and extend the base image. Whatever the composition, it's vital to document the container image by applying an appropriate name and tag to each image file.

There is no single uniform rule or requirement for container image naming and tagging, but it's helpful to adopt and maintain a policy that users can understand and follow. A common choice is semantic versioning, which uses a three-part x.y.z scheme to denote an image's major.minor.patch levels. For example, the tag 10.0.0 denotes the initial release of major version 10, while the tag 10.2.5 denotes the fifth patch to the second minor update of version 10.
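
For instance, a semantic versioning policy might look like the following at build time; the registry address registry.example.com and the image name orders-api are hypothetical:

    # Build the image with its full semantic version
    docker build -t registry.example.com/orders-api:10.2.5 .

    # Add broader convenience tags that point to the same image
    docker tag registry.example.com/orders-api:10.2.5 registry.example.com/orders-api:10.2
    docker tag registry.example.com/orders-api:10.2.5 registry.example.com/orders-api:10

    # Push the fully qualified version to the registry
    docker push registry.example.com/orders-api:10.2.5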

Make the most of the build cache

Container image files are built in a succession of layers using an instruction or template file, such as a Dockerfile. These layers -- and their order -- are typically cached by the container platform. Docker, for example, provides a build cache that enables layer reuse. The build cache can greatly accelerate subsequent container builds, but a layer is reused only when it, and every layer before it, is unchanged. Using the cache effectively therefore requires a bit of build planning.

For example, consider a build file with steps A, B and C. If a change is made to step C, the build can reuse the cached layers for steps A and B, potentially saving time and accelerating the image build. However, if step A is changed, the cached layers for B and C are invalidated and every step must run again. In practice, that means placing the steps that change least often -- such as installing dependencies -- before the steps that change frequently, such as copying application source, as in the sketch below.
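
Here is a minimal Dockerfile sketch of that ordering, assuming a Node.js application with illustrative file names; the rarely changing dependency steps come first, so the expensive install layer stays cached while only the source-copy layer rebuilds on most changes:

    FROM node:20-slim
    WORKDIR /app

    # Dependency manifests change rarely -- these layers stay cached on most builds
    COPY package.json package-lock.json ./
    RUN npm ci

    # Application source changes often -- only this layer and later ones rebuild
    COPY . .
    CMD ["node", "server.js"]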

Also, make sure any other content needed for the build -- such as external package repositories -- is actually refreshed at build time, even when the build cache is being reused. A cached layer can silently pin stale content, resulting in an incorrect build that uses older packages than intended.
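
One common way to reduce that risk -- a sketch, not the only approach -- is to keep the repository refresh and the package install in a single instruction, so a stale package index can never be reused on its own:

    # Updating and installing in one RUN keeps the package index and the install
    # in the same cached layer, so they are always rebuilt together
    RUN apt-get update && apt-get install -y --no-install-recommends curl \
        && rm -rf /var/lib/apt/lists/*

When the external content itself has changed and cached layers must not be reused at all, building with docker build --no-cache forces every step to run fresh.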

Consider container shutdown handling

Linux uses signals, such as SIGTERM, SIGKILL and SIGINT, to terminate processes, and container platforms, like Docker and Kubernetes, use those same signals to communicate with containerized processes. However, the application inside a container typically runs as process ID 1, and Linux treats PID 1 specially: signals with no explicitly registered handler are ignored rather than given their default behavior. A SIGTERM the application doesn't handle does nothing, so the platform eventually falls back to SIGKILL -- interrupting writes, causing errors and alerts, and preventing any sort of orderly container process shutdown.

The best way to overcome this issue is to employ a small, container-focused init system, such as Tini, as the container's first process. A tool like Tini runs as PID 1, forwards signals to the application so that termination signals work as expected, and reaps orphaned processes to recover their resources.
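
A minimal sketch of the Dockerfile approach, assuming a Debian- or Ubuntu-based image and a placeholder Python application; the package name and install path for Tini can vary by distribution:

    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y --no-install-recommends tini python3 \
        && rm -rf /var/lib/apt/lists/*
    COPY app/ /opt/app/

    # Tini runs as PID 1, forwards signals to the app and reaps orphaned processes
    ENTRYPOINT ["/usr/bin/tini", "--"]
    CMD ["python3", "/opt/app/main.py"]

Alternatively, Docker's docker run --init flag injects a Tini-based init process at runtime without any change to the image.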

Optimize container image files

It's usually best practice to build the smallest possible container image. It's easy to accidentally bloat an image with unoptimized base images, unexpected dependencies and unnecessary container components. Smaller image files are faster to upload and download, require fewer compute resources to run and tend to carry fewer potential vulnerabilities.

For example, you can dramatically reduce the size of the base OS layer by choosing a smaller or stripped-down Linux distribution, which sheds numerous potentially unneeded tools and packages. It's also important to declutter the image during the build process by removing unnecessary components, temporary files and ancillary tools before they end up baked in as extra layers.
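
For example -- a sketch that uses Alpine as one possible stripped-down base, with a placeholder application -- install only what the application needs and avoid leaving package-manager caches behind:

    FROM alpine:3.19

    # --no-cache keeps the apk package index out of the image entirely
    RUN apk add --no-cache python3
    COPY app/ /opt/app/
    CMD ["python3", "/opt/app/main.py"]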

Finally, consider multistage builds if the container platform supports them; Docker, for example, introduced multistage builds in version 17.05. This feature lets developers use one build stage -- for instance, a stage that compiles the application -- and copy only its output into a later stage, all within the same Dockerfile, so build tools and intermediate artifacts never reach the final image.
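
A minimal multistage sketch, assuming a Go application with illustrative paths: the first stage compiles the binary with the full toolchain, and the second stage copies in only that binary, so compilers and intermediate artifacts never ship in the final image:

    # Stage 1: full Go toolchain, used only to compile the binary
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/server .

    # Stage 2: minimal runtime image that receives only the compiled binary
    FROM alpine:3.19
    COPY --from=build /out/server /usr/local/bin/server
    CMD ["/usr/local/bin/server"]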

Plan ahead

Container images are easy to create, but effective container image optimization and management require some careful consideration and planning. Attention to factors such as image size, proper naming conventions, security, optimizations and other issues can result in safer, more efficient container usage for the enterprise.
