Most web-based applications in production use multiple containers, which lets you add capacity to handle increased workloads from high traffic volume. Using multiple Dockerfiles helps you decide how to partition an application into functional pieces, but you must first ask yourself a few questions during the decision-making process.
You might use multiple Dockerfiles to spin up more front-end worker nodes behind a load balancer to keep up with user demand. But you must consider a host of factors, such as which applications can benefit from Dockerfiles, how many Dockerfiles each application requires and whether you have the processes to track them.
Considerations for configuring applications with Dockerfiles
Building and managing a distributed application can be a complex process, but with Dockerfiles and tools such as GitHub you can keep your application configurations as simple as possible. Before you use Dockerfiles, ask yourself these questions to ensure your projects are successful.
What types of applications benefit from multiple Dockerfiles?
Any application with a user-facing front end, a logic or processing piece and some type of storage back end benefits from multiple Dockerfiles.
The Docker Voting App sample available on GitHub is one example of this architecture. This application uses Python, Node.js, Redis, PostgreSQL and a .NET worker application. If you decide to test this application on your local machines, you must enable drive sharing from the Docker settings application.
Each of the nodes in this application comes with a custom Dockerfile. In some cases, you have implementation choices for specific tasks, such as Python or ASP.NET Core for the front-end web application. The sample application has a total of five functional nodes, with Dockerfiles available for each.
How do you decide which services require their own Dockerfile?
In the Docker Voting App example, you have two front-end applications that could run as a single node, one cache node, one database node and a single worker node. Breaking an application down into functional areas makes it easier to offer different Dockerfile options for each node.
How many Dockerfiles should you have per application?
You should have one Dockerfile per distinct service or function. For example, in the Docker Voting App you'll find one Dockerfile for each of the five functional nodes.
Some services have multiple Dockerfiles covering different implementation options; the worker node has two Dockerfiles, one with instructions for the .NET version and a second for the Java implementation. Both contain a minimal number of instructions focused on that specific service.
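To make the one-Dockerfile-per-service idea concrete, a minimal Dockerfile for a Python front-end service might look like the following sketch. The file names, base image and port here are illustrative assumptions, not taken from the actual Voting App repository:

```
# Minimal sketch of a Dockerfile for a Python front-end service
# (app.py, requirements.txt and port 80 are assumed names/values)
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the service code
COPY . .

EXPOSE 80
CMD ["python", "app.py"]
```

Note how the file stays focused on a single service; the cache, database and worker nodes would each have their own short Dockerfile in their own directories.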
How do you containerize an application with multiple Dockerfiles?
Docker Compose is the most common way to build an application that uses multiple Dockerfiles. It uses a YAML file, docker-compose.yml, that describes how to build and run each container. The Docker docs site has a listing of the Compose file for the Docker Voting App. Each functional service has a separate section, which includes everything necessary to build the application. The dockerfile keyword in docker-compose.yml enables you to specify a filename other than the default Dockerfile.
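A trimmed-down Compose file illustrating this pattern might look like the sketch below. The service names, directory paths and ports are assumptions for illustration, not the actual Voting App file:

```
services:
  vote:
    build:
      context: ./vote          # directory containing the service code
      dockerfile: Dockerfile   # default name, shown for clarity
    ports:
      - "5000:80"
  redis:
    image: redis:alpine        # prebuilt image, no Dockerfile needed
  worker:
    build:
      context: ./worker
      dockerfile: Dockerfile.java  # alternate filename via the dockerfile keyword
```

Running `docker compose up` would then build each service from its own Dockerfile and start all of the containers together.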
How do you avoid excessive resource consumption?
Resource limits or quotas are the top ways to limit resource usage in a container architecture. With Docker, you can enforce hard memory limits using command-line options such as --memory and --memory-swap. You can limit CPU usage in a similar fashion with options such as --cpus.
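For example, a docker run invocation that caps a container at 512 MB of memory with no additional swap and a single CPU core might look like this sketch; the image name is a placeholder:

```
# Cap memory at 512 MB, disallow extra swap and limit CPU to one core
docker run -d \
  --memory=512m \
  --memory-swap=512m \
  --cpus=1 \
  my-frontend:latest
```

Setting --memory-swap equal to --memory prevents the container from using swap space beyond its memory limit.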
Kubernetes also supports resource consumption limits on a per-namespace basis using a YAML file with ResourceQuota or LimitRange definitions.
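A minimal sketch of a Kubernetes ResourceQuota applied to a namespace might look like the following; the quota name, namespace and values are illustrative assumptions:

```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota
  namespace: voting-app      # assumed namespace name
spec:
  hard:
    requests.cpu: "2"        # total CPU requested across the namespace
    requests.memory: 4Gi     # total memory requested
    limits.cpu: "4"          # total CPU limit
    limits.memory: 8Gi       # total memory limit
```

Applying this with `kubectl apply -f quota.yaml` caps the aggregate resources that all pods in the namespace can request or consume.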
A namespace in Kubernetes is similar to a programming language feature where you can provision or instantiate individual resources with appropriate management and security controls. It's also somewhat analogous to a virtual environment in Python.
What's the best way to organize and keep track of multiple Dockerfiles?
Building a distributed application spanning multiple containers is a prime candidate for a DevOps approach. DevOps lends itself well to using some type of source code control, such as Git, to manage versioning and dependencies of each release.
Keeping your application configurations as simple as possible will go a long way toward making your projects successful. A good place to start is with one of the example applications for your initial development and deployment testing. Then begin adding your own code to incrementally build out your first release. If nothing else, you will learn a lot about your code and how to get it up and running.