Docker Swarm
Docker Swarm is a clustering and scheduling tool for Docker containers. With Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system.
Swarm mode is also built natively into Docker Engine, the layer between the OS and container images. Introduced in Docker Engine 1.12, swarm mode integrates Docker Swarm's orchestration capabilities directly into the engine.
Clustering is an important feature for container technology, because it creates a cooperative group of systems that can provide redundancy, enabling Docker Swarm failover if one or more nodes experience an outage. A Docker Swarm cluster also lets administrators and developers add or remove container instances as computing demands change.
An IT administrator controls Swarm through a swarm manager, which orchestrates and schedules containers. The swarm manager allows a user to create a primary manager instance and multiple replica instances in case the primary instance fails. In Docker Engine's swarm mode, the user can deploy manager and worker nodes at runtime.
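As a minimal sketch of that workflow in swarm mode, a cluster can be bootstrapped from the command line; the IP address and token below are placeholders rather than values from this article:

# On the node that will act as the first manager (the IP address is illustrative)
docker swarm init --advertise-addr 192.0.2.10

# `docker swarm init` prints a join command with a token; run it on each additional node
docker swarm join --token <worker-or-manager-token> 192.0.2.10:2377

# Back on a manager, list the nodes and their roles
docker node ls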
Docker Swarm exposes the standard Docker application programming interface (API), so it works with other tools that already communicate with the Docker API, such as Docker Machine.
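For example, Docker Machine can provision hosts that later join a swarm, and the standard API means the local docker command can be pointed at them directly; the driver and machine name below are illustrative:

# Provision a new Docker host (driver and name are examples only)
docker-machine create --driver virtualbox manager1

# Point the local docker CLI at that host before running swarm commands against it
eval "$(docker-machine env manager1)"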

Docker Swarm load balancing
Swarm uses scheduling capabilities to ensure there are sufficient resources for distributed containers. Swarm assigns containers to underlying nodes and optimizes resources by automatically scheduling container workloads to run on the most appropriate host. This orchestration ensures containers are launched only on systems with adequate resources, so containerized application workloads maintain the necessary performance levels.
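In swarm mode, this scheduling happens whenever a service is created or scaled; a brief sketch, with the service name and image chosen purely for illustration:

# Ask Swarm to keep three replicas of an nginx container running across the cluster
docker service create --name web --replicas 3 --publish published=8080,target=80 nginx:alpine

# Show which node each replica was scheduled on
docker service ps web

# Add replicas; Swarm places the new tasks on the most suitable nodes
docker service scale web=5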
Swarm uses three different strategies to determine which nodes each container should run on (see the example after this list):
- Spread -- Acts as the default setting and balances containers across the nodes in a cluster based on each node's available CPU and RAM, as well as the number of containers it is already running. The benefit of the Spread strategy is that, if a node fails, only a few containers are lost.
- BinPack -- Schedules containers to fully use each node before moving on to the next node in the cluster. The benefit of BinPack is that it packs workloads onto a smaller amount of infrastructure and leaves more space for larger containers on the unused machines.
- Random -- Chooses a node at random.
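The strategy is selected when the scheduler starts. In the legacy standalone Swarm (as opposed to swarm mode), the manager accepted a --strategy flag; a rough sketch, assuming a token-based discovery backend whose cluster ID is a placeholder:

# Start a standalone Swarm manager that packs containers onto as few nodes as possible
docker run -d -p 4000:4000 swarm manage -H tcp://0.0.0.0:4000 --strategy binpack token://<cluster_id>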
Docker Swarm filters
Swarm has five filters for scheduling containers (the Constraint and Affinity filters are illustrated in the example after this list):
- Constraint -- Also known as node tags, constraints are key/value pairs associated with particular nodes. A user can select a subset of nodes when launching a container by specifying one or more key/value pairs.
- Affinity -- To ensure containers run on the same node, the Affinity filter tells one container to run next to another based on an identifier, image or label.
- Port -- With this filter, ports represent a unique resource. When a container tries to run on a port that's already occupied, it will move to the next node in the cluster.
- Dependency -- When containers depend on each other, this filter schedules them on the same node.
- Health -- If a node is not functioning properly, this filter prevents containers from being scheduled on it.
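As a sketch of the Constraint and Affinity filters in the legacy standalone Swarm (the labels, container names and hostname below are placeholders), along with the swarm-mode equivalent of a node-tag constraint:

# Tag a node by starting its Docker daemon with a label
dockerd --label storage=ssd

# Constraint filter: only schedule this container on nodes labeled storage=ssd
docker run -d --name web -e constraint:storage==ssd nginx

# Affinity filter: schedule this container on the same node as the "web" container
docker run -d --name sidecar -e affinity:container==web alpine sleep 1d

# Swarm mode expresses the same node-tag idea with --constraint on a service
docker node update --label-add storage=ssd <node-hostname>
docker service create --name web2 --constraint 'node.labels.storage == ssd' nginx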