How To Install Docker Swarm on Debian 12
In this tutorial, we will show you how to install Docker Swarm on Debian 12. Docker Swarm has emerged as a powerful native clustering and orchestration solution for containerized applications. For system administrators and DevOps professionals seeking robust container orchestration on Debian 12, Docker Swarm offers an accessible yet powerful alternative to more complex solutions like Kubernetes.
Introduction
Docker Swarm transforms standalone Docker hosts into a clustered environment, enabling high availability, load balancing, and simplified scaling of containerized applications. Unlike its more complex counterpart Kubernetes, Docker Swarm offers a gentler learning curve while providing essential orchestration capabilities for many production workloads.
Debian 12 “Bookworm” provides an excellent foundation for Docker Swarm deployments due to its stability, security focus, and long-term support. This combination delivers a reliable platform for container orchestration that balances performance with operational simplicity.
This guide targets system administrators and DevOps engineers looking to implement container orchestration in environments where Kubernetes might be overkill. By the end, you’ll have a fully functional Docker Swarm cluster running on Debian 12 with the knowledge to deploy, scale, and manage containerized applications effectively.
Prerequisites
Before beginning the Docker Swarm installation process, ensure your environment meets these requirements:
System Requirements
- At least 2 Debian 12 servers (1 manager and 1 worker minimum, though 3+ nodes recommended for production)
- 2 CPU cores per node (minimum)
- 2GB RAM per node (minimum, 4GB+ recommended for production)
- 20GB available storage space (minimum)
- Stable network connectivity between all nodes
Network Requirements
- All nodes must be able to communicate with each other
- The following ports must be open between nodes:
  - TCP port 2377 for cluster management communications
  - TCP and UDP port 7946 for node-to-node communication
  - UDP port 4789 for overlay network traffic
Access Requirements
- Root or sudo access on all nodes
- Basic understanding of Linux command line operations
- Familiarity with basic Docker concepts
Ensuring these prerequisites are met will help create a smooth installation experience and a stable Docker Swarm environment.
Environment Preparation
Proper environment preparation creates the foundation for a reliable Docker Swarm cluster. Let’s set up each node with the correct configurations.
Hostname Configuration
Start by setting meaningful hostnames for each node. This improves cluster management and troubleshooting by making nodes easily identifiable.
# On manager node
sudo hostnamectl set-hostname manager
# On worker nodes
sudo hostnamectl set-hostname worker-01
# (Repeat for additional worker nodes with incremented numbers)
Host File Configuration
For proper inter-node communication, configure the /etc/hosts file on each node:
sudo nano /etc/hosts
Add entries for all nodes in your cluster:
# Docker Swarm Cluster Nodes
192.168.1.10 manager
192.168.1.11 worker-01
192.168.1.12 worker-02
# Add additional nodes as needed
Replace the IP addresses with the actual IPs of your nodes. This configuration enables nodes to resolve each other by hostname, which simplifies cluster management.
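You can quickly confirm that resolution works from each node, for example:
# Check that peers resolve by hostname
getent hosts worker-01
ping -c 1 worker-01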
System Updates
Update all nodes to ensure they have the latest security patches and packages:
sudo apt update
sudo apt upgrade -y
Required Packages
Install the necessary dependencies for Docker installation:
sudo apt install -y ca-certificates curl gnupg lsb-release dpkg
Firewall Configuration
If you’re using UFW (Uncomplicated Firewall), configure it to allow Docker Swarm communication:
# Allow SSH connections
sudo ufw allow 22/tcp
# Allow Docker Swarm ports
sudo ufw allow 2377/tcp
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp
# Enable the firewall
sudo ufw enable
Verify the firewall configuration:
sudo ufw status
With the environment properly prepared, you’re ready to install Docker Engine on all nodes.
Installing Docker Engine on Debian 12
Docker Engine provides the core container runtime needed before configuring Swarm functionality. Follow these steps to install Docker on each Debian 12 node.
Adding Docker’s Official GPG Key
First, create the keyrings directory and add Docker’s official GPG key:
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
Setting Up Docker Repository
Add the Docker repository to your system:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-ce.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Installing Docker Packages
Update the package index and install Docker Engine:
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Verifying Installation
Confirm Docker installed correctly by running a test container:
sudo docker run hello-world
You should see a message confirming that Docker is working properly. If you encounter errors, check that the Docker daemon is running:
sudo systemctl status docker
Configuring Docker to Start on Boot
Ensure Docker starts automatically when the system boots:
sudo systemctl enable docker
If you experience installation issues, verify that all prerequisites are met and that your system can reach the Docker repositories. Common problems include network connectivity issues or incorrect repository configurations.
Post-Installation Configuration
After installing Docker, some additional configuration enhances security and usability.
Setting Up Docker for Non-Root Usage
By default, Docker commands require superuser privileges. Create a docker group and add your user to it so you can run Docker commands without sudo (note that membership in the docker group is effectively root-equivalent):
sudo groupadd docker
sudo usermod -aG docker $USER
Log out and back in for these changes to take effect, or run:
newgrp docker
Test that you can run Docker commands without sudo:
docker run hello-world
Configuring Docker Daemon Options
Create or modify the Docker daemon configuration file for customized settings:
sudo mkdir -p /etc/docker
sudo nano /etc/docker/daemon.json
Add base configuration options:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"default-address-pools": [
{
"base": "172.18.0.0/16",
"size": 24
}
]
}
This configuration:
- Sets a reasonable log rotation policy
- Configures the address pool for Docker networks
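Before restarting Docker, it is worth confirming the file is valid JSON. One quick check, assuming python3 is installed (it normally is on Debian 12):
python3 -m json.tool /etc/docker/daemon.json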
For Docker Swarm functionality, ensure the live-restore feature is disabled, as it is incompatible with Swarm mode:
{
"live-restore": false,
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"default-address-pools": [
{
"base": "172.18.0.0/16",
"size": 24
}
]
}
Restart the Docker service to apply changes:
sudo systemctl restart docker
Proper post-installation configuration improves Docker’s performance, security, and compatibility with Docker Swarm.
Docker Swarm Architecture Overview
Before initializing Docker Swarm, understanding its architecture helps in designing an optimal cluster.
Manager vs Worker Nodes
Docker Swarm implements a manager-worker architecture:
- Manager nodes handle cluster management operations and orchestrate workloads
- Worker nodes execute the containers and application workloads
In a production environment, it’s recommended to have multiple manager nodes for high availability, ideally 3, 5, or 7 (odd numbers prevent split-brain scenarios).
Control Plane Components
The Swarm control plane consists of:
- Raft consensus store: Maintains the state of the cluster
- Scheduler: Assigns tasks to worker nodes
- Dispatcher: Distributes tasks to workers
- Orchestrator: Reconciles desired state with actual state
Service Concepts
In Docker Swarm, applications run as services rather than individual containers:
- Services: Define the desired state of an application
- Tasks: Individual container instances that make up a service
- Replicas: Multiple instances of a task for redundancy and load balancing
Understanding these architectural elements helps you design an appropriate Swarm topology for your workloads and reliability requirements.
Initializing the Docker Swarm Manager
With Docker installed on all nodes, you can now initialize the Swarm cluster, starting with the manager node.
Initializing the First Manager Node
On the designated manager node, initialize Docker Swarm:
docker swarm init --advertise-addr <MANAGER-IP>
Replace <MANAGER-IP> with the IP address that other nodes will use to reach this manager.
The command output will provide a token for worker nodes to join the cluster. Save this token securely, as you’ll need it when adding worker nodes.
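If you misplace the token, it can be printed again at any time from the manager:
docker swarm join-token worker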
Verifying Initialization
Check the status of your new Swarm:
docker info
Look for the “Swarm: active” section to confirm initialization.
You can also verify the node list:
docker node ls
This should show your manager node with the “Leader” status.
Setting Default Address Pool
If you need to customize the address pools used by overlay networks, you can initialize Swarm with specific options:
docker swarm init --advertise-addr <MANAGER-IP> --default-addr-pool 10.20.0.0/16 --default-addr-pool-mask-length 24
This configuration helps prevent IP addressing conflicts in complex network environments.
Configuring High Availability for Managers
For production environments, implementing high availability ensures your cluster remains operational even if a manager node fails.
Adding Additional Manager Nodes
To add more manager nodes, first retrieve the manager join token:
docker swarm join-token manager
This command outputs the token and the complete join command. On each additional node that should join as a manager, run the provided join command:
docker swarm join --token SWMTKN-1-<token> <MANAGER-IP>:2377
Understanding Raft Consensus
Docker Swarm uses the Raft consensus algorithm, which requires a majority of manager nodes (quorum) to remain available for the cluster to function. The formula to calculate maximum tolerable failures is (n-1)/2, where n is the number of manager nodes:
- 3 managers: tolerates 1 failure
- 5 managers: tolerates 2 failures
- 7 managers: tolerates 3 failures
Always use an odd number of managers to prevent split-brain situations during network partitions.
Manager Node Distribution
For maximum resilience, distribute manager nodes across different failure domains:
- Different physical servers
- Different racks or power supplies
- Different availability zones if in a cloud environment
This distribution minimizes the risk of multiple failures affecting the manager quorum.
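The same failure-domain thinking can be applied to workloads once nodes are labeled with their location. A minimal sketch, assuming a hypothetical zone label on each worker:
# Record each node's (hypothetical) availability zone
docker node update --label-add zone=az1 worker-01
docker node update --label-add zone=az2 worker-02
# Ask Swarm to spread replicas evenly across zone values
docker service create --name web \
  --replicas 4 \
  --placement-pref 'spread=node.labels.zone' \
  nginx:alpine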
Adding Worker Nodes to the Swarm
With the manager nodes configured, you can expand your cluster by adding worker nodes.
Retrieving Worker Join Token
On a manager node, obtain the worker join token:
docker swarm join-token worker
The command outputs the complete join command with the token.
Joining Worker Nodes
On each worker node, execute the join command provided:
docker swarm join --token SWMTKN-1-<token> <MANAGER-IP>:2377
Upon successful execution, you’ll see the message: “This node joined a swarm as a worker.”
Verifying Node Addition
From a manager node, verify that all nodes have joined the cluster:
docker node ls
You should see all your nodes listed with their respective roles (manager/worker) and availability status.
Adding Labels to Nodes
Node labels help with task placement and can be used to identify node characteristics:
docker node update --label-add datacenter=east worker-01
docker node update --label-add rack=rack1 worker-01
These labels can later be used in deployment constraints to control where containers run.
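For example, a service can be restricted to the nodes labeled above (nginx:alpine is used here purely as a placeholder image):
docker service create --name regional-app \
  --constraint 'node.labels.datacenter==east' \
  --replicas 2 \
  nginx:alpine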
Managing Your Docker Swarm Cluster
Once your Swarm is operational, you’ll need to know how to manage the cluster effectively.
Viewing Cluster Information
Get a comprehensive overview of your Swarm cluster:
docker node ls
For detailed information about a specific node:
docker node inspect <NODE-ID> --pretty
Changing Node Roles
To promote a worker node to manager:
docker node promote worker-01
Conversely, to demote a manager to worker:
docker node demote manager-02
Remember to maintain an odd number of managers for proper Raft consensus.
Handling Node Maintenance
When performing maintenance on a node, put it in “drain” mode to gracefully remove all running containers:
docker node update --availability drain worker-01
After maintenance, restore the node to active duty:
docker node update --availability active worker-01
Removing Nodes from the Swarm
To remove a node from the Swarm, first drain it (if it’s still running), then run this command on the node itself:
docker swarm leave
For manager nodes, use the force flag:
docker swarm leave --force
Finally, remove the node from the Swarm’s list on a manager node:
docker node rm <NODE-ID>
Rotating Swarm Join Tokens
For security, periodically rotate your join tokens:
docker swarm join-token --rotate worker
docker swarm join-token --rotate manager
This invalidates old tokens, preventing unauthorized nodes from joining your cluster.
Deploying Services on Docker Swarm
The primary purpose of a Docker Swarm cluster is to run containerized applications as services.
Creating Your First Service
Deploy a simple web service:
docker service create --name webserver --replicas 3 --publish 8080:80 nginx
This command:
- Creates a service named “webserver”
- Deploys 3 replicas of the container
- Maps port 8080 on the host to port 80 in the container
- Uses the nginx image from Docker Hub
Understanding Service Modes
Docker Swarm supports two service modes:
- Replicated services (default): Run a specified number of replicas across the cluster
docker service create --name api --replicas 5 my-api-image
- Global services: Run one instance on every node in the cluster
docker service create --name monitoring --mode global prom/node-exporter
Scaling Services
Adjust the number of containers in a service:
docker service scale webserver=5
This scales the “webserver” service to 5 replicas.
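Confirm the new replica count and where each task was scheduled:
docker service ps webserver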
Constraining Service Placement
Control where services run using placement constraints:
docker service create --name database \
--constraint 'node.labels.type==storage' \
--replicas 3 \
mysql:8.0
This ensures the database service only runs on nodes with the “type=storage” label.
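For the constraint to be satisfiable, at least one node must actually carry the label, for example (worker-02 is chosen here only for illustration):
docker node update --label-add type=storage worker-02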
Service Resource Limits
Set CPU and memory limits for services:
docker service create --name resource-limited \
--limit-cpu 0.5 \
--limit-memory 512M \
--reserve-cpu 0.25 \
--reserve-memory 256M \
nginx
This configuration limits the service to use a maximum of 0.5 CPU cores and 512MB of memory, while reserving at least 0.25 CPU cores and 256MB of memory.
Updating Running Services
Perform rolling updates of services:
docker service update --image nginx:alpine webserver
Control update behavior with additional flags:
docker service update \
--update-parallelism 2 \
--update-delay 20s \
--update-failure-action rollback \
webserver
This configuration updates two containers at a time, waits 20 seconds between updates, and automatically rolls back if the update fails.
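If an update misbehaves and the automatic rollback does not trigger, you can revert to the previous service specification manually:
docker service rollback webserver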
Working with Docker Swarm Networks
Networking is a crucial aspect of container orchestration in Docker Swarm.
Understanding Swarm Networking
Docker Swarm uses several network types:
- overlay: Multi-host networks for service communication
- ingress: Special overlay network for service publishing
- docker_gwbridge: Bridge network connecting overlay networks to host network
Creating Custom Overlay Networks
Create a custom overlay network for your services:
docker network create --driver overlay --attachable frontend
The --attachable flag allows standalone containers to connect to this network.
Attaching Services to Networks
Deploy services on specific networks:
docker service create --name api \
--network frontend \
--replicas 3 \
my-api-image
Encrypting Overlay Network Traffic
For enhanced security, encrypt data plane traffic:
docker network create --driver overlay \
--opt encrypted \
secure-network
This encrypts all traffic on this overlay network between containers.
Network Troubleshooting
Diagnose network issues with these commands:
# List all networks
docker network ls
# Inspect a network
docker network inspect frontend
# Check connectivity between services
docker exec -it <container-id> ping <service-name>
Proper network configuration ensures your services can communicate securely and efficiently within the Swarm cluster.
Setting Up a Local Registry
A local Docker registry allows you to store and distribute your container images within the Swarm, reducing external dependencies and bandwidth usage.
Creating a Registry Service
Deploy a registry service in your Swarm:
docker service create \
--name registry \
--publish 5000:5000 \
--mount type=bind,src=/mnt/registry,dst=/var/lib/registry \
registry:2
This command:
- Creates a service named “registry”
- Maps port 5000 to the registry’s standard port
- Bind-mounts the host directory /mnt/registry to store images persistently
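Because this is a bind mount, the /mnt/registry directory must already exist on whichever node runs the registry task. One approach (a sketch, assuming you keep the registry on the manager) is to create the directory there and pin the task to manager nodes:
# On the manager node: create the storage directory
sudo mkdir -p /mnt/registry
# Pin the registry task to manager nodes so it always finds its data
docker service update --constraint-add 'node.role==manager' registry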
Configuring Nodes to Use the Registry
Configure all Swarm nodes to trust your local registry:
sudo nano /etc/docker/daemon.json
Add or update the following:
{
"live-restore": false,
"insecure-registries": ["manager:5000"]
}
Restart Docker to apply the changes:
sudo systemctl restart docker
Pushing Images to Your Registry
Tag and push images to your local registry:
docker build -t my-application:1.0 .
docker tag my-application:1.0 manager:5000/my-application:1.0
docker push manager:5000/my-application:1.0
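You can confirm the push succeeded by querying the registry’s catalog endpoint:
curl http://manager:5000/v2/_catalog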
Using Registry Images in Services
Deploy services using images from your local registry:
docker service create \
--name my-app \
--replicas 3 \
manager:5000/my-application:1.0
A local registry improves deployment reliability and reduces external dependencies, particularly important in environments with limited internet bandwidth.
Advanced Docker Swarm Features
Docker Swarm includes several advanced features that enhance security, configuration management, and deployment flexibility.
Implementing Docker Secrets
Store sensitive data securely with Docker secrets:
# Create a secret
echo "supersecretpassword" | docker secret create db_password -
# Use the secret in a service
docker service create \
--name database \
--secret db_password \
--env MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_password \
mysql:8.0
Secrets are mounted as files in the container’s filesystem, allowing applications to access them securely.
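You can list the secrets known to the Swarm (values are never displayed) and, as a quick sanity check, read the mounted file from inside a running task container:
# List secrets managed by the Swarm
docker secret ls
# Inside a task container, the secret is available as a file
docker exec <container-id> cat /run/secrets/db_password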
Using Docker Configs
Manage configuration files with Docker configs:
# Create a config from a file
docker config create nginx_conf nginx.conf
# Use the config in a service
docker service create \
--name webserver \
--config source=nginx_conf,target=/etc/nginx/nginx.conf \
nginx
This keeps configuration separate from container images, improving maintainability.
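Configs are immutable, so changing one means creating a new config object and swapping it on the service. A sketch, assuming a revised nginx.conf on disk:
docker config create nginx_conf_v2 nginx.conf
docker service update \
  --config-rm nginx_conf \
  --config-add source=nginx_conf_v2,target=/etc/nginx/nginx.conf \
  webserver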
Setting Up Health Checks
Implement container health checks for automatic recovery:
docker service create \
--name api \
--health-cmd "curl -f http://localhost/health || exit 1" \
--health-interval 30s \
--health-retries 3 \
--health-timeout 10s \
my-api-image
This configuration periodically checks the container’s health and restarts it if checks fail.
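To see whether health checks are passing, inspect the service’s tasks, or the container state on the node where a task runs:
# Task-level view: failed health checks show up as restarted tasks
docker service ps api
# Container-level view on the node running a task
docker inspect --format '{{.State.Health.Status}}' <container-id>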
Deploying Stacks with Docker Compose
Docker Compose files can define complex multi-service applications:
# docker-compose.yml
version: '3.8'
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 3
  api:
    image: my-api:latest
    deploy:
      replicas: 2
  database:
    image: postgres:13
    volumes:
      - db-data:/var/lib/postgresql/data
    deploy:
      placement:
        constraints:
          - node.labels.type==storage
volumes:
  db-data:
Deploy the stack:
docker stack deploy -c docker-compose.yml myapp
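A few commands for inspecting and cleaning up the stack afterwards:
# List the stack's services and their replica counts
docker stack services myapp
# Show individual tasks and where they were placed
docker stack ps myapp
# Remove the whole stack when it is no longer needed
docker stack rm myapp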
These advanced features provide the tools needed for enterprise-grade container orchestration in Docker Swarm.
Troubleshooting Common Issues
Even well-configured Docker Swarm clusters can encounter issues. Here’s how to diagnose and resolve common problems.
Node Connectivity Problems
If nodes can’t communicate:
- Verify firewall settings allow traffic on ports 2377, 7946, and 4789
- Check network connectivity between nodes
- Ensure hostname resolution works correctly
# Test connectivity
ping worker-01
telnet worker-01 2377
Service Deployment Failures
When services fail to deploy:
- Check available resources on worker nodes
- Verify image availability and correct name
- Review service logs for specific errors
# View service logs
docker service logs <service-name>
# Check task status
docker service ps <service-name> --no-trunc
Manager Quorum Loss
If the Swarm loses manager quorum:
- If possible, restore failed manager nodes
- Force a new Swarm on a surviving manager if recovery isn’t possible:
docker swarm init --force-new-cluster --advertise-addr <MANAGER-IP>
This recreates a single-manager cluster from the existing state, after which you can add new managers.
Container Image Pull Failures
When containers can’t pull images:
- Verify registry connectivity
- Check authentication credentials
- Ensure the image exists and is tagged correctly
# Test registry connectivity
curl -v http://manager:5000/v2/
Effective troubleshooting skills are essential for maintaining a reliable Docker Swarm environment.
Performance Tuning
Optimize your Docker Swarm cluster for improved performance and resource utilization.
Docker Daemon Tuning
Adjust the Docker daemon configuration:
{
"live-restore": false,
"storage-driver": "overlay2",
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"default-ulimits": {
"nofile": {
"Name": "nofile",
"Hard": 64000,
"Soft": 64000
}
}
}
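After editing, restart Docker and, if you set default-ulimits, verify that a container actually inherits the new limit (busybox is used here only as a throwaway test image):
sudo systemctl restart docker
docker run --rm busybox sh -c 'ulimit -n'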
Container Resource Allocation
Set appropriate limits for containers based on workload profiles:
- CPU-intensive applications: Allocate more CPU, moderate memory
- Memory-intensive applications: Ensure adequate memory limits
- I/O-intensive applications: Consider storage driver optimizations
Network Performance
For improved network performance:
- Use host networking where appropriate for performance-critical services
- Configure MTU settings appropriate for your network
- Consider network plugin selection based on performance requirements
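One common tweak is publishing a port directly on each node in host mode, which bypasses the routing mesh for latency-sensitive services. A sketch using nginx as a stand-in workload:
docker service create --name edge-proxy \
  --mode global \
  --publish mode=host,target=80,published=8080 \
  nginx:alpine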
Storage Driver Selection
The storage driver significantly impacts container performance:
- overlay2 offers the best balance of performance and compatibility
- devicemapper may provide better isolation at a performance cost
- btrfs or zfs require specific filesystem support but offer advanced features
Choosing the right performance optimizations depends on your specific workload characteristics and hardware resources.
Congratulations! You have successfully installed Docker Swarm. Thanks for using this tutorial to install Docker Swarm on Debian 12 “Bookworm”. For additional help or useful information, we recommend you check the official Docker website.