How To Install Docker Swarm on Fedora 43

Container orchestration has become essential for modern application deployment. Docker Swarm offers a native, straightforward solution for managing containerized applications across multiple hosts, providing seamless cluster management and high availability without the complexity of other orchestration platforms. This comprehensive guide walks you through installing and configuring Docker Swarm on Fedora 43, from initial setup to deploying your first clustered application.

What is Docker Swarm?

Docker Swarm is Docker’s native clustering and orchestration tool that transforms a group of Docker hosts into a single virtual system. Unlike standalone Docker installations, Swarm mode enables you to deploy services across multiple machines, ensuring your applications remain available even if individual nodes fail.

The platform excels in several key areas. It provides built-in load balancing through its routing mesh, automatically distributing incoming requests across available containers. Service discovery happens automatically through internal DNS, allowing containers to communicate using service names rather than IP addresses. Swarm also handles rolling updates gracefully, enabling zero-downtime deployments by updating containers incrementally.
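
Once your cluster is running (Steps 4 and 5 below), you can watch name-based discovery work for yourself. Here is a minimal sketch, where the network app-net and the services app and db are hypothetical names:

# Create an overlay network and attach two services to it
docker network create --driver overlay app-net
docker service create --name db --network app-net redis:alpine
docker service create --name app --network app-net nginx:alpine
# On a node hosting an "app" task, the service name "db" resolves via Swarm's internal DNS
docker exec $(docker ps -q -f name=app | head -n1) nslookup db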

When comparing Docker Swarm to Kubernetes, Swarm shines in simplicity and ease of deployment. The setup requires fewer resources and less operational overhead, making it ideal for small to medium-sized deployments. Teams can get production-ready clusters running in minutes rather than hours, and the learning curve is significantly gentler for those already familiar with Docker commands.

Understanding Docker Swarm Architecture

Swarm Nodes Explained

Docker Swarm clusters consist of two types of nodes: managers and workers. Manager nodes handle cluster orchestration tasks including accepting service definitions, scheduling containers across the cluster, and maintaining the desired state of services. They also serve the Swarm API and maintain the Raft consensus database that stores cluster configuration and state.

Worker nodes execute the tasks assigned by manager nodes. They run the actual application containers and report their status back to managers. While workers don’t participate in orchestration decisions, they’re essential for distributing workloads and ensuring application scalability.

Communication between nodes happens through encrypted channels by default. Swarm uses mutual TLS authentication for all node-to-node communication, ensuring that only authorized nodes can join the cluster. The Raft consensus protocol keeps all manager nodes synchronized, requiring a quorum for cluster operations.

Key Components

Services represent the primary deployment unit in Swarm. A service defines which container image to run, how many replicas to maintain, and how to expose the application. Tasks are individual container instances that implement a service, and Swarm distributes these tasks across available worker nodes.

The routing mesh provides ingress load balancing for all nodes in the cluster. When you publish a service port, Swarm makes that service accessible on every node, regardless of whether that node is running a task for the service. This automatic load balancing eliminates the need for external load balancers in many scenarios.

Prerequisites

System Requirements

Before beginning the installation, ensure your Fedora 43 system meets these requirements. You’ll need an up-to-date Fedora 43 installation with at least 2GB of RAM, though 4GB is recommended for production environments. A user account with sudo privileges is essential for executing administrative commands throughout this tutorial.

For multi-node clusters, each machine requires a static IP address or a reliable DHCP reservation to maintain consistent cluster communication. While you can test Swarm on a single node, a realistic cluster needs at least three machines, for example one manager and two workers. Note that high availability of the control plane itself requires at least three manager nodes, since a single manager is a single point of failure.

Network Requirements

Docker Swarm relies on specific network ports for cluster communication. Port 2377 TCP handles cluster management communications between manager nodes. Port 7946 TCP and UDP facilitates container network discovery across the cluster. Port 4789 UDP carries overlay network traffic using VXLAN.

Firewall configuration must allow traffic on these ports between all cluster nodes. Blocking any of these ports will prevent proper cluster operation and cause node communication failures.

Other Prerequisites

Network Time Protocol synchronization across all nodes prevents certificate validation issues and ensures consistent logging timestamps. Basic familiarity with Docker commands and containerization concepts will help you follow this guide more easily. While optional, having multiple physical or virtual machines available allows you to build a true multi-node cluster for production-like testing.
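
Fedora synchronizes time with chronyd by default, so a quick per-node check is usually enough:

systemctl is-active chronyd
chronyc tracking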

Step 1: Update Your Fedora 43 System

Start by updating your Fedora 43 system to ensure all packages are current. Open a terminal and execute:

sudo dnf update -y

This command refreshes the package repositories and installs any available updates. Keeping your system updated is critical for security and stability, particularly for container infrastructure that may host sensitive applications.

If the update includes kernel changes, reboot your system to load the new kernel:

sudo reboot

Allow the system to restart completely before proceeding. This ensures all kernel modules and system services run with the latest security patches.

Step 2: Install Docker Engine on Fedora 43

Remove Conflicting Packages

Fedora might include unofficial Docker packages that conflict with the official Docker Engine. Remove any existing installations:

sudo dnf remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-selinux \
  docker-engine-selinux \
  docker-engine

The command might report that none of these packages are installed, which is perfectly normal for fresh Fedora installations.

Add Docker Repository

Install the dnf-plugins-core package to manage DNF repositories:

sudo dnf -y install dnf-plugins-core

Add Docker’s official repository for Fedora:

sudo dnf config-manager addrepo --from-repofile=https://download.docker.com/linux/fedora/docker-ce.repo

This repository provides the latest stable Docker releases specifically built for Fedora systems. Note that Fedora 43 ships DNF 5, which uses the addrepo subcommand shown above; on older releases still running DNF 4, the equivalent is config-manager --add-repo followed by the repository URL.
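
You can optionally confirm that the repository was registered:

dnf repolist | grep -i docker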

Install Docker Packages

Install Docker Engine along with essential components:

sudo dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

This command installs several components. Docker CE (Community Edition) is the core Docker Engine. Docker CE CLI provides the command-line interface for interacting with Docker. Containerd.io serves as the container runtime. Docker Buildx Plugin enables advanced build features. Docker Compose Plugin allows multi-container application definitions.

When prompted, verify the GPG key fingerprint matches 060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35 and accept it.

Start and Enable Docker Service

Start the Docker daemon and configure it to launch automatically at boot:

sudo systemctl start docker
sudo systemctl enable docker

Verify Docker is running correctly:

sudo systemctl status docker

You should see active (running) in green, indicating the Docker service is operational.

Verify Installation

Test your Docker installation by running the hello-world container:

sudo docker run hello-world

This command downloads a minimal test image and executes it. If successful, you’ll see a message confirming Docker is correctly installed and functioning.

Optional: Configure Non-Root Access

By default, Docker commands require root privileges. To run Docker commands without sudo, add your user to the docker group:

sudo usermod -aG docker $USER

Log out and back in for the group membership to take effect. This step is optional but convenient for daily Docker operations. Note that users in the docker group have root-equivalent privileges on the system.
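
After logging back in, a quick sanity check confirms both the group membership and non-root access:

id -nG | grep -w docker
docker run --rm hello-world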

Step 3: Configure Firewall for Docker Swarm

Fedora 43 uses firewalld by default. Open the required Swarm ports:

sudo firewall-cmd --permanent --add-port=2377/tcp
sudo firewall-cmd --permanent --add-port=7946/tcp
sudo firewall-cmd --permanent --add-port=7946/udp
sudo firewall-cmd --permanent --add-port=4789/udp
sudo firewall-cmd --reload

The --permanent flag ensures these rules persist across reboots. Port 2377 TCP enables cluster management between managers. Port 7946 TCP/UDP facilitates node discovery and communication. Port 4789 UDP carries VXLAN overlay network traffic.

Verify your firewall configuration:

sudo firewall-cmd --list-all

Look for the newly added ports in the output. For production environments, consider restricting these ports to specific trusted networks rather than allowing access from all sources. Never expose port 2377 to the public internet, as it grants cluster management access.

If using encrypted overlay networks exclusively, implement additional hardening by customizing the default ingress network and restricting unencrypted VXLAN traffic.
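
Encrypted overlay networks tunnel their data-plane traffic over IPsec ESP (IP protocol 50), so if you plan to use them, also allow ESP between cluster nodes:

sudo firewall-cmd --permanent --add-protocol=esp
sudo firewall-cmd --reload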

Step 4: Initialize Docker Swarm (Manager Node)

On your designated manager node, initialize the Swarm cluster. First, determine your server’s IP address:

ip addr show

Identify the IP address of the network interface connected to your cluster network. Then initialize Swarm:

sudo docker swarm init --advertise-addr 192.168.1.100

Replace 192.168.1.100 with your actual manager node IP address. The --advertise-addr parameter specifies which IP address other nodes should use to connect to this manager.

The initialization output includes critical information. You’ll see a join token command that looks like:

docker swarm join --token SWMTKN-1-xxxxxxxxxxxxxxxxxxxxxxxxxxxx 192.168.1.100:2377

Copy this entire command immediately and store it securely. You’ll need it to add worker nodes to your cluster. The token contains encrypted credentials that authorize nodes to join your Swarm.

Verify Swarm Initialization

Check that Swarm mode is active:

docker info

Look for “Swarm: active” in the output. This confirms your node is now operating as a Swarm manager.

List cluster nodes:

docker node ls

You should see one node with the status “Ready” and “Leader” under manager status. This command only works on manager nodes, confirming your role in the cluster.

Step 5: Add Worker Nodes to the Swarm

Prepare Worker Nodes

On each machine you want to add as a worker, repeat Step 2 to install Docker Engine and Step 3 to configure the firewall. Ensure all worker nodes can communicate with the manager node over the network.

Test network connectivity from each worker to the manager:

ping -c 4 192.168.1.100
telnet 192.168.1.100 2377

Both commands should succeed. If telnet isn’t installed, use nc (netcat) instead.
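
For example, to test the management port with netcat:

nc -zv 192.168.1.100 2377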

Join Workers to Swarm

On each worker node, execute the join command you saved from Step 4:

sudo docker swarm join --token SWMTKN-1-xxxxxxxxxxxxxxxxxxxxxxxxxxxx 192.168.1.100:2377

You should see: “This node joined a swarm as a worker”. Repeat this process for each worker node in your cluster.

Verify Worker Addition

Return to the manager node and list all cluster nodes:

docker node ls

All nodes should display “Ready” status. Workers show an empty MANAGER STATUS column, while the manager displays “Leader”. If nodes don’t appear, verify network connectivity and firewall rules.

Retrieve Join Token Later

If you didn’t save the join token, retrieve it anytime from a manager node:

docker swarm join-token worker

For adding additional manager nodes:

docker swarm join-token manager

Swarm join tokens never expire on their own, so rotate them regularly as a security precaution (see Step 8).

Step 6: Deploy Your First Service on Docker Swarm

Create a Simple Service

Deploy a basic web service across your cluster:

docker service create --name web --replicas 3 -p 80:80 nginx:latest

This command creates a service named “web” running three Nginx container replicas, exposing port 80 on all cluster nodes. The --replicas parameter defines how many container instances Swarm maintains. The -p flag publishes the service on the specified port across the entire cluster.

List Services

View running services:

docker service ls

The output shows service ID, name, mode (replicated), number of replicas running versus desired, and exposed ports.

Inspect Service Details

See where Swarm scheduled your service tasks:

docker service ps web

This displays which specific nodes are running each container replica. Swarm automatically distributes tasks across available workers, balancing the workload.

Test the service by accessing http://any-node-ip in your browser. Thanks to the routing mesh, the service responds from any cluster node, regardless of which nodes actually run the containers.
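
From a shell, a quick loop against each node demonstrates the same behavior; substitute your own node addresses for the example IPs below:

for ip in 192.168.1.100 192.168.1.101 192.168.1.102; do
  curl -s -o /dev/null -w "$ip -> HTTP %{http_code}\n" http://$ip
done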

Deploy Stack with Docker Compose

For multi-service applications, use Docker Compose format. Create a file named docker-compose.yml:

version: '3.8'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == worker
    ports:
      - "8080:80"
  
  redis:
    image: redis:alpine
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == worker

Deploy the stack:

docker stack deploy --compose-file docker-compose.yml myapp

List deployed stacks:

docker stack ls
docker stack services myapp

Stacks provide a convenient way to manage related services as a single unit.

Step 7: Manage and Scale Services

Scale Services

Adjust the number of service replicas dynamically:

docker service scale web=5

Swarm automatically launches additional containers or removes excess ones to match the desired state. Verify the scaling:

docker service ps web

You’ll see five tasks distributed across your worker nodes. Scaling happens within seconds, demonstrating Swarm’s rapid response to demand changes.
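
Equivalently, you can set the replica count through a service update:

docker service update --replicas 5 web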

Update Services

Perform rolling updates with zero downtime:

docker service update --image nginx:alpine web

Swarm updates containers one at a time by default, ensuring your service remains available throughout the update process. Watch the update progress:

docker service ps web

You’ll see old tasks shutting down as new tasks start with the updated image.
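
The rollout behavior is tunable, and Swarm keeps the previous service specification so you can back out of a bad update:

# Update two tasks at a time, waiting 10 seconds between batches
docker service update --update-parallelism 2 --update-delay 10s --image nginx:alpine web
# Revert the service to its previous definition
docker service rollback web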

Remove Services

Delete a service when no longer needed:

docker service rm web

Remove an entire stack:

docker stack rm myapp

Swarm immediately stops all associated containers and cleans up resources.

Inspect Swarm Cluster

View detailed cluster information:

docker info | grep -A 20 Swarm
docker node inspect self

These commands reveal cluster configuration, node resources, and availability status.

Step 8: Implement Security Best Practices

Enable TLS Encryption

Docker Swarm uses mutual TLS encryption by default for all cluster communications. Every node receives a certificate during join operations, and Swarm automatically rotates these certificates periodically.

Verify TLS configuration on a manager node:

docker system info | grep -i security

For additional security, customize the default ingress network to use encryption.

Rotate Join Tokens Regularly

Change join tokens monthly as a security best practice:

docker swarm join-token --rotate worker
docker swarm join-token --rotate manager

Token rotation invalidates old tokens, preventing unauthorized nodes from joining if credentials leak. Update your documentation with new tokens after rotation.

Use Docker Secrets

Store sensitive data like passwords and API keys using Docker secrets instead of environment variables:

echo "my_secure_password" | docker secret create db_password -

Reference secrets in service definitions:

docker service create \
  --name mysql \
  --secret db_password \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_password \
  mysql:latest

Secrets are encrypted at rest and in transit, providing secure credential management for your applications.

Network Security

Create encrypted overlay networks for sensitive applications:

docker network create --driver overlay --opt encrypted secure-network

Deploy services to the encrypted network:

docker service create --name api --network secure-network myapp:latest

Implement firewall rules restricting port 2377 access to known manager nodes. For the data path port 4789, consider iptables rules that drop unencrypted packets:

sudo iptables -I INPUT -p udp --dport 4789 -m policy --dir in --pol none -j DROP

Access Control

Limit SSH and Docker access to manager nodes to essential personnel only. Implement jump hosts for accessing cluster nodes. Consider integrating with external authentication systems for user management. Regular security audits help identify configuration drift and potential vulnerabilities.

Step 9: Backup and High Availability

Backup Swarm State

Manager nodes store cluster state in /var/lib/docker/swarm/. Create regular backups:

sudo systemctl stop docker
sudo tar -czvf swarm-backup-$(date +%Y%m%d).tar.gz /var/lib/docker/swarm
sudo systemctl start docker

Store backups on external systems or cloud storage, and schedule them with cron. Keep in mind that stopping Docker also stops any containers running on that node, so back up during a maintenance window or from a manager that isn’t serving workloads. Test restoration procedures regularly to ensure backups are functional.
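
Restoration reverses the process: stop the daemon, replace the swarm directory from the archive, then force a new single-manager cluster from the saved state. A sketch, assuming a backup file named as above:

sudo systemctl stop docker
sudo rm -rf /var/lib/docker/swarm
sudo tar -xzvf swarm-backup-20250101.tar.gz -C /
sudo systemctl start docker
docker swarm init --force-new-cluster --advertise-addr 192.168.1.100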

Multi-Manager Setup for High Availability

Production clusters should run multiple manager nodes for fault tolerance. Docker recommends odd numbers of managers (3, 5, or 7) to maintain Raft consensus quorum. More than seven managers is rarely necessary and can impact performance.

Promote a worker to manager:

docker node promote worker1

Demote a manager to worker:

docker node demote manager2

Distribute managers across different availability zones or physical locations to survive infrastructure failures. Raft needs a majority of managers reachable to maintain quorum, so a cluster of N managers tolerates at most (N-1)/2 failures: with three managers, your cluster tolerates one manager failure; with five managers, two.

Troubleshooting Common Issues

Node Communication Problems

If nodes can’t communicate, verify network connectivity:

ping worker1
telnet worker1 2377

Check firewall rules on all nodes:

sudo firewall-cmd --list-all

Ensure all required ports are open. Verify SELinux isn’t blocking connections:

sudo ausearch -m avc -ts recent

Test bidirectional connectivity between all cluster nodes.

Service Discovery Issues

When services can’t resolve DNS names, check overlay network configuration:

docker network ls
docker network inspect ingress

Verify services are attached to the correct networks. Restart the Docker daemon on affected nodes if DNS resolution fails consistently:

sudo systemctl restart docker

Services Not Starting

Inspect service logs for errors:

docker service logs web
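
Failures that occur before a container starts, such as image pull errors, appear in the task list rather than the service logs:

docker service ps web --no-trunc --format '{{.Name}}: {{.CurrentState}} {{.Error}}'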

Check resource availability on worker nodes:

docker node ls
docker node inspect worker1 --format '{{ .Status.State }}'

Insufficient CPU or memory can prevent task scheduling. Review placement constraints that might prevent scheduling on available nodes. Verify the container image is accessible from all nodes.

Node Status Issues

When nodes show “Down” status:

docker node ls

SSH to the affected node and check Docker service status:

sudo systemctl status docker
journalctl -u docker -n 50

Restart Docker if necessary:

sudo systemctl restart docker

If the node remains down, consider draining tasks and removing it from the cluster:

docker node update --availability drain worker1
docker node rm worker1

Data Replication Problems

Verify volume configurations for stateful services. Swarm doesn’t automatically replicate volume data between nodes. Use distributed storage solutions like NFS, GlusterFS, or cloud storage for shared data access. Check that volume mount paths exist on all nodes where tasks might run.
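
As one sketch of the NFS approach, the local volume driver can mount an export on whichever node runs the task; the server address 192.168.1.200 and export path /export/data below are hypothetical:

docker service create --name app \
  --mount 'type=volume,source=shared-data,target=/data,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/export/data,volume-opt=o=addr=192.168.1.200' \
  nginx:alpine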

Congratulations! You have successfully installed Docker Swarm. Thanks for using this tutorial to install the latest version of Docker Swarm on Fedora 43 Linux. For additional help or useful information, we recommend you check the official Docker website.
