How To Install Docker Compose on Ubuntu 26.04 LTS

Running multiple containers by hand quickly turns into a maintenance problem. You end up with long docker run commands scattered across shell scripts, no single place to see how your services relate to each other, and a rebuild process that breaks whenever you change one flag. That is exactly the problem Docker Compose solves, and this guide shows you how to install Docker Compose on Ubuntu 26.04 LTS and get a real multi-container stack running in under 15 minutes.

This guide was tested on a fresh Ubuntu 26.04 LTS server in April 2026, using Docker CE 29.4.0 and Docker Compose v5.1.2. Every command below was run as written, and the outputs shown are what it produced. No filler, no guesswork.

By the end of this tutorial, you will have Docker Engine installed from Docker’s official repository, the Compose v2 plugin ready to use, your user configured to run Docker without sudo, and a working two-container stack you can build on.

What Is Docker Compose and Why It Exists

Docker Compose is a tool that lets you define an entire multi-container application in a single YAML file, then start, stop, and manage all of it with one command.

Without Compose, deploying a WordPress site means running a docker run command for MariaDB with a dozen flags, then another docker run for WordPress, then manually creating a shared network between them, then re-doing all of it after every server reboot. Compose replaces all of that with a docker-compose.yml file and docker compose up -d.
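
To make that contrast concrete, here is a sketch of the manual approach Compose replaces, using the same WordPress-plus-MariaDB pair deployed later in this guide. The network name and the "changeme" passwords are illustrative placeholders, not values from an official setup:

```shell
# The manual approach: a user-defined network plus two long docker run
# commands that must be re-typed after every teardown.
docker network create wp-net

docker run -d --name db --network wp-net \
  -e MARIADB_ROOT_PASSWORD=changeme \
  -e MARIADB_DATABASE=wordpress \
  -v db_data:/var/lib/mysql \
  mariadb:11

docker run -d --name wordpress --network wp-net \
  -e WORDPRESS_DB_HOST=db \
  -e WORDPRESS_DB_NAME=wordpress \
  -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=changeme \
  -p 8080:80 \
  wordpress:6-apache
```

With Compose, all of that state lives in one file, and docker compose up -d recreates it identically every time.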

Compose is not a replacement for Docker Engine. It sits on top of it, using the Docker API to manage containers, networks, and volumes on your behalf.

Docker Compose v2 vs. the Old v1

If you have used Docker before, you may remember docker-compose (with a hyphen). That was v1, a standalone Python binary installed separately via pip or a GitHub binary download.

v2 is completely different. It ships as a Go-based plugin bundled inside the docker-compose-plugin package. There is no Python dependency, no pip, and no extra binary to manage. The command changed from docker-compose to docker compose (with a space).

v1 is officially deprecated by Docker and has no support on Ubuntu 26.04. If you try to use the old hyphenated command on a fresh install, it will not exist. Use docker compose throughout this guide.
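
If you maintain older scripts that still call the hyphenated command, one common workaround is a tiny wrapper that forwards to the v2 plugin. This is a sketch, not an official Docker mechanism, and the install path is your choice:

```shell
# Forward legacy docker-compose invocations to the Compose v2 plugin.
sudo tee /usr/local/bin/docker-compose > /dev/null <<'EOF'
#!/bin/sh
exec docker compose "$@"
EOF
sudo chmod +x /usr/local/bin/docker-compose
```

After this, old scripts that run docker-compose up will transparently execute docker compose up instead.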

Prerequisites

Before you run a single command, make sure your environment meets these requirements:

  • Ubuntu 26.04 LTS installed on a server or VM (this guide is tested on kernel 7.0.0-10-generic)
  • A non-root user with sudo privileges (running everything as root is a security risk)
  • Minimum 2 GB RAM and 20 GB of disk space for real application stacks
  • An active internet connection to pull packages from Docker’s APT repository
  • No previous Docker install from Ubuntu’s snap or default repo (those conflict with Docker CE)

Ubuntu 26.04 runs cgroup v2 exclusively with systemd 259. Most modern container images handle this fine. Very old images like Alpine 3.13 or legacy CentOS 7 may behave unexpectedly. Stick to current image tags throughout this tutorial.
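
You can confirm which cgroup version the host is running before installing anything. On a pure cgroup v2 system, the filesystem mounted at /sys/fs/cgroup reports the type cgroup2fs:

```shell
# Prints "cgroup2fs" on a pure cgroup v2 host (expected on Ubuntu 26.04);
# "tmpfs" here would indicate the legacy hybrid hierarchy.
stat -fc %T /sys/fs/cgroup
```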

Step 1: Update Your System Package Index

Before installing anything, bring the system up to date.

sudo apt update && sudo apt upgrade -y

Why: apt update refreshes the local package index against all configured repositories. Without it, apt works from a stale list and may try to install outdated package versions or fail to locate packages that were recently added. The apt upgrade step applies pending security patches so you start from a clean, patched base.

Installing Docker on a system with unresolved package conflicts is a common source of cryptic dependency errors. Running this first saves you debugging time later.

Expected output: You will see lines starting with Hit: and Get: as apt contacts each repository, followed by a summary of how many packages were upgraded.

Step 2: Install Required Dependency Packages

sudo apt install -y ca-certificates curl gnupg

Why each package matters:

  • ca-certificates: Contains the trusted root Certificate Authority certificates that allow curl to verify Docker’s HTTPS download URL. On a minimal Ubuntu server install, this package may be absent, which causes TLS errors when downloading the GPG key.
  • curl: Docker’s official repository setup uses curl to fetch the GPG signing key. Minimal Ubuntu images sometimes omit it.
  • gnupg: Ubuntu 26.04 uses GPG-signed APT repositories. The gpg command (provided by this package) is needed to import Docker’s signing key in the next step.

These three packages are prerequisites only; they are not Docker itself. If your system already has them, apt reports that they are already the newest version and moves on.

Step 3: Add Docker’s Official GPG Key

This step creates a secure keystore directory and imports Docker’s signing key into it.

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

Why this matters: APT uses cryptographic signatures to verify that the packages you install actually came from the source they claim. Without Docker’s GPG key, apt cannot verify Docker’s packages and will refuse to install them.

Why use /etc/apt/keyrings/? This is the per-repository key location recommended since Ubuntu 22.04. The older method added keys to /etc/apt/trusted.gpg.d/, which applied them globally to all repositories. That is a security problem because a key for one repository could be used to sign packages from a different one. The keyrings directory scopes each key to a specific repo.

Why --dearmor? Docker distributes its GPG key in ASCII-armored format (human-readable text). APT needs the binary version. The gpg --dearmor flag handles that conversion automatically.

Why chmod a+r? The APT process that reads this key file runs as a non-root user. Setting the file to world-readable ensures it can access the key without requiring elevated permissions.
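
As an optional sanity check, you can list the key you just imported and compare it against the fingerprint Docker publishes in its documentation:

```shell
# Inspect the dearmored keyring; the UID should read
# "Docker Release (CE deb) <docker@docker.com>".
gpg --show-keys /etc/apt/keyrings/docker.gpg
```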

Step 4: Add the Docker APT Repository

Now point APT at Docker’s official package repository.

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo $VERSION_CODENAME) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Then refresh the package index:

sudo apt update

Why use Docker’s repository instead of Ubuntu’s default? The docker.io package that ships in Ubuntu’s standard repository lags significantly behind Docker’s official releases. On Ubuntu 26.04, you need Docker CE from Docker’s own APT repo to get the docker-compose-plugin package and Compose v2.

Why $(dpkg --print-architecture)? This dynamically inserts the correct CPU architecture — amd64, arm64, or armhf. If you hardcode amd64, the repository entry breaks on ARM servers like AWS Graviton or Raspberry Pi.

Why $VERSION_CODENAME? This pulls your Ubuntu version’s codename directly from /etc/os-release. It makes this exact command work across Ubuntu versions without editing anything manually.

Why run apt update again? Adding a repository line to /etc/apt/sources.list.d/ does nothing until APT actually fetches the index from that repository. The second apt update is what activates it.
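
To confirm APT now sees Docker's repository, query the candidate version before installing. The policy output should list download.docker.com as a source:

```shell
# A candidate version of "(none)" means the repo was not picked up;
# re-check the docker.list entry and re-run apt update.
apt-cache policy docker-ce
```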

Step 5: Install Docker Engine and the Docker Compose Plugin

Install all five required packages together in a single command.

sudo apt install -y docker-ce docker-ce-cli containerd.io \
docker-buildx-plugin docker-compose-plugin

Why all five packages? Each one plays a specific role:

  • docker-ce: The Docker Engine daemon that creates and manages containers
  • docker-ce-cli: The docker command-line client you type commands into
  • containerd.io: The low-level container runtime Docker uses internally for pulling images, managing storage, and running containers
  • docker-buildx-plugin: Extended build capabilities including multi-platform image builds
  • docker-compose-plugin: Installs the docker compose subcommand. This is the target of this entire guide.

Why specify containerd.io explicitly? On Ubuntu 26.04, APT may resolve the containerd dependency using Ubuntu’s system-packaged version instead of Docker’s version. The Ubuntu-packaged containerd can be incompatible with Docker CE and cause runtime errors. Specifying containerd.io forces Docker’s tested, compatible version.

Expected output:

Reading package lists... Done
Building dependency tree... Done
The following NEW packages will be installed:
  containerd.io docker-buildx-plugin docker-ce docker-ce-cli docker-compose-plugin
0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
Need to get 112 MB of archives.
After this operation, 415 MB of additional disk space will be used.
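
Optionally, once the install completes, you can pin these packages so an unattended upgrade never restarts the engine (and every container on it) at a bad time; release the hold when you are ready to upgrade deliberately:

```shell
# Freeze the Docker packages at their current versions.
sudo apt-mark hold docker-ce docker-ce-cli containerd.io \
  docker-buildx-plugin docker-compose-plugin

# Later, to upgrade on your own schedule:
# sudo apt-mark unhold docker-ce docker-ce-cli containerd.io \
#   docker-buildx-plugin docker-compose-plugin
```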

Step 6: Verify the Docker Compose Installation

Confirm both Docker and Compose installed correctly.

docker --version
docker compose version

Expected output:

Docker version 29.4.0, build 9d7ad9f
Docker Compose version v5.1.2

Now verify the Docker daemon is running and check the system configuration:

sudo systemctl is-active docker
docker info | grep -E "Cgroup|Storage|Server Version"

Expected output:

active

 Server Version: 29.4.0
 Storage Driver: overlayfs
 Cgroup Driver: systemd
 Cgroup Version: 2

Why check the cgroup driver? Ubuntu 26.04 uses cgroup v2 with the systemd cgroup driver. If Docker shows cgroupfs instead of systemd, it means Docker is not integrated with systemd’s resource management and may produce inconsistent behavior under load. The overlayfs storage driver is the correct default on Ubuntu 26.04 for good container I/O performance.

If docker compose version fails while docker --version works, the docker-compose-plugin package was not installed. Run apt list --installed | grep docker-compose-plugin to confirm, then reinstall.

Step 7: Configure Docker Compose on Ubuntu 26.04 for Non-Root Users

By default, the Docker socket at /var/run/docker.sock is only accessible to the root user and the docker group. Every docker command requires sudo out of the box.

Add your current user to the docker group:

sudo usermod -aG docker $USER

Activate the group membership without logging out:

newgrp docker

Verify it works:

docker run hello-world

Expected output:

Hello from Docker!
This message shows that your installation appears to be working correctly.

Why newgrp docker? Group membership changes in Linux only take effect in new login sessions. newgrp docker creates a new shell session with the updated group list, so you do not have to log out and log back in.

Security note: Adding a user to the docker group gives them effective root-level access to the host. A container can mount the host filesystem with full read-write access, and so can anyone in the docker group. Only add users you already trust with root access. Never add application service accounts to the docker group on production servers.
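
To double-check that your current shell actually picked up the group change, list the groups of the running session rather than the account database:

```shell
# "id -nG" shows the groups of this shell session; if "docker" is
# missing from the output, newgrp or a fresh login is still needed.
id -nG | tr ' ' '\n' | grep -x docker
```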

Step 8: Enable Docker to Start at Boot

sudo systemctl enable docker
sudo systemctl enable containerd

Confirm it worked:

sudo systemctl is-enabled docker

Expected output: enabled

Why: systemctl enable creates a symlink in systemd’s target directories that tells systemd to start Docker and containerd automatically on every boot.

Without this, every server reboot leaves all containers stopped until someone logs in and manually runs systemctl start docker. The restart: unless-stopped directive in your Compose files only works if the Docker daemon itself starts on boot.

Ubuntu 26.04’s APT package enables the service by default during install, but confirming this explicitly is a good habit before you deploy anything that needs to survive reboots.

Step 9: Deploy Your First Multi-Container Stack

A single docker run hello-world proves Docker works. It does not prove Compose networking, named volumes, or service dependencies work. Use a real two-service stack for that.

Create a project directory:

mkdir -p ~/test-stack && cd ~/test-stack

Create a .env file for credentials:

nano .env

Add these lines:

DB_ROOT_PASS=SecureRootPass2026
DB_NAME=wordpress
DB_USER=wpuser
DB_PASS=SecureWpPass2026

Now create the Compose file:

nano docker-compose.yml

Add the following stack definition:

services:
  db:
    image: mariadb:11
    container_name: test-mariadb
    restart: unless-stopped
    environment:
      MARIADB_ROOT_PASSWORD: ${DB_ROOT_PASS}
      MARIADB_DATABASE: ${DB_NAME}
      MARIADB_USER: ${DB_USER}
      MARIADB_PASSWORD: ${DB_PASS}
    volumes:
      - db_data:/var/lib/mysql
    networks:
      - test-net
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      timeout: 5s
      retries: 3

  wordpress:
    image: wordpress:6-apache
    container_name: test-wordpress
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: ${DB_NAME}
      WORDPRESS_DB_USER: ${DB_USER}
      WORDPRESS_DB_PASSWORD: ${DB_PASS}
    volumes:
      - wp_data:/var/www/html
    networks:
      - test-net

volumes:
  db_data:
  wp_data:

networks:
  test-net:
    driver: bridge

Start the stack:

docker compose up -d

Check service status:

docker compose ps

Expected output:

NAME             IMAGE                STATUS                    PORTS
test-wordpress   wordpress:6-apache   Up 8 seconds              0.0.0.0:8080->80/tcp
test-mariadb     mariadb:11           Up 17 seconds (healthy)   3306/tcp
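
With both services up, you can also confirm WordPress is answering HTTP from the host. Either a 200 or a 302 redirect to the install wizard means the stack is healthy:

```shell
# -I requests headers only; expect a 302 redirect to the WordPress
# setup page on a fresh stack, or a 200 on a configured site.
curl -I http://localhost:8080
```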

Why depends_on with condition: service_healthy? Without it, Compose starts WordPress immediately after MariaDB’s container starts. MariaDB takes several seconds to initialize InnoDB and accept connections. WordPress tries to connect during that window, fails, and crashes. The service_healthy condition makes WordPress wait until MariaDB passes its health check before starting.

Why credentials in .env instead of directly in the YAML? Hardcoded credentials in docker-compose.yml end up in your git history. Using a .env file keeps secrets out of version control. Add .env to your .gitignore immediately.
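
If the project directory is (or will become) a git repository, exclude the .env file before the first commit:

```shell
# Keep credentials out of version control.
echo ".env" >> .gitignore
```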

Bring the stack down when done:

docker compose down

Essential Docker Compose Commands Reference

Here are the commands you will use daily when working with any Compose stack:

  • docker compose up -d: Start all services in the background (initial launch, after config changes)
  • docker compose down: Stop and remove containers and networks (teardown while preserving volume data)
  • docker compose down -v: Remove containers, networks, and named volumes (full data wipe)
  • docker compose ps: Show service status (quick health check)
  • docker compose logs -f [service]: Tail live logs from a service (debugging startup failures)
  • docker compose restart [service]: Restart one service (config reload without full teardown)
  • docker compose exec [service] bash: Open a shell inside a running container (live debugging)
  • docker compose config: Validate YAML and resolve variables (run before every up in production)
  • docker compose top: Show processes inside each container (to see what is running inside)

Always run docker compose config before deploying a changed Compose file. It validates YAML syntax and catches unresolved variable references without touching any running containers.

Troubleshooting Common Errors

Error 1: Got permission denied while trying to connect to the Docker daemon socket

Why it happens: Your user is not yet in the docker group, or your current terminal session predates the group change.

Fix:

sudo usermod -aG docker $USER
newgrp docker

If the error persists after newgrp docker, log out and log back in completely.

Error 2: docker compose: command not found

Why it happens: The docker-compose-plugin package was not installed, or you are accidentally using the old docker-compose (hyphen) syntax from v1.

Fix: Check whether the plugin is installed:

apt list --installed | grep docker-compose-plugin

If it is missing, reinstall:

sudo apt install -y docker-compose-plugin

Note the syntax carefully: docker compose (space), not docker-compose (hyphen).

Error 3: Unable to locate package docker-ce

Why it happens: Docker’s APT repository was not added correctly, or apt update was not run after adding the repo line.

Fix: Re-run Steps 3 and 4, then run sudo apt update before attempting the install again. Confirm the repo file exists:

cat /etc/apt/sources.list.d/docker.list

You should see a line pointing to https://download.docker.com/linux/ubuntu.

Error 4: Containers exit immediately after docker compose up

Why it happens: Missing depends_on or absent health checks cause services to start before their dependencies are ready. WordPress crashes when it cannot connect to a database that is still initializing.

Fix: Add depends_on with condition: service_healthy and a proper healthcheck block to your database service. Check logs to confirm what the container saw:

docker compose logs db
docker compose logs wordpress

Error 5: cgroups: cgroup mountpoint does not exist

Why it happens: This error appears when running Docker inside a VM or container that does not properly expose cgroup v2 to the guest. Ubuntu 26.04 requires cgroup v2.

Fix: On a VPS, make sure your provider uses full virtualization (KVM or similar) rather than container-based virtualization that hides the cgroup hierarchy from the guest. On a local VM, confirm the guest kernel booted with the unified cgroup hierarchy. Verify cgroup v2 support with:

cat /sys/fs/cgroup/cgroup.controllers

Congratulations! You have successfully installed Docker Compose. Thanks for using this tutorial for installing Docker Compose on the Ubuntu 26.04 LTS (Resolute Raccoon) system. For additional help or useful information, we recommend you check the official Docker website.

r00t is a Linux Systems Administrator and open-source advocate with over ten years of hands-on experience in server infrastructure, system hardening, and performance tuning. Having worked across distributions such as Debian, Arch, RHEL, and Ubuntu, he brings real-world depth to every article published on this blog. r00t writes to bridge the gap between complex sysadmin concepts and practical, everyday application — whether you are configuring your first server or optimizing a production environment. Based in New York, US, he is a firm believer that knowledge, like open-source software, is best when shared freely.
