How To Install GlusterFS on AlmaLinux 10


Running storage off a single server is a liability. When that server goes down, everything depending on it goes down with it — your applications, your backups, and your users’ trust. That is the exact problem GlusterFS solves.

GlusterFS is a free, open-source distributed file system that aggregates storage from multiple servers into one unified namespace. Instead of one server holding all your data, multiple nodes share the load, replicate files automatically, and keep your storage online even when a node fails.

In this Linux server tutorial, you will learn how to install GlusterFS on AlmaLinux 10, form a two-node storage cluster, create a replicated volume, and mount it on a client machine. Every command is tested, every output is explained, and every common mistake gets its own fix at the end.

By the time you finish this guide, you will have a production-ready GlusterFS on AlmaLinux 10 setup that survives node failures without any manual intervention.

What Is GlusterFS and Why Does It Matter?

GlusterFS is a scalable, network-attached storage solution originally developed by Gluster Inc. and now maintained under the Red Hat ecosystem. It works by linking together storage from multiple physical or virtual servers, called bricks, into a single mountable volume.

Unlike traditional NFS shares, GlusterFS has no single metadata server. Every node in the cluster knows the full picture. This shared-nothing architecture is what gives GlusterFS its fault tolerance and horizontal scalability — you can add nodes to increase capacity without downtime.

For sysadmins running AlmaLinux 10, GlusterFS is a natural fit. AlmaLinux 10 is a binary-compatible RHEL 10 rebuild, and GlusterFS installs cleanly through the CentOS Storage SIG repositories that AlmaLinux officially supports. You get enterprise-grade distributed storage without an enterprise-grade licensing bill.

GlusterFS Volume Types at a Glance

Before you configure GlusterFS on AlmaLinux 10, you need to pick the right volume type for your workload. Here are the five main options:

  • Distributed — files spread across bricks with no redundancy; best for maximum raw capacity
  • Replicated — a full copy of every file on each brick; best for high availability and critical data
  • Distributed Replicated — files distributed across replica sets; best for scalability plus redundancy
  • Dispersed — erasure coding across bricks; best for space-efficient fault tolerance
  • Distributed Dispersed — files distributed across dispersed subvolumes; best for large-scale production clusters

This guide uses a Replicated volume with two nodes. That means every file you write gets copied to both servers automatically. If one server goes offline, your data stays online and accessible from the other.

Prerequisites

Before you start, make sure you have the following in place:

  • Two AlmaLinux 10 server nodes (referred to as gluster1 and gluster2 in this guide)
  • One AlmaLinux 10 client node (referred to as glusterclient)
  • Root or sudo privileges on all three nodes
  • A dedicated secondary disk on each server node for brick storage (e.g., /dev/sdb) — do not use the root disk
  • Static IP addresses or working DNS/hostname resolution between all nodes
  • Network connectivity between all nodes on the same subnet
  • All nodes updated before you begin
  • Firewall (firewalld) active and running on both server nodes
  • SELinux in permissive mode during setup (recommended for first-time configurations)

Lab Environment Used in This Guide

  • Server 1 — hostname gluster1, IP 192.168.1.10, GlusterFS server
  • Server 2 — hostname gluster2, IP 192.168.1.11, GlusterFS server
  • Client — hostname glusterclient, IP 192.168.1.12, GlusterFS client

Step 1: Update Your System and Configure Hostnames

Start by updating all three nodes. Stale packages are one of the most common sources of unexplained installation failures.

Run this on all three nodes:

sudo dnf update -y

Next, set the correct hostname on each node. On gluster1:

sudo hostnamectl set-hostname gluster1

On gluster2:

sudo hostnamectl set-hostname gluster2

On glusterclient:

sudo hostnamectl set-hostname glusterclient

Configure /etc/hosts for Name Resolution

GlusterFS uses hostnames internally for peer communication. If hostname resolution breaks, your cluster breaks. Add these entries to /etc/hosts on all three nodes:

sudo nano /etc/hosts

Add the following lines:

192.168.1.10  gluster1
192.168.1.11  gluster2
192.168.1.12  glusterclient

Save the file and verify connectivity from gluster1:

ping -c3 gluster2
ping -c3 glusterclient

You should see replies from both hosts with zero packet loss. If ping fails here, fix your network before continuing — nothing downstream will work correctly.
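Since broken name resolution is the most common cause of failed peer probes later, it can also help to lint your /etc/hosts entries before moving on. Below is a minimal sketch; check_hosts is a hypothetical helper, not a real tool, and it is demonstrated against a scratch file with one entry deliberately left out:

```shell
# Hypothetical helper: report which hostnames are present in a hosts file.
check_hosts() {
  file=$1; shift
  for host in "$@"; do
    if grep -qw "$host" "$file"; then
      echo "$host: present"
    else
      echo "$host: MISSING"
    fi
  done
}

# Demo against a scratch file with glusterclient deliberately left out
hosts=$(mktemp)
printf '192.168.1.10 gluster1\n192.168.1.11 gluster2\n' > "$hosts"
check_hosts "$hosts" gluster1 gluster2 glusterclient
```

On your real nodes, you would point the function at /etc/hosts and expect three "present" lines before continuing.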

Step 2: Enable the CentOS SIG GlusterFS Repository

GlusterFS is not included in the AlmaLinux 10 base repositories. The supported installation path is the CentOS Storage SIG repository, which AlmaLinux officially supports.

Run this on both server nodes (gluster1 and gluster2):

sudo dnf install -y centos-release-gluster11

This installs the repository definition file that points dnf at the GlusterFS 11 package source. GlusterFS 11 is the current stable release as of 2026 and the recommended version for new AlmaLinux deployments.

Verify the repository was added successfully:

sudo dnf repolist

You should see centos-gluster11 listed among the active repositories. If you do not see it, rerun the install command and check your internet connectivity.

Step 3: Install GlusterFS on AlmaLinux 10 Server Nodes

With the repository active, install the GlusterFS server package on both gluster1 and gluster2:

sudo dnf install -y glusterfs-server

This single package pulls in everything you need:

  • glusterfs — core binaries
  • glusterfs-server — the server-side daemon and tools
  • glusterfs-cli — the gluster command-line interface
  • glusterfs-fuse — FUSE support for mounting volumes
  • glusterfs-client-xlators — client-side translator stack

Start and Enable the glusterd Daemon

glusterd is the GlusterFS management daemon. It handles all cluster communication, peer management, and volume operations. Start it and enable it to run on boot:

sudo systemctl enable --now glusterd

Confirm it is running:

sudo systemctl status glusterd

Expected output:

● glusterd.service - GlusterFS, a clustered file-system server
     Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
     Active: active (running) since ...

The active (running) status confirms the daemon is healthy. If you see failed here, check the journal with journalctl -u glusterd for the root cause.

Step 4: Configure Firewall Rules on Both Server Nodes

GlusterFS requires specific ports to be open between nodes and from client to servers. Without these rules, peer probing and volume mounting will silently fail or hang.

Run these commands on both gluster1 and gluster2:

sudo firewall-cmd --permanent --add-service=glusterfs
sudo firewall-cmd --reload

For tighter control in production environments, open the specific ports manually:

sudo firewall-cmd --permanent --add-port=24007/tcp
sudo firewall-cmd --permanent --add-port=24008/tcp
sudo firewall-cmd --permanent --add-port=49152-49251/tcp
sudo firewall-cmd --reload

  • Port 24007 — glusterd management daemon
  • Port 24008 — RDMA port (used even in non-RDMA setups)
  • Ports 49152+ — dynamically assigned brick ports, one per brick per volume
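The brick-port rule has a simple shape: each brick a node hosts gets one TCP port, allocated upward from 49152. A quick shell sketch (plain arithmetic, no cluster required) shows how the 49152-49251 range used above covers up to 100 bricks per node:

```shell
# Derive the firewall port range needed for a planned number of bricks per node.
# Brick ports are assigned dynamically starting at 49152, one per brick.
bricks=100
range="49152-$((49152 + bricks - 1))/tcp"
echo "$range"   # the range to pass to firewall-cmd --add-port
```

If you ever host more bricks per node than the range allows, widen the rule accordingly.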

Verify the rules are active:

sudo firewall-cmd --list-all

Look for glusterfs in the services line or the specific ports in the ports line. Both approaches work — use whichever fits your security policy. In production, consider adding source IP restrictions using --add-rich-rule to limit which hosts can reach these ports.

Step 5: Prepare the Brick Storage on Both Server Nodes

A brick is the fundamental storage unit in GlusterFS — a directory on a local filesystem that the cluster uses to store data. Use a dedicated disk or partition for this, not a subdirectory on your root filesystem.

Mixing brick storage with your OS disk is the fastest way to fill up root and crash a production node.

Format and Mount the Dedicated Disk

Run these commands on both gluster1 and gluster2, replacing /dev/sdb with your actual secondary disk:

sudo mkfs.xfs /dev/sdb

XFS is the recommended filesystem for GlusterFS bricks. It handles large files well and pairs cleanly with GlusterFS’s internal data structures.

Create the brick directory and mount the disk:

sudo mkdir -p /gluster/brick1
sudo mount /dev/sdb /gluster/brick1

Make the mount persistent across reboots by adding it to /etc/fstab:

echo '/dev/sdb /gluster/brick1 xfs defaults 0 0' | sudo tee -a /etc/fstab

Create the actual brick subdirectory inside the mount point:

sudo mkdir -p /gluster/brick1/vol1

Always create a subdirectory inside the mount point (e.g., /gluster/brick1/vol1) rather than using the mount point itself as the brick root. This prevents GlusterFS metadata from mixing with your filesystem root.
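The layout this step builds can be rehearsed without root or a real disk. The sketch below uses a scratch directory in place of the mounted /gluster/brick1 filesystem, purely to illustrate that the brick root sits one level below the mount point:

```shell
# Dry-run sketch of the brick layout, using a scratch directory
# in place of the real /gluster/brick1 XFS mount point (no root needed)
mountpoint=$(mktemp -d)        # stands in for the mounted disk
mkdir -p "$mountpoint/vol1"    # brick root lives one level below the mount point
brick="$mountpoint/vol1"
[ "$brick" != "$mountpoint" ] && echo "brick root is a subdirectory: OK"
```

On the real nodes the same shape is /gluster/brick1 (mount point) containing vol1 (brick root).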

Step 6: Create the Trusted Storage Pool (Peer Probing)

A Trusted Storage Pool is the group of GlusterFS server nodes that work together as a cluster. You build the pool by running a peer probe command, which establishes a trusted relationship between nodes.

From gluster1 only, probe gluster2:

sudo gluster peer probe gluster2

Expected output:

peer probe: success

Verify the pool status:

sudo gluster peer status

Expected output:

Number of Peers: 1

Hostname: gluster2
Uuid: [unique UUID]
State: Peer in Cluster (Connected)

The Peer in Cluster (Connected) state means both nodes recognize each other and glusterd communication is healthy. You only run peer probe from one node — the relationship is automatically bidirectional.

You can confirm from gluster2 as well:

sudo gluster peer status

You should see gluster1 listed as a connected peer with the same state.

Step 7: Create and Start the Replicated GlusterFS Volume

Now you are ready to create the actual storage volume. Run this from gluster1 only:

sudo gluster volume create vol1 replica 2 \
  gluster1:/gluster/brick1/vol1 \
  gluster2:/gluster/brick1/vol1

Breaking down this command:

  • vol1 — the name of the volume (choose anything meaningful)
  • replica 2 — maintain two copies, one on each node
  • The two paths that follow are the brick locations on each server

Recent GlusterFS releases warn that replica 2 volumes are prone to split-brain and ask you to confirm before creating the volume; type y to proceed. For production workloads, replica 3 or an arbiter brick avoids that risk. Expected output:

volume create: vol1: success: please start the volume to access data

Start the volume:

sudo gluster volume start vol1

Expected output:

volume start: vol1: success
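Gluster rejects a create command whose brick count is not a multiple of the replica count, so on larger clusters it is worth checking the arithmetic before you type the command. A minimal sketch of that check (plain shell, no cluster needed; brick_count and replica mirror this guide's two-node layout):

```shell
# Sanity-check: the number of bricks must be a multiple of the replica count.
replica=2
brick_count=2   # gluster1:/gluster/brick1/vol1 and gluster2:/gluster/brick1/vol1
if [ $(( brick_count % replica )) -eq 0 ]; then
  msg="OK: $brick_count bricks = $(( brick_count / replica )) x $replica"
else
  msg="ERROR: brick count must be a multiple of $replica"
fi
echo "$msg"
```

The "1 x 2" result matches the Number of Bricks line you will see in the volume info output.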

Verify Volume Info and Status

sudo gluster volume info vol1

Sample output:

Volume Name: vol1
Type: Replicate
Volume ID: [UUID]
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster/brick1/vol1
Brick2: gluster2:/gluster/brick1/vol1

Then check the runtime status:

sudo gluster volume status vol1

Look for Online: Y next to each brick. If any brick shows Online: N, the volume exists but that brick has a problem — check the brick directory path and glusterd status on that node.
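If you script this check, you can parse the status output for bricks whose Online column is N. The sample text below is a hypothetical, trimmed stand-in for the real output (brick rows carry process, TCP port, RDMA port, Online, and PID columns); on a live cluster you would feed in `sudo gluster volume status vol1` instead:

```shell
# Sketch: extract offline bricks from (simplified, hypothetical) status output.
status='Brick gluster1:/gluster/brick1/vol1 49152 0 Y 1234
Brick gluster2:/gluster/brick1/vol1 49153 0 N -'
# Field 5 is the Online flag in these sample rows; print the brick path when it is N
offline=$(printf '%s\n' "$status" | awk '$1 == "Brick" && $5 == "N" {print $2}')
echo "${offline:-all bricks online}"
```

Real output has extra header and task sections, so treat this as a starting point, not a finished monitor.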

Step 8: Install the GlusterFS Client and Mount the Volume

Switch to your client node (glusterclient). First, enable the same CentOS SIG repository on the client:

sudo dnf install -y centos-release-gluster11

Then install the GlusterFS FUSE client packages:

sudo dnf install -y glusterfs glusterfs-fuse

Create the local mount point:

sudo mkdir -p /mnt/gluster

Mount the GlusterFS volume:

sudo mount -t glusterfs gluster1:/vol1 /mnt/gluster

Confirm the mount is active:

df -h /mnt/gluster

You should see gluster1:/vol1 listed with the size of your brick disk.

Make the Mount Persistent

Add this line to /etc/fstab on the client node:

sudo nano /etc/fstab

Add:

gluster1:/vol1  /mnt/gluster  glusterfs  defaults,_netdev  0  0

The _netdev option is critical. It tells the OS to wait for the network to be ready before attempting this mount at boot. Without it, the system can hang during startup if gluster1 is not yet reachable when fstab processes.
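Because fstab lines are easy to duplicate when a setup script reruns, a guarded append is safer than a blind echo >>. Here is a sketch of the idempotent pattern, practiced on a scratch file rather than the real /etc/fstab:

```shell
# Sketch: append the Gluster mount line only if it is not already present.
# Practiced on a scratch file; on the client you would target /etc/fstab.
fstab=$(mktemp)
line='gluster1:/vol1  /mnt/gluster  glusterfs  defaults,_netdev  0  0'
add_line() { grep -qF "$line" "$fstab" || echo "$line" >> "$fstab"; }
add_line
add_line   # second call is a no-op, so reruns never duplicate the entry
grep -c 'glusterfs' "$fstab"   # the entry appears exactly once
```

The grep -qF guard matches the line as a fixed string, so option tweaks still count as a new entry.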

Step 9: Test the Replicated Volume

Write a test file from the client:

sudo touch /mnt/gluster/testfile.txt
ls /mnt/gluster

Now verify replication is working. SSH into gluster2 and check the brick directory directly:

ls /gluster/brick1/vol1

You should see testfile.txt there, even though you created it from the client through gluster1. That confirms active replication.

Run a self-heal check to confirm no pending operations:

sudo gluster volume heal vol1 info

Expected output:

Brick gluster1:/gluster/brick1/vol1
Number of entries: 0

Brick gluster2:/gluster/brick1/vol1
Number of entries: 0

Zero entries means the volume is fully healed and both bricks are in sync.
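If you monitor this automatically, the number to watch is the sum of entries across all bricks; anything above zero means pending heals. A sketch that extracts it with awk, run here against hypothetical sample output where one brick deliberately shows two pending entries (on a live node you would pipe in `sudo gluster volume heal vol1 info` instead):

```shell
# Sketch: sum pending self-heal entries across bricks from heal-info output.
heal='Brick gluster1:/gluster/brick1/vol1
Number of entries: 0

Brick gluster2:/gluster/brick1/vol1
Number of entries: 2'
pending=$(printf '%s\n' "$heal" | awk '/^Number of entries:/ {sum += $NF} END {print sum+0}')
echo "$pending"
```

A cron job or monitoring check can alert whenever the sum stays above zero for more than a few minutes.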

Troubleshooting Common GlusterFS Errors on AlmaLinux 10

Even with careful setup, you will occasionally hit a snag. Here are the five most common issues and their fixes:

Error 1: Peer Probe Fails with “No Route to Host”

Cause: The firewall is blocking port 24007, or glusterd is not running on the target node.

Fix:

# On the target node, verify glusterd is running
sudo systemctl status glusterd

# Check the firewall
sudo firewall-cmd --list-all

# Re-add the firewall rule if missing
sudo firewall-cmd --permanent --add-service=glusterfs
sudo firewall-cmd --reload

Error 2: Volume Start Fails with “Brick is Not in Started State”

Cause: The brick directory does not exist, or the permissions are wrong.

Fix:

# Recreate the brick directory on the affected node
sudo mkdir -p /gluster/brick1/vol1

# Verify ownership
sudo ls -la /gluster/brick1/

Error 3: Client Mount Hangs Indefinitely

Cause: Brick ports (49152+) are blocked by the server firewall, or the client cannot resolve the server hostname.

Fix:

# On the server, open brick ports
sudo firewall-cmd --permanent --add-port=49152-49251/tcp
sudo firewall-cmd --reload

# On the client, verify hostname resolves
ping -c3 gluster1

Error 4: “Transport endpoint not connected” After Mount

Cause: The glusterd daemon stopped, or the volume was not started before mounting.

Fix:

# On the server node
sudo systemctl restart glusterd
sudo gluster volume start vol1

Error 5: SELinux Blocking glusterd Communication

Cause: SELinux denies connections between GlusterFS processes on RHEL-based systems.

Fix (for testing):

sudo setenforce 0

Fix (for production):

sudo setsebool -P gluster_export_all_rw 1

Always use the production fix in live environments. Disabling SELinux entirely is not a security-conscious long-term solution.

GlusterFS CLI Quick Reference

Keep these commands handy for day-to-day cluster management:

  • gluster peer status — view all peers and their connection state
  • gluster volume list — list all configured volumes
  • gluster volume info <vol> — show configuration details of a volume
  • gluster volume status <vol> — show runtime status, including brick health
  • gluster volume stop <vol> — gracefully stop a running volume
  • gluster volume delete <vol> — permanently remove a stopped volume
  • gluster volume heal <vol> info — check pending self-heal tasks
  • gluster peer detach <host> — remove a node from the trusted pool

Congratulations! You have successfully installed GlusterFS on AlmaLinux 10 and built a replicated two-node storage cluster. For additional help or useful information, check the official GlusterFS documentation.
