How To Fix ‘No Space Left on Device’ Error on Linux
In this tutorial, we will show you how to fix the ‘No Space Left on Device’ error on Linux. Have you ever encountered the dreaded “No Space Left on Device” error while working on your Linux system? This frustrating message can halt your work, block important operations, and disrupt your workflow. Whether you’re a system administrator managing servers or a Linux enthusiast working on personal projects, knowing how to diagnose and fix this common error is essential for keeping your system healthy.
In this comprehensive guide, we’ll explore the various causes of the “No Space Left on Device” error and provide you with practical, step-by-step solutions to resolve it efficiently. By the end of this article, you’ll have the knowledge and tools to not only fix the immediate issue but also prevent it from recurring in the future.
Understanding the “No Space Left on Device” Error
The “No Space Left on Device” error in Linux occurs when the system cannot write data to a partition because it has run out of available space or resources. This error typically manifests during file creation, data transfers, or application installations. When you encounter this message, your system is telling you that the storage location you’re trying to write to simply doesn’t have enough room.
Common causes of this error include:
- Disk partitions reaching 100% usage
- Inode exhaustion (even when disk space is available)
- Deleted files still being held open by running processes
- Mount points being overwritten
- Docker files consuming excessive space
- File system corruption
When this error occurs, services may stop functioning properly, applications may crash, and you’ll be unable to save new files or updates until the issue is resolved.
Diagnosing the Root Cause
Before implementing solutions, you need to determine exactly what’s causing the space issue. This diagnostic phase is crucial for implementing the correct fix.
Checking Disk Space Usage
The first step is to determine whether you’ve simply run out of actual disk space. The df command (disk free) is the primary tool for checking partition usage:
df -h
The -h flag makes the output human-readable, displaying sizes in MB/GB instead of blocks. Look for partitions showing close to 100% usage in the “Use%” column.
Example output:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 20G 19G 1G 95% /
If you need to check a specific partition, specify it as an argument:
df -h /
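If you manage several mount points, a small helper can flag any file system above a chosen usage level straight from df -P output. The following over_threshold function is a hypothetical sketch, not a standard tool:

```shell
# Print mount points whose usage exceeds a threshold, reading `df -P`
# output from stdin. over_threshold is a hypothetical helper function.
# Usage: df -P | over_threshold 90
over_threshold() {
  awk -v limit="$1" 'NR > 1 {
    use = $5                    # the "Capacity" column, e.g. "95%"
    gsub("%", "", use)          # strip the percent sign for comparison
    if (use + 0 > limit) print $6, $5
  }'
}
```

Running df -P | over_threshold 90 prints one line per file system above 90% usage, giving you an instant triage list on busy hosts.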
Checking Inode Usage
Sometimes the issue isn’t disk space but inode exhaustion. Inodes are data structures that store file metadata, and Linux has a limited number per partition. Check inode usage with:
df -i
If you see “IUse%” at or near 100%, you’ve run out of inodes even though disk space might be available. This often happens in systems that store many small files.
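To see why this happens, remember that every file consumes one inode regardless of its size. This harmless sketch creates a batch of zero-byte files in a throwaway directory purely to illustrate the effect:

```shell
# Demonstrate inode consumption: each file, however small, occupies one
# inode. The directory and file names here are throwaway examples.
demo_dir=$(mktemp -d)
for i in $(seq 1 1000); do
  touch "$demo_dir/file_$i"
done
count=$(find "$demo_dir" -type f | wc -l)
echo "Created $count zero-byte files, each holding one inode"
rm -rf "$demo_dir"   # clean up the demonstration
```

A thousand such files consume a thousand inodes while using essentially no disk space, which is exactly the pattern behind a full “IUse%” column with plenty of free gigabytes.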
Finding Space-Consuming Files and Directories
To identify which files and directories consume the most space, use the du (disk usage) command:
du -sh /*
This command shows the summarized size of each top-level directory. For a more detailed analysis, sorted by size:
du -ah / | sort -rh | head -20
This command lists the 20 largest files and directories in descending order. For a directory-specific analysis, you can use:
du -h --max-depth=1 /path/to/directory | sort -rh
Checking for Deleted Files Still in Use
When a file is deleted while still open by a process, the space isn’t actually freed until the process closes the file or terminates. Check for such files with:
lsof | grep deleted
This command reveals processes that are holding onto deleted files, preventing the space from being reclaimed. If you don’t have lsof installed, you can try:
find /proc/*/fd -ls | grep '(deleted)'
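If you want to see this behavior in action before hunting real culprits, the following safe sketch holds a temporary file open, deletes it, and shows the kernel still tracking it under /proc until the holding process exits:

```shell
# Reproduce the "deleted but still open" situation with a throwaway file.
tmpfile=$(mktemp)
sleep 30 < "$tmpfile" &   # this background process keeps the file open
holder=$!
rm "$tmpfile"             # unlinked, but its blocks are not yet freed
# The process's file descriptor table still shows the deleted file:
ls -l "/proc/$holder/fd" | grep deleted
kill "$holder"            # once the holder exits, the space is reclaimed
```

The grep line prints the descriptor with a “(deleted)” marker, which is the same signature lsof reports on a real system.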
Common Solutions for Disk Space Issues
After diagnosing the cause of your space problem, you can implement appropriate solutions to free up space.
Removing Unnecessary Files
After identifying large, unnecessary files, you can safely delete them to free up space. Common targets include:
Log Files
Log files in /var/log can grow extremely large over time. Clean them up with:
sudo find /var/log -type f -name "*.log" -exec truncate -s 0 {} \;
This command truncates all log files to zero size without deleting the files themselves, which could cause issues for running services.
Temporary Files
Temporary files in /tmp and /var/tmp can accumulate over time:
sudo rm -rf /tmp/* /var/tmp/*
Be cautious with this command, as some processes might be using files in these directories.
Package Cache
Different Linux distributions store package caches differently. Clean them up based on your distribution:
- For Ubuntu/Debian systems:
sudo apt clean
sudo apt autoremove
- For RHEL-based systems (CentOS, Fedora, AlmaLinux):
sudo dnf clean all
sudo dnf autoremove
- For Arch Linux:
sudo pacman -Sc
User Downloads and Cache
User’s download directories and browser caches can consume significant space:
rm -rf ~/.cache/*
To clean browser caches, you’ll need to do so through the browser interface or using browser-specific commands.
Dealing with Deleted but Open Files
If lsof | grep deleted revealed files that are deleted but still consuming space, you need to restart the processes holding these files open:
# Identify the process IDs
lsof | grep deleted
# Restart or kill the processes
sudo kill <PROCESS_ID>
# or
sudo killall <PROCESS_NAME>
After restarting the processes, the space will be properly freed. For critical services, use a more gentle approach:
sudo systemctl restart <service-name>
Solutions for Inode Limitations
When the issue is inode exhaustion rather than disk space, different strategies are needed.
Identifying Inode-Heavy Directories
Find directories containing numerous small files:
find / -xdev -printf '%h\n' 2>/dev/null | sort | uniq -c | sort -rn | head -20
This command prints the parent directory of every file on the root file system, counts the occurrences of each, and lists the 20 directories with the highest file counts.
Consolidating Files
Directories with many small files consume inodes rapidly. Consider these strategies:
Archive Small Files
Compress groups of small files into archives:
tar -czf archive_name.tar.gz directory_with_many_files
rm -rf directory_with_many_files
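As a rough illustration of the savings, the sketch below (using throwaway temporary paths) turns 500 small files into a single archive, reducing the inode cost from 500 to 1:

```shell
# Illustrative inode comparison: 500 loose files vs. one archive file.
src=$(mktemp -d)
for i in $(seq 1 500); do
  echo "data" > "$src/file_$i"
done
tar -czf "$src.tar.gz" -C "$src" .   # pack everything into one archive
loose=$(find "$src" -type f | wc -l)
echo "before: $loose inodes; after: 1 archive file"
rm -rf "$src" "$src.tar.gz"          # clean up the demonstration
```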
Use Database Storage
Instead of storing many small files individually, consider using a database system that can store the data more efficiently.
Implement File Rotation Policies
For logs and other automatically generated files, implement rotation policies that compress or delete old files regularly:
sudo nano /etc/logrotate.conf
Modify the configuration to rotate logs more frequently and compress them to save space.
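As an illustration, a drop-in file such as /etc/logrotate.d/myapp (the myapp name and path are hypothetical examples) might look like:

```
/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
}
```

This rotates matching logs daily, keeps seven compressed generations, and quietly skips missing or empty files.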
File System Selection
When creating new partitions, consider file systems optimized for your use case:
- XFS provides better scaling for large files
- Ext4 with appropriate inode density settings for small file workloads
- ZFS for advanced management capabilities
For an existing system, you may need to backup your data, reformat with a more appropriate file system, and restore.
Advanced Solutions
When simpler solutions aren’t enough, you may need to implement more advanced measures.
Extending Partitions and File Systems
If you can’t free enough space, consider extending your partition:
For LVM-managed partitions:
# Check free space in the volume group
sudo vgs
# Extend the logical volume by 10GB and grow the file system in one step
sudo lvextend -r -L +10G /dev/vg-name/lv-name
# Or resize the file system separately (resize2fs for ext4; use xfs_growfs for XFS)
sudo resize2fs /dev/vg-name/lv-name
For traditional partitions:
You may need to:
- Back up your data
- Boot from a live USB
- Resize using tools like GParted
- Restore your data
Docker Cleanup Strategies
Docker can consume significant disk space through unused images, containers, and volumes. Clean them with:
# Remove unused containers
docker container prune
# Remove unused images
docker image prune -a
# Remove unused volumes
docker volume prune
# Remove everything unused (use with caution)
docker system prune -a --volumes
These commands can free up gigabytes of space in systems that use Docker extensively.
Addressing Mount Point Issues
If mount points are being overwritten, verify your mount configuration:
mount | grep <directory>
Review /etc/fstab
for incorrect entries and ensure mount points are properly managed:
sudo nano /etc/fstab
Look for duplicate entries or incorrectly configured mount options.
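To spot duplicates quickly, a small helper like this hypothetical dup_mounts function lists any mount point that appears more than once in an fstab-style file:

```shell
# Print mount points that occur more than once in an fstab-style file.
# dup_mounts is an illustrative helper, not a standard utility.
# Usage: dup_mounts /etc/fstab
dup_mounts() {
  awk '!/^[[:space:]]*#/ && NF >= 2 { print $2 }' "$1" | sort | uniq -d
}
```

An empty result means every mount point is declared exactly once; any printed path deserves a closer look in /etc/fstab.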
Preventative Measures
Preventing disk space issues is better than fixing them. Implement these strategies to avoid future problems.
Monitoring Disk Space
Implement monitoring to detect issues before they become critical:
- Set up monitoring tools like Nagios, Zabbix, or Prometheus
- Configure alerts when disk usage exceeds 80-85%
- Monitor both space and inode usage
For a simple solution, create a cron job to check disk space and email warnings:
sudo tee /etc/cron.daily/disk-check > /dev/null <<'EOF'
#!/bin/bash
THRESHOLD=85
USAGE=$(df -P / | awk 'NR == 2 { gsub("%", "", $5); print $5 }')
if [ "$USAGE" -gt "$THRESHOLD" ]; then
    echo "Disk space alert: ${USAGE}% used on /" | mail -s "Disk Space Alert" admin@example.com
fi
EOF
sudo chmod +x /etc/cron.daily/disk-check
Creating Regular Cleanup Routines
Implement automated cleanup scripts run via cron jobs:
# Example cron entry for daily cleanup at 2 AM
0 2 * * * /path/to/cleanup_script.sh
A basic cleanup script might include:
- Log rotation and compression
- Temporary file cleanup
- Cache purging
- Old backup removal
Example cleanup script:
#!/bin/bash
# Clean package cache
apt clean
apt autoremove -y
# Clean temporary files
find /tmp -type f -atime +7 -delete
find /var/tmp -type f -atime +7 -delete
# Compress old logs
find /var/log -type f -name "*.log.*" ! -name "*.gz" -exec gzip -9 {} \;
# Remove old backups (older than 30 days)
find /backup -type f -mtime +30 -delete
Implementing Disk Quotas
Prevent individual users from consuming excessive space by implementing disk quotas:
# Install quota support
sudo apt install quota
# Edit /etc/fstab to add usrquota,grpquota options
sudo nano /etc/fstab
Add usrquota,grpquota to the mount options for the relevant partition, then remount and set quotas:
sudo mount -o remount /
sudo quotacheck -cum /
sudo quotaon /
sudo setquota -u username 5242880 6291456 0 0 /
This sets a soft limit of 5GB and a hard limit of 6GB for the specified user (setquota expects block limits in kilobytes rather than unit suffixes, so 5GB is 5242880 and 6GB is 6291456).
Troubleshooting Specific Scenarios
Different environments may require specialized approaches to disk space issues.
Web Server Issues
Web servers often encounter space issues due to:
- Explosive log growth
- Session files accumulation
- Cached content
For Apache, consider:
- Configuring proper log rotation in /etc/apache2/apache2.conf
- Setting up a cleanup routine for /var/www/tmp
- Monitoring traffic spikes that cause log bloat
Example Apache log rotation configuration:
CustomLog "|/usr/bin/rotatelogs -l /var/log/apache2/access.%Y%m%d.log 86400" combined
ErrorLog "|/usr/bin/rotatelogs -l /var/log/apache2/error.%Y%m%d.log 86400"
Database Servers
Database servers may experience:
- Transaction log growth
- Index bloat
- Temporary file accumulation
For MySQL/MariaDB:
- Consider binary log purging:
PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 7 DAY);
- Optimize tables regularly:
OPTIMIZE TABLE tablename;
- Move data directories to dedicated partitions:
datadir=/path/to/data
Application Servers
For Java/Tomcat or similar application servers:
- Monitor heap dumps
- Clear temp directories:
find /path/to/tomcat/temp -type f -mtime +1 -delete
- Implement log rotation for application logs:
sudo nano /etc/logrotate.d/tomcat
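A minimal sketch of such a logrotate drop-in (the paths are placeholders; adjust them for your installation) might be:

```
/path/to/tomcat/logs/catalina.out {
    daily
    rotate 14
    compress
    copytruncate
    missingok
}
```

copytruncate is used here because Tomcat keeps catalina.out open continuously, so the file must be truncated in place rather than moved aside.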