No space left on device
What causes this
Your disk is full — or more specifically, the partition where you’re trying to write has no free space. This can also happen when you run out of inodes (too many small files) even if there’s technically space left. Common culprits:
- Docker images and containers accumulating over time
- Log files growing unchecked
- Package manager caches (apt, npm, pip)
- Old kernel versions not being cleaned up
- Temporary files from builds
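Before cleaning anything, it helps to confirm which of the two failure modes you're in; both checks are plain df:

```shell
# Check both failure modes for the filesystem holding / (adjust the path).
df -h /    # block (space) usage
df -i /    # inode usage: can hit 100% even with free space left
```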
Fix 1: Find what’s using the space
# Overview of disk usage per partition
df -h
# Find the biggest directories
du -sh /* 2>/dev/null | sort -rh | head -20
# Drill deeper into the biggest one
du -sh /var/* 2>/dev/null | sort -rh | head -10
This tells you exactly where the space went. Usually it’s /var (logs, Docker), /home (user files), or /tmp.
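The drill-down can be scripted instead of re-running du by hand at each level. This helper is a sketch only: it assumes GNU du/sort and paths without embedded tabs or newlines:

```shell
# drill: repeatedly descend into the largest entry of a directory,
# printing each level, until the largest entry is a regular file.
# Sketch: assumes GNU du/sort; breaks on paths containing tabs/newlines.
drill() {
  dir="$1"
  while :; do
    biggest=$(du -s "$dir"/* 2>/dev/null | sort -rn | head -1 | cut -f2)
    [ -n "$biggest" ] && [ -d "$biggest" ] || break
    du -sh "$biggest"
    dir="$biggest"
  done
}
# Example: drill /var
```

Running drill /var walks straight down to the heaviest subtree in one pass.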
Fix 2: Clean up Docker
Docker is the #1 space hog on developer machines and servers:
# See how much Docker is using
docker system df
# Remove unused data: stopped containers, all unused images (not just dangling ones), unused networks, and the build cache
docker system prune -a
# Also remove unused volumes (careful — this deletes data)
docker system prune -a --volumes
This can easily free up 10-50GB.
Fix 3: Clean package manager caches
# apt (Ubuntu/Debian)
sudo apt clean
sudo apt autoremove
# npm
npm cache clean --force
# pip
pip cache purge
# yarn
yarn cache clean
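Before cleaning, it's worth checking whether the caches are actually big. The paths below are the typical default locations, which can differ per setup:

```shell
# Show cache sizes for common package managers (typical default paths;
# any that don't exist are silently skipped).
du -sh ~/.npm ~/.cache/pip ~/.cache/yarn /var/cache/apt 2>/dev/null
```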
Fix 4: Clean up log files
# Check log sizes
du -sh /var/log/* | sort -rh | head -10
# Truncate a large log file (keeps the file, empties content)
sudo truncate -s 0 /var/log/syslog
# Clean old journal logs (keep only last 3 days)
sudo journalctl --vacuum-time=3d
# Or limit journal size to 500MB
sudo journalctl --vacuum-size=500M
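Truncating is a one-off fix; to keep an application's own logs bounded going forward, a logrotate drop-in is the usual approach. The path and app name here are hypothetical examples:

```
# /etc/logrotate.d/myapp -- hypothetical drop-in for an app's logs
/var/log/myapp/*.log {
    weekly
    rotate 4        # keep four rotated generations
    compress
    missingok
    notifempty
}
```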
Fix 5: Find and remove large files
# Find files larger than 100MB
find / -type f -size +100M 2>/dev/null | head -20
# Find files larger than 1GB
find / -type f -size +1G 2>/dev/null
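If you want the results ranked rather than just listed, GNU find can print sizes for sorting (a sketch; adjust /var to whatever partition is full):

```shell
# Rank the 20 largest files under /var, biggest first.
# %s = size in bytes, %p = path (GNU find's -printf).
find /var -xdev -type f -printf '%s\t%p\n' 2>/dev/null | sort -rn | head -20
```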
Common large files to check:
- Old database dumps in /tmp or home directories
- Core dumps in /var/crash
- Old backup files
Fix 6: Check for inode exhaustion
If df -h shows free space but you still get the error:
# Check inode usage
df -i
# If a partition is at 100% inodes, find directories with many small files
find / -xdev -type d -exec sh -c 'echo "$(find "$1" -maxdepth 1 | wc -l) $1"' _ {} \; 2>/dev/null | sort -rn | head -20
Common inode hogs: npm’s node_modules directories, mail queues, session files.
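node_modules is a common enough offender to sweep for directly. This sketch assumes projects live under /home; it counts the files (inodes) in each tree so you know what deleting one would free:

```shell
# Count files (inodes) in each node_modules tree under /home.
# -prune stops find from descending into the matched tree itself.
find /home -type d -name node_modules -prune -print 2>/dev/null |
while read -r d; do
  printf '%s\t%s\n' "$(find "$d" 2>/dev/null | wc -l)" "$d"
done | sort -rn
```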
How to prevent it
- Set up log rotation with logrotate to prevent logs from growing indefinitely
- Run docker system prune regularly (or add it to a weekly cron job)
- Monitor disk usage with alerts — don’t wait until it’s 100% full
- Use tmpreaper or systemd-tmpfiles to automatically clean /tmp
- For servers, consider separate partitions for /var/log and /var/lib/docker so they can’t fill up the root partition
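The prune-on-a-schedule idea can look like this in root's crontab (crontab -e as root). The times, the 168-hour window, and the log paths are illustrative examples, not recommendations:

```
# m h dom mon dow  command
0 3 * * 0   docker system prune -af --filter "until=168h" >>/var/log/docker-prune.log 2>&1
30 3 * * 0  journalctl --vacuum-size=500M >>/var/log/journal-vacuum.log 2>&1
```

The -f flag matters in cron: without it, prune waits for an interactive confirmation that never comes.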