An evicted pod means the kubelet terminated it because the node was running low on resources, usually memory or disk space.
What causes this error
- Node under memory pressure: pods on the node used more memory than was available
- Node under disk pressure: container logs, images, or volumes filled the disk
- Pod used more than its memory request: under node pressure, the kubelet evicts those pods first (exceeding the memory limit causes an OOMKill instead)
Fix 1: Check why it was evicted
kubectl describe pod my-pod | grep -A3 "Status"
# Look for:
# Status: Failed
# Reason: Evicted
# Message: "The node was low on resource: memory"
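If a namespace has accumulated many evicted pods, you can list them by phase and clean them up in bulk once you have noted the reason (a quick sketch; add -n your-namespace as needed):
kubectl get pods --field-selector=status.phase=Failed
# After reading the Reason/Message, remove the leftover evicted pods:
kubectl delete pods --field-selector=status.phase=Failed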
Fix 2: Set proper resource limits
resources:
  requests:
    memory: "256Mi"   # Guaranteed minimum
    cpu: "100m"
  limits:
    memory: "512Mi"   # Maximum before OOMKill
    cpu: "500m"
Without limits, a single pod can consume all node memory and trigger eviction of other pods.
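For context, here is a minimal sketch of where that resources block sits in a full Deployment manifest (the my-app name and image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0        # placeholder image
        resources:
          requests:
            memory: "256Mi"      # scheduler reserves this much per replica
            cpu: "100m"
          limits:
            memory: "512Mi"      # container is OOMKilled above this
            cpu: "500m"
Requests drive scheduling decisions, while limits cap what the container can actually use, so setting both keeps pods predictable under node pressure.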
Fix 3: Fix disk pressure
# Check node disk usage
kubectl describe node my-node | grep -A5 "Conditions"
# Clean up on the node
docker system prune -af # Remove unused images
journalctl --vacuum-size=500M # Trim logs
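Note that docker system prune only helps on nodes where Docker is the container runtime. On containerd-based clusters (the default since Kubernetes 1.24), a rough equivalent, assuming crictl is installed on the node, would be:
crictl rmi --prune               # remove images not used by any running container
df -h /var/lib/containerd        # check how much disk the image store is using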
Fix 4: Add monitoring
Set up alerts for node resource usage before evictions happen:
- Memory usage > 80%: warning
- Disk usage > 85%: warning
- Pod restarts > 3 in 10 minutes: alert
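One way to implement the memory threshold is a Prometheus alerting rule; this is a sketch that assumes node_exporter metrics are being scraped, and the threshold, duration, and labels are illustrative:
groups:
- name: node-resources
  rules:
  - alert: NodeMemoryHigh
    expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 > 80
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "Node {{ $labels.instance }} memory usage is above 80%"
A similar rule against node_filesystem_avail_bytes covers the disk threshold.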
How to prevent evictions
- Always set memory requests and limits on every pod
- Use PodDisruptionBudgets for critical workloads (see the sketch after this list)
- Monitor node resources with Prometheus/Grafana
- Set up cluster autoscaling to add nodes when resources are low
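For the PodDisruptionBudget item above, a minimal sketch looks like this (the name, label selector, and minAvailable value are placeholders for your workload):
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1                # keep at least one replica up during voluntary disruptions
  selector:
    matchLabels:
      app: my-app
Keep in mind that a PDB guards against voluntary disruptions such as node drains and autoscaler scale-downs; it does not block node-pressure evictions, but it keeps critical workloads available while you rebalance the cluster.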
Related: Kubernetes kubectl cheat sheet · Kubernetes: Pod Stuck in Pending fix · Kubernetes: OOMKilled fix · What is Kubernetes