State: Terminated
Reason: OOMKilled
Exit Code: 137
Your container exceeded its memory limit, so the kernel's OOM killer terminated it. Exit code 137 means the process died from SIGKILL (128 + signal 9).
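To find which pods have been OOM-killed, a jsonpath query like the one below can scan the whole cluster (a sketch; run it against your own namespaces):

```shell
# List every pod whose last container termination reason was OOMKilled
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.containerStatuses[*].lastState.terminated.reason}{"\n"}{end}' \
  | grep OOMKilled
```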
Fix 1: Increase Memory Limit
# Check current limits
kubectl describe pod my-pod | grep -A 5 Limits
# Increase in deployment
resources:
requests:
memory: "256Mi"
limits:
memory: "512Mi" # Increase this
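Instead of editing the manifest, the same change can be applied imperatively (deployment and container names here are placeholders, assuming the deployment is called my-app):

```shell
# Bump the memory limit and request in place; triggers a rolling restart
kubectl set resources deployment my-app -c my-container \
  --requests=memory=256Mi --limits=memory=512Mi
```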
Fix 2: Find the Memory Leak
# Check actual memory usage
kubectl top pod my-pod
# Per-container breakdown (useful for multi-container pods)
kubectl top pod my-pod --containers
# Get detailed metrics (cgroup v1 path; on cgroup v2 nodes use /sys/fs/cgroup/memory.current)
kubectl exec my-pod -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes
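A single `kubectl top` reading can't distinguish a leak from an undersized limit. One rough approach is to sample over time and watch the trend (a sketch; adjust the interval to your workload):

```shell
# Steadily climbing usage suggests a leak; spikes under load suggest
# the limit is simply too small for peak traffic
while true; do
  kubectl top pod my-pod --containers
  sleep 15
done
```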
Fix 3: JVM Heap Size (Java Apps)
# ❌ Default JVM heap sizing can exceed the container limit
# ✅ Cap the heap at ~75% of the limit, leaving room for non-heap memory
env:
- name: JAVA_OPTS
value: "-Xmx384m -Xms256m" # For 512Mi container limit
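The 75% rule of thumb can be derived from the limit rather than hard-coded; the snippet below sketches the arithmetic. (On JDK 10+, `-XX:MaxRAMPercentage=75.0` achieves the same result container-aware, with no fixed `-Xmx`.)

```shell
# Derive a heap flag from the container memory limit (in Mi), leaving
# ~25% headroom for metaspace, thread stacks, and native buffers
limit_mi=512
heap_mi=$((limit_mi * 75 / 100))
echo "-Xmx${heap_mi}m"   # -Xmx384m
```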
Fix 4: Node.js Memory Limit
env:
- name: NODE_OPTIONS
value: "--max-old-space-size=384" # For 512Mi container limit
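To confirm the flag actually took effect, V8's heap ceiling can be read back from inside the running pod (assumes the `node` binary is on the container's PATH):

```shell
# Print the configured V8 heap size limit in MiB
kubectl exec my-pod -- node -e \
  'console.log(require("v8").getHeapStatistics().heap_size_limit / 1048576)'
```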
Fix 5: Set Requests Equal to Limits
# ✅ Prevents overcommit — pod gets guaranteed memory
resources:
requests:
memory: "512Mi"
limits:
memory: "512Mi" # Same as request = Guaranteed QoS
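Whether the pod actually landed in the Guaranteed class can be verified directly (note: Guaranteed QoS requires CPU requests and limits to match as well, not just memory):

```shell
# Should print "Guaranteed" if requests equal limits for every resource
kubectl get pod my-pod -o jsonpath='{.status.qosClass}'
```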
Fix 6: Horizontal Scaling Instead
# Instead of giving one pod more memory, run more pods
kubectl scale deployment my-app --replicas=3
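Rather than scaling by hand, a HorizontalPodAutoscaler can add replicas under load. A minimal sketch, assuming metrics-server is installed; note that `kubectl autoscale` only supports a CPU target, and memory-based scaling needs an `autoscaling/v2` manifest instead:

```shell
# Scale my-app between 2 and 6 replicas, targeting 80% CPU utilization
kubectl autoscale deployment my-app --min=2 --max=6 --cpu-percent=80
kubectl get hpa my-app
```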
Debugging
# Check why it was killed
kubectl describe pod my-pod | grep -A 10 "Last State"
# Check events
kubectl get events --field-selector involvedObject.name=my-pod
# Check previous container logs
kubectl logs my-pod --previous
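The termination reason and exit code can also be pulled out directly with jsonpath, which is handier for scripting than grepping `describe` output:

```shell
# Prints e.g. "OOMKilled: 137" for the first container's last termination
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}: {.status.containerStatuses[0].lastState.terminated.exitCode}'
```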