Ollama Connection Refused Fix: Server Not Starting or Not Responding (2026)
You ran ollama run or tried to connect to the API and got:
Error: could not connect to ollama app, is it running?
Or:
connection refused: localhost:11434
This means the Ollama server isn't running or isn't accessible. Here are the fixes.
Fix 1: Start the Ollama service
The most common cause: Ollama isn't running.
# macOS: start the app
open -a Ollama
# Linux: start the service
sudo systemctl start ollama
# Or run directly
ollama serve
Verify it's running:
curl http://localhost:11434/api/version
# Should return: {"version":"0.x.x"}
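If the server takes a few seconds to come up, scripts that fire requests immediately can still see "connection refused". A small retry loop avoids racing it; this is a sketch (the function name, host default, and retry counts are arbitrary; the /api/version endpoint is the one used above):

```shell
#!/bin/sh
# Poll the Ollama version endpoint until it answers, or give up.
# Usage: wait_for_ollama [host:port] [max_attempts]
wait_for_ollama() {
    host="${1:-localhost:11434}"
    attempts="${2:-10}"
    i=0
    while [ "$i" -lt "$attempts" ]; do
        # -s: quiet, -f: fail on HTTP errors, short timeout per try
        if curl -sf --max-time 2 "http://$host/api/version" >/dev/null; then
            echo "Ollama is up at $host"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "Ollama did not respond at $host after $attempts attempts" >&2
    return 1
}
```

Usage: `wait_for_ollama localhost:11434 30 && ollama run llama3` starts the model only once the server answers.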
Fix 2: Check the port
Something else might be using port 11434:
# Check what's on port 11434
lsof -i :11434 # macOS/Linux
netstat -tlnp | grep 11434 # Linux
# If another process is using it, move Ollama to another port
# (keep it on 127.0.0.1 unless you need remote access -- see Fix 5)
OLLAMA_HOST=127.0.0.1:11435 ollama serve
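If you move the server, every client has to be pointed at the new port too; the ollama CLI reads the same OLLAMA_HOST variable. A sketch (the port 11435 matches the example above):

```shell
# The ollama CLI reads OLLAMA_HOST, so point it at the new port as well
export OLLAMA_HOST=127.0.0.1:11435

# Now CLI commands and API calls both target :11435, e.g.:
#   ollama list
#   curl http://127.0.0.1:11435/api/version
```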
Fix 3: Docker networking
If Ollama runs in Docker and your app can't reach it:
# docker-compose.yml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"  # Expose to host
  your-app:
    build: .
    environment:
      # Use service name, not localhost
      - OLLAMA_HOST=http://ollama:11434
Inside Docker, localhost refers to the container itself, not the host. Use the service name (ollama) to reach the Ollama container from another container, or host.docker.internal to reach an Ollama instance running on the host machine.
# From another container
curl http://ollama:11434/api/version
# From the host machine
curl http://localhost:11434/api/version
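Even with networking correct, the app container can start before the Ollama server inside its container is accepting requests, so the first calls still fail. Compose can gate startup on a health check; a sketch building on the compose file above (the `ollama ps` probe and the interval/retry values are assumptions, adjust to taste):

```yaml
# docker-compose.yml (health-gated variant of the file above)
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    healthcheck:
      # "ollama ps" only succeeds once the server answers the API
      test: ["CMD", "ollama", "ps"]
      interval: 5s
      timeout: 3s
      retries: 12
  your-app:
    build: .
    depends_on:
      ollama:
        condition: service_healthy
    environment:
      - OLLAMA_HOST=http://ollama:11434
```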
Fix 4: Firewall blocking
On Linux, the firewall might block port 11434:
# UFW
sudo ufw allow 11434
# firewalld
sudo firewall-cmd --add-port=11434/tcp --permanent
sudo firewall-cmd --reload
Fix 5: Bind to all interfaces
By default, Ollama only listens on localhost. To access from other machines:
# Listen on all interfaces
OLLAMA_HOST=0.0.0.0:11434 ollama serve
# Or set permanently
# Linux: edit /etc/systemd/system/ollama.service
# Add: Environment="OLLAMA_HOST=0.0.0.0:11434"
sudo systemctl daemon-reload
sudo systemctl restart ollama
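A drop-in override survives package upgrades better than editing the unit file directly. `sudo systemctl edit ollama` creates and opens one; a minimal sketch of its contents:

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# (created via: sudo systemctl edit ollama)
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

After saving, run the daemon-reload and restart commands shown above.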
Security warning: Only do this on trusted networks. Exposing Ollama to the internet without authentication is a security risk. See our sandbox guide for securing local AI.
Fix 6: Ollama crashed silently
Check the logs:
# macOS
cat ~/.ollama/logs/server.log
# Linux (systemd)
journalctl -u ollama -n 50
# Docker
docker logs ollama
Common crash causes:
- Out of memory (see our OOM fix)
- Corrupted model files (ollama rm model-name and re-pull)
- GPU driver issues (update NVIDIA drivers)
Fix 7: WSL2 on Windows
If using Ollama in WSL2:
# Ollama in WSL2 binds to the WSL IP, not Windows localhost
# Find the WSL IP
hostname -I
# Connect from Windows using that IP
curl http://172.x.x.x:11434/api/version
Or install the native Windows version of Ollama instead of the WSL2 version.
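Alternatively, on recent Windows 11 builds (22H2+), WSL2's mirrored networking mode makes localhost work in both directions, so the IP lookup above becomes unnecessary. A sketch of the setting; apply it with wsl --shutdown and restart WSL:

```ini
# %UserProfile%\.wslconfig  (on the Windows side)
[wsl2]
networkingMode=mirrored
```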
Related: Ollama Complete Guide · Ollama Troubleshooting Guide · Ollama Out of Memory Fix · Ollama Model Not Found Fix · How to Sandbox Local AI Models · How to Set Up Open WebUI