You tried to run a model and got:
Error: pull model manifest: file does not exist
Or:
Error: model "my-model" not found, try pulling it first
Here's every reason this happens and how to fix each one.
Fix 1: Check the model name
The most common cause is a typo or wrong model name:
# Wrong (common mistakes)
ollama run llama-4          # Extra hyphen in the model name
ollama run deepseek-r1-14b  # Hyphen instead of colon before the tag
ollama run qwen3.5-27b      # Hyphen instead of colon before the tag
# Right
ollama run llama4-scout
ollama run deepseek-r1:14b
ollama run qwen3.5:27b
The format is model-name:tag. The colon separates the model from the size/quantization tag. Search the Ollama library for the exact name.
# Search for available models
ollama list # Shows locally downloaded models
# Check ollama.com/library for the full catalog
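The name-vs-tag split can be sketched in shell. `model_ref` below is a hypothetical helper, and the `latest` default mirrors how Ollama (like Docker) treats untagged references:

```shell
# Hypothetical helper: split a "model:tag" reference the way the CLI does.
# An untagged name falls back to "latest", mirroring Docker-style references.
model_ref() {
  local ref="$1" name tag
  name="${ref%%:*}"          # Everything before the first colon
  if [ "$name" = "$ref" ]; then
    tag="latest"             # No colon: default tag
  else
    tag="${ref#*:}"          # Everything after the first colon
  fi
  printf '%s %s\n' "$name" "$tag"
}

model_ref deepseek-r1:14b   # deepseek-r1 14b
model_ref llama4-scout      # llama4-scout latest
```

If the tag half doesn't match a tag listed on the model's library page, you get the same "not found" error as a wrong name.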
Fix 2: Pull before running
Ollama only auto-downloads models that exist in its official library; anything else has to be pulled or imported explicitly:
# Pull first, then run
ollama pull qwen3.5:27b
ollama run qwen3.5:27b
# Or pull + run in one step (works for library models)
ollama run qwen3.5:27b # Auto-pulls if in the official library
If the model isn't in Ollama's official library (e.g., a community model or custom GGUF), you need to import it manually.
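In scripts, you can guard the pull so it only happens when needed. `ensure_model` is a hypothetical wrapper; it assumes `ollama list` prints one model per line with the name in the first column (its documented output):

```shell
# Hypothetical wrapper: pull a model only when it isn't already local.
# Assumes `ollama list` prints a header row, then one model name per line.
ensure_model() {
  local model="$1"
  if ollama list | awk 'NR > 1 {print $1}' | grep -Fqx "$model"; then
    echo "already present: $model"
  else
    ollama pull "$model"
  fi
}
```

This avoids re-downloading multi-gigabyte models in provisioning scripts that run repeatedly.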
Fix 3: Import a GGUF model
For models not in the Ollama library (like Jais or custom fine-tunes):
# Download the GGUF file
huggingface-cli download TheBloke/model-name-GGUF model.Q5_K_M.gguf --local-dir ./
# Create a Modelfile
cat > Modelfile << 'EOF'
FROM ./model.Q5_K_M.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
EOF
# Import into Ollama
ollama create my-custom-model -f Modelfile
# Now it works
ollama run my-custom-model
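A Modelfile can also bake in a system prompt and stop tokens. A sketch (the stop token and prompt below are illustrative, not tuned for any particular model):

```
FROM ./model.Q5_K_M.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
PARAMETER stop "<|im_end|>"
SYSTEM "You are a concise, helpful assistant."
```

Run `ollama create` again after editing the Modelfile to rebuild the model with the new settings.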
Fix 4: Network/registry issues
If ollama pull fails:
# Check if Ollama's registry is reachable
curl -s https://registry.ollama.ai/v2/ | head -5
# If behind a proxy
export HTTPS_PROXY=http://your-proxy:8080
ollama pull qwen3.5:27b
# If DNS issues, try Google DNS
# Warning: this overwrites your existing resolver config β back it up first
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
Fix 5: Corrupted download
If a pull was interrupted, the model might be partially downloaded:
# Remove the corrupted model
ollama rm model-name
# Re-pull
ollama pull model-name
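You can also check for corruption directly, since blob filenames encode their digest. `check_blobs` is a hypothetical helper that assumes the default store layout (`<models dir>/blobs/sha256-<digest>`), which may differ between Ollama versions:

```shell
# Hypothetical helper: verify each blob's filename matches its sha256 digest.
# Assumes the default store layout: <models dir>/blobs/sha256-<digest>
check_blobs() {
  local dir="${1:-$HOME/.ollama/models}" blob expected actual status=0
  for blob in "$dir"/blobs/sha256-*; do
    [ -e "$blob" ] || continue               # Glob matched nothing
    expected="${blob##*sha256-}"             # Digest from the filename
    actual=$(sha256sum "$blob" | awk '{print $1}')
    if [ "$expected" != "$actual" ]; then
      echo "CORRUPT: $blob"
      status=1
    fi
  done
  return $status
}
```

Any blob it flags belongs to a model worth removing and re-pulling.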
Fix 6: Docker-specific issues
If running Ollama in Docker, the model storage might not be mounted:
# docker-compose.yml: mount the model directory
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama_data:/root/.ollama  # Persist models between restarts
volumes:
  ollama_data:
Without the volume mount, models are lost every time the container restarts.
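The same setup as a one-off `docker run` (11434 is Ollama's default port; `start_ollama` is just a hypothetical wrapper so the flags are easy to reuse):

```shell
# Hypothetical wrapper: the `docker run` equivalent of the compose file.
start_ollama() {
  docker run -d --name ollama \
    -p 11434:11434 \
    -v ollama_data:/root/.ollama \
    ollama/ollama
}
```

The `-v ollama_data:/root/.ollama` flag is the part that makes pulled models survive container restarts.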
Quick reference: popular model names
| Model | Ollama command |
|---|---|
| Qwen 3.5 27B | ollama run qwen3.5:27b |
| DeepSeek R1 14B | ollama run deepseek-r1:14b |
| Llama 4 Scout | ollama run llama4-scout |
| Gemma 4 9B | ollama run gemma4:9b |
| Phi-4 14B | ollama run phi4:14b |
| Mistral Large 2 | ollama run mistral-large |
| CodeStral | ollama run codestral |
For the full list of models and which ones are best for coding, see our best Ollama models guide.
Related: Ollama Complete Guide · Ollama Troubleshooting Guide · Ollama Out of Memory Fix · Best Ollama Models for Coding · How to Run DeepSeek Locally · How to Run Qwen 3.5 Locally