Mistral AI offers six main models in 2026, each optimized for different tasks. Here’s the complete breakdown.
The full lineup
Mistral Large 2 (123B) — Flagship reasoning
The general-purpose powerhouse. 123B dense parameters, 128K context. Competes with Claude Sonnet and GPT-4o on reasoning, coding, and multilingual tasks. Best-in-class for European languages.
- Use for: Complex reasoning, analysis, multilingual content
- Price: $2 input / $6 output per 1M tokens
- Run locally: 1x H100 (quantized) or a Mac Studio with an Ultra chip
- Full guide: Mistral Large 2 Complete Guide
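To make the API option concrete, here is a minimal sketch of a single-turn request against Mistral's OpenAI-style chat endpoint. The endpoint path and the `mistral-large-latest` alias are assumptions based on Mistral's public API conventions; check the current docs before relying on them.

```python
import json

# Assumed endpoint and model alias for Mistral's hosted API --
# verify against the current Mistral API docs.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "mistral-large-latest",
                       api_key: str = "YOUR_API_KEY"):
    """Build (url, headers, body) for a single-turn chat completion."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return API_URL, headers, body

url, headers, body = build_chat_request("Summarize this contract in French.")
# Send with any HTTP client, e.g. requests.post(url, headers=headers, data=body)
```

The same request shape works for every hosted model in this list; only the `model` string changes.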
Devstral 2 (123B) — Best open coding agent
Same 123B architecture as Large 2 but fine-tuned specifically for agentic coding. 72.2% on SWE-bench Verified — matching Claude Opus. 256K context window.
- Use for: Multi-file refactoring, autonomous coding, complex bug fixes
- Price: $2 input / $6 output per 1M tokens
- License: Modified MIT (commercial OK)
- Full guide: Devstral 2 Complete Guide
Devstral Small 2 (24B) — Local coding agent
The consumer-friendly Devstral. Runs on a single RTX 4090 or 32GB Mac. Still has the 256K context window.
- Use for: Local agentic coding, privacy-sensitive environments
- Price: Free (Apache 2.0)
- VRAM: ~14GB (Q4)
- Full guide: Devstral Small 2 Guide
Codestral (22B) — Best autocomplete
Purpose-built for code completion with native Fill-in-the-Middle. 256K context, 80+ languages, 86.6% on HumanEval.
- Use for: IDE autocomplete, tab completions, code suggestions
- Price: $0.30 input / $0.90 output per 1M tokens (or free locally via Ollama)
- VRAM: ~12GB (Q4)
- Full guide: Codestral Complete Guide
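Fill-in-the-Middle means the model sees the code both before and after the cursor and generates only the span between them. A sketch of what a FIM request body looks like, assuming Mistral's dedicated FIM endpoint with `prompt`/`suffix` fields (field names are from memory and may differ; confirm against the docs):

```python
def build_fim_request(prefix: str, suffix: str,
                      model: str = "codestral-latest") -> dict:
    """Request body for a Fill-in-the-Middle completion:
    the model fills in the code between `prefix` and `suffix`."""
    return {
        "model": model,
        "prompt": prefix,    # code before the cursor
        "suffix": suffix,    # code after the cursor
        "max_tokens": 64,
        "temperature": 0.0,  # deterministic output suits autocomplete
    }

body = build_fim_request(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))",
)
# POST to https://api.mistral.ai/v1/fim/completions (assumed path)
```

This is what IDE plugins send on every keystroke pause, which is why the cheap per-token price and local option matter so much here.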
Mistral Small (22B) — Fast and cheap
General-purpose model optimized for speed and cost. Good enough for most tasks at 1/20th the price of frontier models.
- Use for: Chat, summarization, simple coding, high-volume tasks
- Price: $0.10 input / $0.30 output per 1M tokens
Mistral Nemo (12B) — Edge deployment
The smallest Mistral model. Runs on laptops and edge devices. Apache 2.0 licensed.
- Use for: Mobile apps, edge devices, Raspberry Pi
- Price: Free (Apache 2.0)
- VRAM: ~7GB (Q4)
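The VRAM figures quoted above follow from a simple rule of thumb: a Q4-quantized model needs roughly half a byte per parameter for weights, plus a little headroom for the KV cache and runtime. The 0.5 bytes/param and 1 GB overhead constants below are ballpark assumptions, not measured values:

```python
def q4_vram_gb(params_billion: float,
               bytes_per_param: float = 0.5,  # assumed average for Q4-style quants
               overhead_gb: float = 1.0) -> float:  # assumed KV cache + runtime
    """Rough VRAM needed to run a Q4-quantized model, in GB."""
    return params_billion * bytes_per_param + overhead_gb

# Cross-check against the figures in this article:
for name, b in [("Nemo", 12), ("Codestral", 22),
                ("Devstral Small 2", 24), ("Large 2", 123)]:
    print(f"{name:18s} ~{q4_vram_gb(b):.0f} GB")
# Nemo ~7 GB, Codestral ~12 GB, Devstral Small 2 ~13 GB, Large 2 ~62 GB
```

Note the last line: ~62 GB is why a quantized Large 2 squeezes onto a single 80 GB H100 but nothing smaller.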
The recommended Mistral stack
For a complete AI coding setup using only Mistral models:
- Devstral 2 via API for complex agent tasks ($2 per 1M input tokens)
- Codestral locally for autocomplete (free)
- Mistral Small via API for quick questions ($0.10 per 1M input tokens)
Total cost: ~$10-30/month for heavy use. Compare that to Claude Opus at $200+/month.
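To sanity-check that ~$10-30 figure, price out a heavy month. The token volumes below are illustrative guesses; the per-million-token prices are the ones listed above:

```python
def monthly_cost(usage: dict) -> float:
    """usage: model -> (input Mtok, output Mtok, $/Mtok in, $/Mtok out)."""
    return sum(i * pi + o * po for i, o, pi, po in usage.values())

# Assumed heavy-use volumes; prices taken from the lineup above.
heavy_month = {
    "devstral-2":    (5.0, 1.5, 2.00, 6.00),   # agent tasks: 5M in, 1.5M out
    "mistral-small": (10.0, 2.0, 0.10, 0.30),  # quick questions
    # Codestral autocomplete runs locally via Ollama: $0
}
print(f"${monthly_cost(heavy_month):.2f}")  # $20.60
```

Even with generous volumes, the agent model dominates the bill, which is why offloading autocomplete to a local Codestral matters.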
Tools
| Tool | Best Mistral model |
|---|---|
| Vibe CLI | Devstral 2 (native) |
| Aider | Any (via API) |
| Continue.dev | Codestral (autocomplete) + Large 2 (chat) |
| OpenCode | Devstral 2 (via API) |
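As one concrete wiring for the Continue.dev row, here is a config pairing local Codestral for autocomplete with Large 2 for chat. The field names follow Continue's classic `config.json` shape from memory and may differ in current versions, so treat this as a sketch and check the Continue docs:

```python
import json

# Field names assumed from Continue.dev's legacy config.json format --
# verify against the current Continue documentation before copying.
continue_config = {
    "models": [{
        "title": "Mistral Large 2",
        "provider": "mistral",
        "model": "mistral-large-latest",
        "apiKey": "YOUR_API_KEY",
    }],
    "tabAutocompleteModel": {
        "title": "Codestral (local)",
        "provider": "ollama",  # free, runs on your own GPU
        "model": "codestral",
    },
}
print(json.dumps(continue_config, indent=2))
```

This split mirrors the recommended stack above: pay per token only for chat-level reasoning, keep the high-frequency autocomplete traffic local and free.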
Related: What is Mistral AI? · Mistral Large 2 vs Claude Sonnet · How to Run Mistral Large 2 Locally