Mistral's API (La Plateforme) gives you access to all of their models through a single endpoint. Pricing starts at $0.10 per 1M tokens, roughly 25-150x cheaper than Claude Opus.
Quick start
```shell
pip install mistralai
```

```python
from mistralai import Mistral

client = Mistral(api_key="your-mistral-key")

response = client.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Write a REST API in FastAPI"}],
)
print(response.choices[0].message.content)
```
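The `messages` list is just role/content dicts, so multi-turn chat means appending each exchange and resending the whole history. A minimal sketch (the `append_turn` helper is illustrative, not part of the `mistralai` SDK):

```python
# Illustrative multi-turn history helper; not part of the mistralai SDK.
# Each turn appends the user message and the assistant's reply, so context
# carries over into the next client.chat.complete(...) call.

def append_turn(history, user_text, assistant_text):
    """Return a new message list with one user/assistant exchange appended."""
    return history + [
        {"role": "user", "content": user_text},
        {"role": "assistant", "content": assistant_text},
    ]

history = [{"role": "system", "content": "You are a concise coding assistant."}]
history = append_turn(history, "Write a REST API in FastAPI",
                      "from fastapi import FastAPI ...")

# The next request would pass the accumulated history plus the new message:
# response = client.chat.complete(
#     model="mistral-large-latest",
#     messages=history + [{"role": "user", "content": "Now add auth"}],
# )
```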
Available models
| Model ID | Price (in/out per 1M) | Best for |
|---|---|---|
| mistral-large-latest | $2 / $6 | Complex reasoning |
| devstral-2-latest | $2 / $6 | Agentic coding |
| codestral-latest | $0.30 / $0.90 | Code completion + FIM |
| mistral-small-latest | $0.10 / $0.30 | Fast, cheap tasks |
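With per-1M-token rates, cost estimates are simple arithmetic: tokens divided by 1,000,000, times the rate, summed over input and output. A quick sketch with the table's rates hard-coded (rates change, so verify against Mistral's current pricing page before relying on them):

```python
# Rough cost estimator. (input, output) USD rates per 1M tokens are taken
# from the table above; check Mistral's pricing page for current values.
RATES = {
    "mistral-large-latest": (2.00, 6.00),
    "devstral-2-latest": (2.00, 6.00),
    "codestral-latest": (0.30, 0.90),
    "mistral-small-latest": (0.10, 0.30),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Return the estimated USD cost for one request."""
    rate_in, rate_out = RATES[model]
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# 100k input / 20k output tokens on mistral-small-latest:
print(f"${estimate_cost('mistral-small-latest', 100_000, 20_000):.4f}")  # → $0.0160
```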
Fill-in-the-Middle (Codestral)
Codestral's killer feature: it understands the code both before and after the cursor.
```python
response = client.fim.complete(
    model="codestral-latest",
    prompt="def fibonacci(n):\n ",
    suffix="\n return result",
)
```
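The FIM endpoint returns only the middle portion; the caller splices it back between the prompt and suffix. A sketch of that reassembly, where the `completion` string stands in for what the API would return in `response.choices[0].message.content`:

```python
def assemble_fim(prompt, completion, suffix):
    """Splice the model's middle completion between prompt and suffix."""
    return prompt + completion + suffix

# Stand-in for response.choices[0].message.content:
completion = "result = n  # placeholder body"

full_source = assemble_fim(
    "def fibonacci(n):\n    ",
    completion,
    "\n    return result",
)
print(full_source)
```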
Via OpenRouter
```python
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="your-key")

response = client.chat.completions.create(
    model="mistralai/mistral-large-2",
    messages=[{"role": "user", "content": "Explain WebSockets"}],
)
```
See our OpenRouter guide.
JavaScript/TypeScript
```typescript
import { Mistral } from '@mistralai/mistralai';

const client = new Mistral({ apiKey: 'your-key' });

const response = await client.chat.complete({
  model: 'devstral-2-latest',
  messages: [{ role: 'user', content: 'Refactor this to use async/await' }],
});
```
With coding tools
```shell
# Aider
aider --model mistralai/devstral-2 --api-key $MISTRAL_API_KEY

# Vibe CLI (native Mistral tool)
npm install -g @mistralai/vibe-cli
vibe
```
GDPR advantage
Mistral's API runs on EU infrastructure. For European companies that need GDPR compliance, data stays in Europe without additional configuration, so no Standard Contractual Clauses are required for the transfer.
Related: What is Mistral AI? · Mistral AI Complete Model Guide · How to Run Mistral Models Locally · Best Free AI APIs 2026