How to Use DeepSeek V4 With Aider: Setup Guide for V4 Pro and Flash (2026)
Aider is one of the best terminal-based AI coding assistants available today, and it works exceptionally well with DeepSeek V4 models. This guide walks you through setting up Aider with both DeepSeek V4 Pro and V4 Flash, configuring your environment, and getting the most out of each model.
If you have used Aider with earlier DeepSeek models, the V4 setup follows the same pattern but unlocks significantly better performance across the board.
Why Use DeepSeek V4 With Aider?
Aider supports DeepSeek V4 through the OpenAI-compatible API that DeepSeek provides. This means you do not need any special plugins or adapters. Aider treats DeepSeek V4 as a first-class model provider, and the integration is smooth out of the box.
Key reasons to pair them:
- V4 Pro handles complex multi-file refactoring, architecture changes, and large codebases with strong reasoning capabilities.
- V4 Flash delivers fast, cheap responses for everyday coding tasks like writing functions, fixing bugs, and generating tests.
- Both models support large context windows, which Aider leverages for its repository map feature.
- Pricing is a fraction of comparable models from OpenAI and Anthropic.
Setup: Direct DeepSeek API
Step 1: Get Your API Key
Sign up at platform.deepseek.com and generate an API key from the dashboard.
Step 2: Export Your Key
Add this to your shell profile (.bashrc, .zshrc, or equivalent):
export DEEPSEEK_API_KEY="your-api-key-here"
Reload your shell or run source ~/.zshrc.
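Putting the whole step together as a runnable snippet (the key value is a placeholder you replace with your own):

```shell
# Persist the key in your shell profile (placeholder value shown)
echo 'export DEEPSEEK_API_KEY="your-api-key-here"' >> ~/.zshrc

# Load it into the current session
source ~/.zshrc

# Confirm it is set before launching Aider
[ -n "$DEEPSEEK_API_KEY" ] && echo "DEEPSEEK_API_KEY is set"
```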
Step 3: Launch Aider With V4
For V4 Pro (best for complex tasks):
aider --model deepseek/deepseek-v4-pro
For V4 Flash (best for speed and cost):
aider --model deepseek/deepseek-v4-flash
That is all you need. Aider automatically routes requests to the DeepSeek API when it detects the deepseek/ model prefix and the DEEPSEEK_API_KEY environment variable.
Setup: Via OpenRouter
If you prefer to use OpenRouter as a unified gateway (useful if you switch between multiple providers), the setup is slightly different.
export OPENROUTER_API_KEY="your-openrouter-key-here"
Then launch Aider with the OpenRouter model path:
aider --model openrouter/deepseek/deepseek-v4-pro
Or for Flash:
aider --model openrouter/deepseek/deepseek-v4-flash
OpenRouter adds a small markup to the per-token cost, but it gives you a single API key for dozens of providers and automatic fallback routing.
Configuration Tips
Thinking Mode
DeepSeek V4 Pro supports extended thinking, which improves results on complex reasoning tasks. You can enable this in Aider:
aider --model deepseek/deepseek-v4-pro --thinking
This tells the model to use chain-of-thought reasoning before producing its final answer. It increases latency and token usage but noticeably improves output quality for architectural decisions and tricky refactors.
Context Window Settings
V4 Pro supports up to 128K tokens of context. Aider respects this by default, but you can explicitly set it:
aider --model deepseek/deepseek-v4-pro --map-tokens 2048 --max-chat-history-tokens 4096
- --map-tokens controls how many tokens Aider uses for its repository map.
- --max-chat-history-tokens limits how much conversation history is sent with each request.
For V4 Flash (64K context), keep these values conservative to avoid hitting the limit on larger projects.
Persistent Configuration
Instead of passing flags every time, create a .aider.conf.yml in your project root:
model: deepseek/deepseek-v4-pro
thinking: true
map-tokens: 2048
max-chat-history-tokens: 4096
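If you prefer to script the setup, the same config file can be written from the shell:

```shell
# Create the project-level Aider config described above
cat > .aider.conf.yml <<'EOF'
model: deepseek/deepseek-v4-pro
thinking: true
map-tokens: 2048
max-chat-history-tokens: 4096
EOF
```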
When to Use V4 Flash vs V4 Pro
The two models serve different purposes. Here is a practical breakdown:
| Task | Recommended Model | Why |
|---|---|---|
| Quick bug fixes | V4 Flash | Fast response, low cost |
| Writing unit tests | V4 Flash | Straightforward generation |
| Multi-file refactoring | V4 Pro | Better reasoning across files |
| Architecture planning | V4 Pro | Stronger at complex decisions |
| Code review and suggestions | V4 Flash | Good enough for most reviews |
| Migrating frameworks | V4 Pro | Needs deep understanding of both |
| Writing documentation | V4 Flash | Simple generation task |
| Debugging complex issues | V4 Pro | Thinking mode helps significantly |
A good workflow: start your day with V4 Flash for routine tasks, then switch to V4 Pro when you hit something that needs deeper reasoning. You can switch models mid-session in Aider with the /model command.
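Mid-session, the switch is a single command typed at the Aider prompt:

```
/model deepseek/deepseek-v4-pro
```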
Cost Comparison
One of the biggest advantages of DeepSeek V4 is pricing. Here is what a typical Aider coding session looks like in terms of cost.
Assumptions: a 45-minute session with roughly 50K input tokens and 10K output tokens.
| Model | Input Cost (per 1M tokens) | Output Cost (per 1M tokens) | Estimated Session Cost |
|---|---|---|---|
| DeepSeek V4 Flash | $0.10 | $0.30 | ~$0.008 |
| DeepSeek V4 Pro | $0.50 | $1.50 | ~$0.040 |
| GPT-5.4 | $2.50 | $10.00 | ~$0.225 |
V4 Flash is roughly 28x cheaper than GPT-5.4 for a typical session. Even V4 Pro comes in at about 5.6x cheaper. Over a month of daily coding sessions, the savings add up fast.
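The session estimates in the table follow directly from the per-token prices. Here is the arithmetic as a quick sanity check you can run yourself:

```shell
# Session cost = (input_tokens * input_price + output_tokens * output_price) / 1,000,000
awk 'BEGIN {
  in_tok = 50000; out_tok = 10000
  printf "V4 Flash: $%.3f\n", (in_tok * 0.10 + out_tok * 0.30)  / 1e6
  printf "V4 Pro:   $%.3f\n", (in_tok * 0.50 + out_tok * 1.50)  / 1e6
  printf "GPT-5.4:  $%.3f\n", (in_tok * 2.50 + out_tok * 10.00) / 1e6
}'
```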
Troubleshooting Common Issues
"Model not found" error
Make sure you are using the correct model string. It must include the provider prefix:
# Correct
aider --model deepseek/deepseek-v4-pro
# Wrong
aider --model deepseek-v4-pro
Authentication failures
Verify your API key is set correctly:
echo $DEEPSEEK_API_KEY
If it prints nothing, your key is not exported. Re-add it to your shell profile and reload.
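A quick check-and-fix sketch for the current session (the key value is a placeholder):

```shell
# Re-export the key for the current session if it is missing (placeholder value)
if [ -z "$DEEPSEEK_API_KEY" ]; then
  export DEEPSEEK_API_KEY="your-api-key-here"
fi
echo "${DEEPSEEK_API_KEY:+key is set}"
```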
Slow responses with V4 Pro
If you enabled thinking mode, responses will naturally take longer. For faster iteration, switch to V4 Flash or disable thinking:
aider --model deepseek/deepseek-v4-pro --no-thinking
Context window exceeded
If Aider warns about context limits, reduce the map tokens and chat history:
aider --model deepseek/deepseek-v4-flash --map-tokens 1024 --max-chat-history-tokens 2048
For very large repositories, consider using Aider's /add and /drop commands to manage which files are in context.
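For example, inside the Aider chat (the file paths here are hypothetical):

```
/add src/payments.py src/models.py
/drop tests/
```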
Rate limiting
DeepSeek applies rate limits on free-tier accounts. If you hit them frequently, upgrade to a paid plan or route through OpenRouter, which handles retries automatically.
FAQ
Can I use DeepSeek V4 with Aider offline via Ollama?
Not with V4 Pro or V4 Flash directly, as these are cloud-only models. However, you can run smaller DeepSeek models locally with Aider and Ollama. For the full V4 experience, you need the API.
Can I switch between V4 Pro and V4 Flash during a session?
Yes. Use the /model command inside Aider to switch models without restarting your session. This is great for starting with Flash and escalating to Pro when needed.
Does Aider support DeepSeek V4 function calling?
Aider primarily uses the chat/completion interface rather than function calling. DeepSeek V4's strong instruction-following ability means Aider's edit formats (diff, whole, etc.) work reliably without needing function calling support.
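If you want to pin the edit format rather than let Aider pick, --edit-format is a standard Aider option; for example:

```
aider --model deepseek/deepseek-v4-pro --edit-format diff
```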
Wrapping Up
DeepSeek V4 and Aider make a powerful combination for AI-assisted coding. V4 Flash keeps your daily costs near zero while V4 Pro handles the heavy lifting when you need it. The setup takes under two minutes, and the OpenAI-compatible API means everything just works.
For more details on the models themselves, check out the DeepSeek V4 Pro complete guide and the V4 Flash guide. If you are new to Aider, start with the Aider complete guide to get familiar with the tool before diving into model configuration.