Yi is a family of open-source language models built by 01.AI, a Chinese AI lab founded by Kai-Fu Lee (former president of Google China). Most of the models are fully open under the Apache 2.0 license and compete directly with Qwen, DeepSeek, and Llama.
The Yi model family
| Model | Parameters | Best for | License |
|---|---|---|---|
| Yi-34B | 34B | General purpose, strong reasoning | Apache 2.0 |
| Yi-1.5-34B | 34B | Improved version, 500B extra tokens | Apache 2.0 |
| Yi-Coder | 1.5B / 9B | Code generation, small and fast | Apache 2.0 |
| Yi-Lightning | Undisclosed | Flagship, ranked 6th on Chatbot Arena | API only |
| Yi-6B | 6B | Lightweight, edge deployment | Apache 2.0 |
| Yi-VL | 34B | Vision + language (multimodal) | Apache 2.0 |
What makes Yi different
Strong bilingual performance
Yi was trained on a balanced Chinese-English dataset, making it one of the strongest bilingual model families. Its flagship, Yi-Lightning, ranks 2nd-4th in the Chinese, Math, Coding, and Hard Prompts categories on Chatbot Arena.
Yi-Coder: the hidden gem
Yi-Coder is a purpose-built coding model under 10B parameters. It delivers strong code generation performance at a fraction of the size of Devstral or Codestral. At 9B parameters, quantized, it runs on machines with 8GB+ RAM.
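The 8GB figure follows from a back-of-the-envelope calculation: at 4-bit quantization each parameter costs half a byte, plus some runtime overhead. A minimal sketch of that heuristic (the overhead constant is an assumption, not a measured value):

```python
def estimate_ram_gb(params_billions: float, bits_per_weight: int, overhead_gb: float = 1.5) -> float:
    """Rough RAM estimate for running a quantized LLM locally.

    Heuristic: weights take params * bits / 8 bytes, plus a fixed
    allowance for the KV cache and runtime (overhead_gb is a guess).
    """
    weight_gb = params_billions * bits_per_weight / 8  # params are in billions, so result is GB
    return round(weight_gb + overhead_gb, 1)

# Yi-Coder 9B at 4-bit quantization (Ollama's default quantized variants):
print(estimate_ram_gb(9, 4))   # ~6.0 GB -> fits in 8GB RAM
# Yi-34B at 4-bit:
print(estimate_ram_gb(34, 4))  # ~18.5 GB -> needs ~24GB RAM
```

The same arithmetic explains the 24GB figure for Yi-34B in the comparison table below.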
Apache 2.0 license
Unlike some models with restrictive licenses, Yi is fully Apache 2.0: use it commercially, modify it, and distribute it, with no restrictions. That is the same freedom as DeepSeek (MIT) and more permissive than Llama (custom license).
Yi vs other Chinese models
| | Yi-34B | Qwen 3.5 27B | DeepSeek V3 | GLM-5.1 |
|---|---|---|---|---|
| Parameters | 34B | 27B (MoE 397B) | MoE 671B | MoE 754B |
| License | Apache 2.0 | Apache 2.0 | MIT | MIT |
| Coding | Good | Very good | Excellent | Excellent |
| Chinese | Excellent | Excellent | Excellent | Excellent |
| Run locally | Yes (24GB RAM) | Yes (20GB RAM) | Only the 14B version | No (API via Z.ai) |
| Best tool | Ollama | Ollama | Aider | Claude Code |
Yi-34B is a solid all-rounder but falls behind the latest Qwen and DeepSeek models on coding benchmarks. Its strength is the 34B dense architecture: no MoE complexity, predictable performance, and straightforward deployment.
How to run Yi locally
```shell
# Install Ollama
brew install ollama

# Pull Yi models
ollama pull yi:34b        # Full 34B model
ollama pull yi-coder:9b   # Coding-focused 9B
ollama pull yi:6b         # Lightweight 6B

# Run
ollama run yi-coder:9b "Write a Python function to parse JSON from a URL"
```
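Besides the CLI, Ollama serves a local REST API (port 11434 by default). A minimal sketch of the request body for its `/api/generate` endpoint; the helper only builds the JSON payload, and the commented-out call assumes an Ollama server is running locally:

```python
import json

def build_generate_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

payload = build_generate_payload(
    "yi-coder:9b",
    "Write a Python function to parse JSON from a URL",
)
print(json.dumps(payload))

# To actually send it (requires a local Ollama server):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

Setting `"stream": false` returns one complete JSON response instead of a token stream, which is simpler for scripting.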
Connect to Aider:
```shell
aider --model ollama/yi-coder:9b
```
Connect to Continue.dev:
```json
{
  "models": [{
    "title": "Yi Coder",
    "provider": "ollama",
    "model": "yi-coder:9b"
  }]
}
```
Who should use Yi
- Budget hardware: Yi-Coder 9B runs on 8GB RAM, great for laptops
- Bilingual projects: best Chinese-English performance at this size
- Apache 2.0 needed: no license restrictions for commercial use
- Dense model preferred: simpler than MoE architectures, easier to deploy
For pure coding quality, Devstral Small 24B or Qwen 3.5 27B are better. For the smallest useful coding model, Yi-Coder 9B is competitive with Qwen3 8B.
The company behind Yi
01.AI was founded in 2023 by Kai-Fu Lee, one of the most influential figures in Chinese tech. He was president of Google China, founded Microsoft Research China (later Microsoft Research Asia), and wrote the bestselling book "AI Superpowers." The company raised $1B+ in funding and is headquartered in Beijing.
Their philosophy is different from other Chinese AI labs: they focus on open-source models with permissive licenses, aiming to be the βRed Hat of AIβ rather than competing on proprietary APIs.
Yi-Lightning: the flagship
Yi-Lightning is 01.AIβs most capable model, available only via API. It ranked 6th overall on Chatbot Arena and 2nd-4th in specialized categories (Chinese, Math, Coding, Hard Prompts). Key features:
- Competitive with Claude Sonnet and GPT-5 on reasoning tasks
- Particularly strong on Chinese language tasks
- Fast inference (optimized architecture)
- Not open source (API only)
For developers who need the best Yi quality via API, Yi-Lightning is the choice. For local deployment, Yi-34B and Yi-Coder are the open alternatives.
Yi-Coder: purpose-built for code
Yi-Coder deserves special attention. At just 9B parameters, it delivers coding performance that rivals models 3-4x its size:
- Trained specifically on code data (not just general text)
- Supports 52 programming languages
- 128K context window (enough for most codebases)
- Apache 2.0 license (fully commercial)
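To put the 128K context window in perspective, a common rule of thumb is roughly 4 characters per token for code. The sketch below uses that ratio (an approximation, not Yi's actual tokenizer) to estimate whether a set of files fits:

```python
CONTEXT_TOKENS = 128_000  # Yi-Coder's advertised context window
CHARS_PER_TOKEN = 4       # rough heuristic; the real tokenizer will differ

def fits_in_context(file_sizes_chars: list[int]) -> bool:
    """Rough check: do files of these sizes fit in a 128K-token context?"""
    est_tokens = sum(file_sizes_chars) / CHARS_PER_TOKEN
    return est_tokens <= CONTEXT_TOKENS

# e.g. twenty 20KB source files ~= 100K tokens -> fits
print(fits_in_context([20_000] * 20))  # True
```

By this estimate, around 500KB of source text fits in one context, which covers most small-to-medium codebases.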
Itβs the model to pick when you need a coding assistant on a laptop without a dedicated GPU. See our how to run Yi locally guide for setup.
FAQ
Is Yi still relevant compared to Qwen and DeepSeek?
Yi-34B has fallen behind the latest Qwen 3.5 and DeepSeek V3 on most benchmarks. However, Yi-Coder 9B remains competitive as one of the best sub-10B coding models, and Yi-Lightning (API only) ranks in the top 10 on Chatbot Arena. For local deployment on budget hardware, Yi-Coder 9B is still an excellent choice.
Whatβs the best Yi model for coding on a laptop?
Yi-Coder 9B is the clear choice: it needs only 8GB RAM, supports 128K context, and was trained specifically on code across 52 programming languages. Run it via `ollama pull yi-coder:9b` and connect it to Aider or Continue.dev for a free local coding assistant.
How is 01.AI different from other Chinese AI labs?
01.AI focuses on open-source models with permissive Apache 2.0 licenses, aiming to be the βRed Hat of AI.β Unlike Alibaba (Qwen) or Baidu which are large corporations, 01.AI is a startup founded by Kai-Fu Lee (former Google China president) with a philosophy of democratizing AI through fully open, commercially unrestricted models.
Related: How to Run Yi Locally Β· Yi vs Qwen vs DeepSeek Β· Best Ollama Models for Coding Β· Best Open Source Coding Models Β· How to Run Qwen 3.5 Locally Β· Ollama Complete Guide