GLM-5.1 is available through multiple API providers. Here’s everything you need to know about accessing it programmatically.
## API options
There are three main ways to access GLM-5.1 via API:
- Z.ai direct API — Official endpoint from the model creator
- GLM Coding Plan — Subscription-based access optimized for coding tools
- OpenRouter — Third-party aggregator with pay-per-token pricing
## Z.ai Direct API

### Authentication
Sign up at z.ai and generate an API key from your dashboard.
### Endpoints

Z.ai provides both OpenAI-compatible and Anthropic-compatible endpoints:

**OpenAI-compatible:**

```bash
curl https://api.z.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "glm-5.1",
    "messages": [
      {"role": "system", "content": "You are a senior software engineer."},
      {"role": "user", "content": "Refactor this function to use async/await"}
    ],
    "temperature": 0.7,
    "max_tokens": 4096
  }'
```
**Anthropic-compatible:**

```bash
curl https://api.z.ai/v1/messages \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "glm-5.1",
    "max_tokens": 4096,
    "messages": [
      {"role": "user", "content": "Write unit tests for this module"}
    ]
  }'
```
The Anthropic-compatible endpoint is what lets GLM-5.1 act as a drop-in replacement for Claude models inside Claude Code.
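In practice the two request formats differ in only a couple of places: the Anthropic format requires `max_tokens` and takes the system prompt as a top-level `system` field rather than a message. A small helper (hypothetical, purely for illustration) that maps an OpenAI-style message list onto an Anthropic-style payload:

```python
# Convert an OpenAI-style chat payload into Anthropic Messages format.
# Illustrative only: it covers the two structural differences shown above
# (the system prompt moves to a top-level field; max_tokens is required).
def to_anthropic_payload(model: str, messages: list, max_tokens: int = 4096) -> dict:
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    payload = {
        "model": model,
        "max_tokens": max_tokens,  # required by the Anthropic format
        "messages": [m for m in messages if m["role"] != "system"],
    }
    if system_parts:
        payload["system"] = "\n".join(system_parts)
    return payload

payload = to_anthropic_payload(
    "glm-5.1",
    [{"role": "system", "content": "You are a senior software engineer."},
     {"role": "user", "content": "Refactor this function to use async/await"}],
)
print(payload["system"])         # You are a senior software engineer.
print(len(payload["messages"]))  # 1
```

Going the other direction is just as mechanical, which is why the same model can sit behind both endpoints.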
### Pricing
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| GLM-5.1 | ~$1.00 | ~$2.30 |
| GLM-5 | ~$0.80 | ~$2.00 |
| GLM-5-Turbo | ~$0.40 | ~$1.00 |
| GLM-4.7 | ~$0.20 | ~$0.50 |
GLM-5.1 is priced higher than GLM-5 despite sharing the same architecture. Z.ai attributes the premium to improved agentic performance, though some in the community have questioned this, since two models with identical architectures should have identical inference costs.
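Because the table rates are approximate, it can still be handy to sanity-check what a session will cost before running it. A quick back-of-the-envelope estimator using the rates above:

```python
# Rough per-request cost estimator built from the published per-1M-token
# rates. The table marks every rate with "~", so treat results as estimates.
RATES = {
    "glm-5.1":     {"input": 1.00, "output": 2.30},
    "glm-5":       {"input": 0.80, "output": 2.00},
    "glm-5-turbo": {"input": 0.40, "output": 1.00},
    "glm-4.7":     {"input": 0.20, "output": 0.50},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request or session."""
    rate = RATES[model]
    return (input_tokens * rate["input"] + output_tokens * rate["output"]) / 1_000_000

# A long agentic session: 2M input tokens, 500K output tokens on GLM-5.1
print(round(estimate_cost("glm-5.1", 2_000_000, 500_000), 2))  # 3.15
```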
## GLM Coding Plan
The Coding Plan is a subscription that bundles API access with coding tool integration:
| Tier | Price | Models | Best for |
|---|---|---|---|
| Lite | $3/month | GLM-5.1, GLM-5-Turbo, GLM-4.x | Light usage, learning |
| Pro | $10/month | All models including GLM-5 | Daily development |
| Max | Higher | All models, higher limits | Teams, heavy usage |
All tiers support GLM-5.1. The Coding Plan includes setup guides for Claude Code, OpenClaw, Cline, and other popular tools.
### Setup

```bash
# Install via the GLM CLI
npm install -g @zai/glm-cli
glm auth login

# Or configure manually
export ANTHROPIC_BASE_URL="https://api.z.ai/v1"
export ANTHROPIC_API_KEY="your-coding-plan-key"
```
## OpenRouter
OpenRouter provides GLM-5.1 through their unified API:
```bash
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_OPENROUTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "z-ai/glm-5.1",
    "messages": [
      {"role": "user", "content": "Explain this error and fix it"}
    ]
  }'
```
OpenRouter pricing varies by provider but is typically competitive with Z.ai's direct pricing. Check openrouter.ai/z-ai/glm-5.1 for current rates.
## Python SDK

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.z.ai/v1",
    api_key="your-api-key",
)

response = client.chat.completions.create(
    model="glm-5.1",
    messages=[
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Write a REST API in FastAPI with user authentication"},
    ],
    temperature=0.3,
    max_tokens=8192,
)

print(response.choices[0].message.content)
```
## JavaScript/TypeScript SDK

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.z.ai/v1',
  apiKey: 'your-api-key',
});

const response = await client.chat.completions.create({
  model: 'glm-5.1',
  messages: [
    { role: 'user', content: 'Create a React component for a data table with sorting and filtering' }
  ],
  temperature: 0.3,
});

console.log(response.choices[0].message.content);
```
## Reasoning mode
GLM-5.1 supports reasoning (chain-of-thought) through OpenRouter's `reasoning` parameter:

```bash
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "z-ai/glm-5.1",
    "messages": [{"role": "user", "content": "Debug this race condition"}],
    "reasoning": {"effort": "high"}
  }'
```
The response includes a `reasoning_details` array showing the model's step-by-step thinking.
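Pulling the chain-of-thought back out is a matter of joining those entries. The entry shape below (a `text` field per item) is an assumption based on OpenRouter's documented format; check their docs if your entries look different:

```python
# Extract the model's chain-of-thought from an OpenRouter-style response.
# Assumes reasoning_details entries carry a "text" field, which is the
# common shape in OpenRouter responses; entries without one are skipped.
def extract_reasoning(response: dict) -> str:
    message = response["choices"][0]["message"]
    details = message.get("reasoning_details") or []
    return "\n".join(d["text"] for d in details if "text" in d)

# Mocked response, purely for illustration
mock = {"choices": [{"message": {
    "content": "The race is on the shared counter; guard it with a mutex.",
    "reasoning_details": [
        {"type": "reasoning.text", "text": "Two threads increment without a lock."},
        {"type": "reasoning.text", "text": "Wrap the increment in a mutex."},
    ],
}}]}
print(extract_reasoning(mock))
```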
## Tool use

GLM-5.1 supports function calling through the standard OpenAI `tools` format:

```python
response = client.chat.completions.create(
    model="glm-5.1",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    }]
)
```
This is critical for agentic workflows where GLM-5.1 needs to call external tools over thousands of iterations.
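The response side of that loop works the same as with any OpenAI-compatible API: when the model answers with `tool_calls` instead of text, you run the named function locally and feed the result back as a `tool` message. A minimal sketch of that dispatch step, using a mocked response in place of a live API call:

```python
import json

# One dispatch step of an agentic loop: read tool_calls from an
# assistant message, run the matching local function, and append the
# result as a "tool" message for the follow-up request.
def get_weather(location: str) -> str:
    return f"22°C and clear in {location}"  # stub; a real tool would call an API

TOOLS = {"get_weather": get_weather}

def run_tool_calls(message: dict, history: list) -> list:
    history.append(message)
    for call in message.get("tool_calls", []):
        fn = TOOLS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        history.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": fn(**args),
        })
    return history

# Mocked assistant message, as it appears in response.choices[0].message
assistant_msg = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"location": "Tokyo"}'},
    }],
}
history = run_tool_calls(
    assistant_msg,
    [{"role": "user", "content": "What's the weather in Tokyo?"}],
)
print(history[-1]["content"])  # 22°C and clear in Tokyo
```

Sending `history` back in the next request closes the loop; a real agent repeats this until the model replies with plain text.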
## Tips for best results
- Use low temperature (0.1-0.3) for coding — Higher temperatures introduce unnecessary variation in code output
- Provide system prompts — GLM-5.1 responds well to role-based system messages
- Use the Anthropic endpoint for Claude Code — The OpenAI endpoint works for general use, but Claude Code specifically needs the Anthropic-compatible one
- Monitor token usage — GLM-5.1’s 200K context window is generous, but long agentic sessions can consume millions of tokens
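For the last point, a small accumulator over the `usage` object that each OpenAI-compatible response returns is usually enough to catch runaway sessions (field names here are the standard `prompt_tokens`/`completion_tokens`):

```python
# Session-level token accounting: sum the usage block from each response
# so long agentic runs don't silently blow past a budget.
class TokenBudget:
    def __init__(self, limit: int):
        self.limit = limit
        self.prompt = 0
        self.completion = 0

    def add(self, usage: dict) -> None:
        self.prompt += usage.get("prompt_tokens", 0)
        self.completion += usage.get("completion_tokens", 0)

    @property
    def total(self) -> int:
        return self.prompt + self.completion

    def exceeded(self) -> bool:
        return self.total > self.limit

budget = TokenBudget(limit=1_000_000)
# After each API call: budget.add(response.usage.model_dump())
budget.add({"prompt_tokens": 180_000, "completion_tokens": 12_000})
print(budget.total, budget.exceeded())  # 192000 False
```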
Related: GLM-5.1 Complete Guide · How to Use GLM-5.1 with Claude Code · Best Free AI APIs 2026