🤖 AI Tools · 10 min read

How to Use Poolside Laguna with Aider, OpenCode, and Claude Code (2026)


Poolside Laguna models are some of the strongest open-weight coding models available in 2026, but they’re only useful if you can plug them into your actual development workflow. This guide covers how to set up Laguna M.1 and XS.2 with the most popular AI coding tools: Aider, OpenCode, Claude Code, and Continue.dev.

All setups use OpenRouter as the API provider, which gives you access to both Laguna M.1 (paid) and Laguna XS.2 (free) through a single API key. We’ll also cover local deployment via Ollama for developers who want to run Laguna on their own hardware.

Prerequisites

Before starting, you’ll need:

  1. An OpenRouter account — Sign up at openrouter.ai and generate an API key
  2. One or more coding tools installed — Aider, OpenCode, Claude Code, or Continue.dev
  3. For local deployment: Ollama installed on your machine

The OpenRouter API key is the same for all tools. Laguna XS.2 is free on OpenRouter (no credit card required). Laguna M.1 requires credits at $2.00/$8.00 per million input/output tokens.
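
A quick sanity check before wiring the key into any tool: the request below (a minimal sketch, assuming OpenRouter’s standard OpenAI-compatible chat completions endpoint and the model slugs used throughout this guide) sends one message to the free XS.2 model.

# Sanity-check your OpenRouter key against the free XS.2 model
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "poolside/laguna-xs2",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'

A JSON response containing a choices array means the key works; a 401 error means the key is missing or mistyped.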

For background on Poolside’s model lineup and capabilities, see What Is Poolside AI?. For API details, check the Poolside Laguna API guide.

Setting up Aider with Poolside Laguna

Aider is a terminal-based AI coding assistant that edits files directly in your repository. It’s one of the best tools for using Laguna because it supports custom OpenAI-compatible endpoints natively.

Quick setup

Set your environment variables:

export OPENROUTER_API_KEY="sk-or-v1-your-key-here"

Run Aider with Laguna M.1:

aider --model openrouter/poolside/laguna-m1 \
      --api-key openrouter=$OPENROUTER_API_KEY

Or with the free Laguna XS.2:

aider --model openrouter/poolside/laguna-xs2 \
      --api-key openrouter=$OPENROUTER_API_KEY

Persistent configuration

Create or edit ~/.aider.conf.yml to avoid typing flags every time:

# ~/.aider.conf.yml
model: openrouter/poolside/laguna-m1
api-key:
  openrouter: sk-or-v1-your-key-here

# Optional: use XS.2 as the cheap model for simple tasks
weak-model: openrouter/poolside/laguna-xs2

# Recommended settings for Laguna
edit-format: diff
auto-commits: true

This configuration uses Laguna M.1 as the primary model and XS.2 as the “weak model” for cheaper operations like commit messages and simple edits. Aider automatically routes simpler tasks to the weak model to save costs.

Aider with local Laguna (Ollama)

If you’re running Laguna XS.2 locally via Ollama:

# Start Ollama (if not already running)
ollama serve

# Pull the model
ollama pull poolside/laguna-xs2

# Run Aider with local model
aider --model ollama/poolside/laguna-xs2

For the Ollama setup, no API key is needed. Aider communicates with Ollama’s local server at http://localhost:11434.
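
To confirm the server is up and the model is available before launching Aider, you can query Ollama’s local API (the /api/tags endpoint lists the models you’ve pulled):

# Confirm the Ollama server is reachable and laguna-xs2 is pulled
curl -s http://localhost:11434/api/tags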

Recommended Aider settings

Laguna models work best with these Aider settings:

# ~/.aider.conf.yml — optimized for Laguna
model: openrouter/poolside/laguna-m1
edit-format: diff          # Laguna handles diff format well
auto-commits: true         # Let Aider commit changes
map-tokens: 2048           # Repository map size
cache-prompts: true        # Reduce redundant API calls

The diff edit format is important — Laguna’s RLCEF training makes it particularly good at generating precise diffs rather than rewriting entire files. This saves tokens and produces cleaner edits.
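
For context, Aider’s diff format asks the model for search/replace blocks rather than full file rewrites. A response in that format looks roughly like this (illustrative only; the file and function are made up):

greeting.py
<<<<<<< SEARCH
def greet(name):
    print("Hello " + name)
=======
def greet(name: str) -> None:
    print(f"Hello {name}")
>>>>>>> REPLACE

Only the changed region is transmitted, which is why this format is cheaper and less error-prone than whole-file rewrites when the model can target edits precisely.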

Setting up OpenCode with Poolside Laguna

OpenCode is a terminal-based coding tool similar to Aider but with a TUI (terminal user interface). It supports custom providers through its configuration file.

Configuration

Create or edit ~/.config/opencode/config.json:

{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-your-key-here",
      "baseURL": "https://openrouter.ai/api/v1"
    }
  },
  "models": {
    "laguna-m1": {
      "provider": "openrouter",
      "model": "poolside/laguna-m1",
      "maxTokens": 8192
    },
    "laguna-xs2": {
      "provider": "openrouter",
      "model": "poolside/laguna-xs2",
      "maxTokens": 4096
    }
  },
  "defaultModel": "laguna-m1"
}

Running OpenCode

# Start with default model (Laguna M.1)
opencode

# Or specify a model
opencode --model laguna-xs2

OpenCode with local Ollama

{
  "providers": {
    "ollama": {
      "baseURL": "http://localhost:11434/v1"
    }
  },
  "models": {
    "laguna-local": {
      "provider": "ollama",
      "model": "poolside/laguna-xs2",
      "maxTokens": 4096
    }
  },
  "defaultModel": "laguna-local"
}

No API key needed for the Ollama provider. Make sure Ollama is running and the model is pulled before starting OpenCode.

Setting up Claude Code with Poolside Laguna

Claude Code is Anthropic’s terminal-based coding agent. While it’s designed for Claude models, it supports custom providers through environment variables and configuration overrides. Using Laguna with Claude Code requires routing through an OpenAI-compatible proxy.

Method 1: OpenRouter as custom provider

Set the environment variables to redirect Claude Code to OpenRouter:

export ANTHROPIC_BASE_URL="https://openrouter.ai/api/v1"
export ANTHROPIC_API_KEY="sk-or-v1-your-key-here"
export CLAUDE_CODE_MODEL="poolside/laguna-m1"

Then run Claude Code normally:

claude

Note: This approach works because OpenRouter exposes an Anthropic-compatible API endpoint. Some Claude Code features that depend on Anthropic-specific APIs (like prompt caching) may not work with this setup.
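
If you switch between Claude and Laguna regularly, a small shell wrapper keeps the overrides out of your global environment. This is just a convenience sketch; the function name is made up, and the variables are the ones from Method 1:

# ~/.bashrc or ~/.zshrc: hypothetical helper for running Claude Code on Laguna
claude-laguna() {
  ANTHROPIC_BASE_URL="https://openrouter.ai/api/v1" \
  ANTHROPIC_API_KEY="$OPENROUTER_API_KEY" \
  CLAUDE_CODE_MODEL="poolside/laguna-m1" \
  claude "$@"
}

Running claude-laguna starts a session against Laguna M.1, while plain claude keeps using your normal Anthropic setup.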

Method 2: Using Claude Code’s model override

Claude Code supports a --model flag for custom models:

claude --model openrouter/poolside/laguna-m1 \
       --api-key sk-or-v1-your-key-here \
       --provider openrouter

Limitations with Claude Code

Claude Code is optimized for Claude models. When using Laguna:

  • Extended thinking features won’t work (Laguna doesn’t support this API)
  • Some agentic tool-use patterns may be less reliable
  • Prompt caching is handled by OpenRouter, not Anthropic’s native system

For the best experience with Laguna, Aider and OpenCode are better choices. Use Claude Code with Laguna only if it’s already your primary tool and you want to experiment.

Setting up Continue.dev with Poolside Laguna

Continue.dev is a VS Code and JetBrains extension that provides AI-powered code completion, chat, and editing. It’s the best option if you prefer working in an IDE rather than the terminal.

VS Code configuration

Open Continue’s configuration file (.continue/config.json in your home directory or workspace):

{
  "models": [
    {
      "title": "Laguna M.1",
      "provider": "openrouter",
      "model": "poolside/laguna-m1",
      "apiKey": "sk-or-v1-your-key-here"
    },
    {
      "title": "Laguna XS.2 (Free)",
      "provider": "openrouter",
      "model": "poolside/laguna-xs2",
      "apiKey": "sk-or-v1-your-key-here"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Laguna XS.2 Local",
    "provider": "ollama",
    "model": "poolside/laguna-xs2"
  }
}

This configuration gives you:

  • Chat: Laguna M.1 via OpenRouter (for complex questions and refactoring)
  • Tab autocomplete: Laguna XS.2 running locally via Ollama (for fast, free completions)

Tab autocomplete optimization

For tab autocomplete, speed matters more than capability. Laguna XS.2 locally is ideal because:

  • ~40 tokens/second on M2 Macs
  • ~100ms time-to-first-token
  • Zero API latency
  • No cost per completion

Configure the autocomplete settings for optimal performance:

{
  "tabAutocompleteModel": {
    "title": "Laguna XS.2 Local",
    "provider": "ollama",
    "model": "poolside/laguna-xs2"
  },
  "tabAutocompleteOptions": {
    "maxPromptTokens": 2048,
    "debounceDelay": 300,
    "multilineCompletions": "always"
  }
}

JetBrains configuration

The Continue plugin for JetBrains (IntelliJ, PyCharm, WebStorm, etc.) uses the same configuration format. The config file is located at ~/.continue/config.json and is shared between VS Code and JetBrains installations.

Local deployment with Ollama

For all tools, local deployment through Ollama follows the same pattern.

Install and pull the model

# Install Ollama (macOS)
brew install ollama

# Start the server
ollama serve

# Pull Laguna XS.2 (recommended for local use)
ollama pull poolside/laguna-xs2

Verify it’s working

# Quick test
ollama run poolside/laguna-xs2 "Write a Python function to merge two sorted lists"

# Check the API endpoint
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "poolside/laguna-xs2",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
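
If you have jq installed, you can trim the response down to just the model’s reply, which makes quick smoke tests easier to read:

# Same request, printing only the assistant's reply (requires jq)
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "poolside/laguna-xs2",
    "messages": [{"role": "user", "content": "Hello"}]
  }' | jq -r '.choices[0].message.content'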

Performance tuning

For better local performance, set these Ollama environment variables:

# Use more CPU threads (default: half of available)
export OLLAMA_NUM_THREADS=8

# Keep model loaded in memory (default: 5 minutes)
export OLLAMA_KEEP_ALIVE=30m

# GPU layers (for NVIDIA GPUs)
export OLLAMA_NUM_GPU=999  # Use all available GPU layers

On Apple Silicon Macs, Ollama automatically uses the GPU. No additional configuration needed.

Recommended multi-model setup

The most effective setup uses multiple Laguna models for different tasks:

Task                 Model        Provider           Why
Tab autocomplete     Laguna XS.2  Ollama (local)     Fast, free, low latency
Simple edits & chat  Laguna XS.2  OpenRouter (free)  Good enough, zero cost
Complex refactoring  Laguna M.1   OpenRouter (paid)  Higher accuracy on hard tasks
Code review          Laguna M.1   OpenRouter (paid)  Better at spotting subtle bugs

This tiered approach keeps costs low while giving you access to the full Laguna capability range. Most of your daily interactions use the free XS.2 model, with M.1 reserved for tasks that justify the cost.

Example: Aider with tiered models

# ~/.aider.conf.yml — tiered Laguna setup
model: openrouter/poolside/laguna-m1
weak-model: openrouter/poolside/laguna-xs2
api-key:
  openrouter: sk-or-v1-your-key-here
edit-format: diff
auto-commits: true

Aider automatically uses the weak model (XS.2, free) for simple operations and the main model (M.1, paid) for complex edits. This can reduce your API costs by 60-80% compared to using M.1 for everything.

Troubleshooting

“Model not found” errors

Make sure you’re using the correct model identifier:

  • OpenRouter: poolside/laguna-m1 or poolside/laguna-xs2
  • Ollama: poolside/laguna-xs2 (after pulling)

Model names are case-sensitive on some providers.
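
For local setups, the quickest way to check the exact identifier is to ask Ollama what it has registered:

# Print locally available models with their exact names and tags
ollama list

Use the name exactly as printed, including any tag suffix.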

Slow responses via OpenRouter

OpenRouter routes to the fastest available backend, but during peak hours, latency can increase. Solutions:

  • Use local Ollama for latency-sensitive tasks (autocomplete)
  • Set a higher timeout in your tool’s configuration
  • Try the free XS.2 model — it often has less queue congestion than M.1

Ollama out of memory

If Ollama crashes or runs slowly:

  • Check available RAM: free -h (Linux) or Activity Monitor (macOS)
  • Use a smaller quantization: ollama pull poolside/laguna-xs2:q4_0 (smaller than default)
  • Close other memory-intensive applications
  • Set OLLAMA_KEEP_ALIVE=5m to unload the model sooner when idle

Aider “edit format” errors

If Laguna produces malformed edits in Aider:

  • Switch to --edit-format whole (rewrites entire files instead of diffs)
  • This uses more tokens but is more reliable with smaller models
  • M.1 handles diff format well; XS.2 occasionally struggles with complex diffs

Continue.dev autocomplete not working

  • Verify Ollama is running: curl http://localhost:11434/v1/models
  • Check Continue’s output panel in VS Code for error messages
  • Ensure the model name in config matches exactly what Ollama reports
  • Restart the Continue extension after config changes

Performance tips

  1. Use XS.2 locally for autocomplete — The latency difference between local and API is significant for real-time completions. Even on modest hardware, local XS.2 feels snappier.

  2. Cache your prompts — Both Aider and Continue.dev support prompt caching. Enable it to avoid resending the same context repeatedly.

  3. Keep context focused — Laguna XS.2 has a 64K context window. Don’t load your entire repository — use .aiderignore or Continue’s context filters to include only relevant files (see the example .aiderignore after this list).

  4. Use diff format with M.1 — Laguna M.1’s RLCEF training makes it excellent at generating precise diffs. This saves tokens and produces cleaner edits than whole-file rewrites.

  5. Batch similar tasks — If you’re making similar changes across multiple files, describe the pattern once and let the model apply it. Laguna’s code understanding makes it good at pattern-based edits.
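
As promised in tip 3, here’s a minimal .aiderignore. It uses the same pattern syntax as .gitignore; the entries are just examples for a typical web project:

# .aiderignore: keep bulky, low-value paths out of Laguna's context
node_modules/
dist/
build/
vendor/
.venv/
*.lock
*.min.js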

Bottom line

Poolside Laguna integrates smoothly with all major AI coding tools. The recommended setup for most developers:

  • Continue.dev with local Laguna XS.2 for tab autocomplete (fast, free)
  • Aider with Laguna M.1 via OpenRouter for complex edits (accurate, affordable)
  • OpenCode as an alternative to Aider if you prefer a TUI

The free XS.2 model on OpenRouter makes it easy to start experimenting with zero cost. Once you see the quality, upgrading to M.1 for complex tasks is a natural next step.

For more on Poolside’s models, see What Is Poolside AI?. For detailed API integration, check the Poolside Laguna API guide.


FAQ

Do I need separate API keys for each tool?

No. A single OpenRouter API key works with Aider, OpenCode, Claude Code, and Continue.dev. OpenRouter provides a unified API that all these tools can connect to. For local deployment via Ollama, no API key is needed at all. Generate one key at openrouter.ai and use it everywhere.

Is Laguna XS.2 on OpenRouter really free?

Yes. Poolside offers Laguna XS.2 at zero cost on OpenRouter — no per-token charges for input or output. There are rate limits (typically 10-20 requests per minute for free-tier users), but for individual developer use, these limits are generous. You don’t even need a credit card on your OpenRouter account to use it. The model weights are also open-source (Apache 2.0), so you can run it locally for truly unlimited free use.

Which tool works best with Laguna?

Aider is the best overall choice. It has mature support for custom models, handles the diff edit format well (which plays to Laguna’s strengths), and its weak-model feature lets you use XS.2 for cheap tasks and M.1 for complex ones automatically. Continue.dev is the best choice if you prefer IDE integration over terminal workflows. OpenCode is a solid alternative to Aider with a nicer TUI. Claude Code works but isn’t optimized for non-Claude models.

Can I use Laguna M.1 locally?

Laguna M.1 (225B parameters) requires approximately 120GB of VRAM for inference, which puts it out of reach for most consumer hardware. You’d need 2x A100 80GB GPUs or equivalent. For local deployment, Laguna XS.2 is the practical choice — it runs on 6-8GB of RAM when quantized. Use M.1 through the OpenRouter API and XS.2 locally for the best of both worlds.

How does Laguna compare to Claude Sonnet in Aider?

Laguna M.1 and Claude Sonnet 4 are in a similar capability tier for coding tasks. Laguna M.1 is cheaper ($2/$8 per million tokens vs Sonnet’s $3/$15) and has the RLCEF advantage for code correctness. Sonnet has better general reasoning and a more polished agentic experience. For pure coding in Aider, Laguna M.1 is a strong and more affordable alternative. For tasks that mix coding with complex reasoning or planning, Sonnet may still have an edge.

Can I switch between models mid-session in Aider?

Yes. Aider supports the /model command to switch models during a session. Type /model openrouter/poolside/laguna-xs2 to switch to XS.2 for a quick task, then /model openrouter/poolside/laguna-m1 to switch back for complex work. Your conversation history is preserved across model switches. This is a great way to manage costs — use XS.2 for exploration and M.1 for final implementations.
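
In practice, a session switch looks something like this (the prompts are illustrative):

> /model openrouter/poolside/laguna-xs2
> sketch a quick helper to parse the config file
> /model openrouter/poolside/laguna-m1
> now refactor the parser to handle nested sections and add tests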

Related: What Is Poolside AI? · Poolside Laguna API Guide · Aider Complete Guide · OpenCode Complete Guide