What is Poolside AI? Laguna Models, RLCEF, and the $3B Coding Startup (2026)
Poolside AI is a coding-focused AI company that builds foundation models exclusively for software development. Not general-purpose chatbots that happen to write code. Not fine-tuned versions of existing models. Poolside trains from scratch on code, using a technique called Reinforcement Learning from Code Execution Feedback (RLCEF) that actually runs the code the model generates and feeds the results back into training.
The company was founded in 2023 by Jason Warner (CEO, former CTO of GitHub) and Eiso Kant (CTO). It reached a $3 billion valuation with backing from AWS, making it one of the most well-funded AI startups focused purely on developer tools. Their models — Laguna M.1 and Laguna XS.2 — are available for free on OpenRouter right now, and the smaller model ships under Apache 2.0.
Here is everything you need to know about Poolside: the models, the training approach, the products, and where it fits in the AI coding landscape.
The founding team
Jason Warner spent years as GitHub’s CTO, overseeing the platform during its acquisition by Microsoft and the launch of GitHub Copilot. He saw firsthand how general-purpose language models were being retrofitted for coding tasks and decided the approach was fundamentally limited. Code is not natural language. It has strict syntax, deterministic execution, and verifiable correctness. A model built from the ground up for code could exploit all of these properties.
Eiso Kant, the CTO, brings deep experience in developer tooling and machine learning infrastructure. Together, they assembled a team of researchers and engineers focused on a single thesis: coding-specific foundation models trained on code execution feedback will outperform general-purpose models at software development tasks.
The $3 billion valuation, backed by AWS among other investors, gives Poolside the compute budget to train large models from scratch rather than fine-tuning existing ones. AWS backing also means tight integration with Amazon Bedrock, where Poolside models are available as a managed service.
RLCEF: Reinforcement Learning from Code Execution Feedback
This is Poolside’s core technical differentiator. Most AI coding models are trained on static code — they learn patterns from repositories, documentation, and code snippets without ever running anything. RLCEF changes that.
During training, the model generates code. That code is actually executed in a sandboxed environment. The execution results — whether the code runs, whether tests pass, whether the output matches expectations — are fed back as reward signals. The model learns not just what code looks like, but what code does.
This is fundamentally different from RLHF (Reinforcement Learning from Human Feedback), where human annotators rate outputs. Human feedback is subjective, expensive, and slow. Code execution feedback is objective, cheap, and fast. A function either returns the correct value or it does not. A test suite either passes or it does not.
The practical impact: Poolside models tend to generate code that actually works on the first try more often than models trained purely on static code. They are better at understanding edge cases, handling error conditions, and producing code that integrates correctly with existing codebases. The model has been trained on millions of execution cycles, learning from its own mistakes in a way that static training cannot replicate.
RLCEF also helps with debugging. Because the model has seen the relationship between buggy code and execution errors during training, it develops stronger intuitions about what causes specific failure modes and how to fix them.
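Poolside has not published RLCEF's implementation details, but the core loop it describes — sample code, run it in a sandbox, convert the execution result into a scalar reward — can be sketched in a few lines. Everything below is illustrative: the "sandbox" is just a subprocess, and a real training pipeline would feed the reward into a policy-gradient update rather than print it.

```python
import subprocess
import sys
import tempfile

def execution_reward(candidate_code: str, test_code: str, timeout: float = 5.0) -> float:
    """Run model-generated code plus a test harness in a subprocess.

    Returns 1.0 if the tests pass, 0.0 if they fail, error out, or hang.
    Real RLCEF training would fold this scalar into the model's update step;
    the sandboxing here (a bare subprocess) is deliberately simplistic.
    """
    program = candidate_code + "\n" + test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0

# Two hypothetical model samples for the prompt "write is_even(n)":
good = "def is_even(n):\n    return n % 2 == 0"
bad = "def is_even(n):\n    return n % 2 == 1"   # inverted logic
tests = "assert is_even(4)\nassert not is_even(7)"

print(execution_reward(good, tests))  # 1.0 -- tests pass, positive reward
print(execution_reward(bad, tests))   # 0.0 -- tests fail, no reward
```

The key property the sketch captures is the one the article stresses: the signal is binary and objective. No human rater is needed to decide whether `bad` is wrong — the interpreter decides.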
The Laguna model family
Poolside’s models are called Laguna. There are currently two sizes available:
Laguna M.1 — the flagship
- Total parameters: 225B
- Active parameters: 23B (Mixture-of-Experts)
- Architecture: MoE with sparse routing
- Availability: Free on OpenRouter (limited time), Amazon Bedrock, direct API
- License: Proprietary
Laguna M.1 is Poolside’s largest and most capable model. The 225B total / 23B active MoE architecture means it has the knowledge capacity of a 225B model but roughly the per-token compute cost of a 23B model. Only a small subset of expert networks activates for each token, keeping latency and compute manageable — though all 225B weights still have to be held in memory at serving time.
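Poolside has not published Laguna's router design, but the standard mechanism behind a "23B active of 225B total" split is top-k gating: a router scores every expert, and only the k highest-scoring ones run for the current token. A generic sketch (the expert count of 64 and k of 8 are made-up numbers, not Laguna's real configuration):

```python
import math
import random

def top_k_gate(logits: list[float], k: int) -> list[tuple[int, float]]:
    """Pick the k highest-scoring experts and softmax-normalize their weights.

    In a sparse MoE layer only these k experts run for the current token,
    so per-token compute scales with k, not with the total expert count.
    """
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    m = max(logits[i] for i in top)          # subtract max for numerical stability
    exps = [math.exp(logits[i] - m) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

# Hypothetical configuration: 64 experts, 8 active per token.
random.seed(0)
router_logits = [random.gauss(0, 1) for _ in range(64)]
chosen = top_k_gate(router_logits, k=8)

assert len(chosen) == 8                                  # only 8 experts run
assert abs(sum(w for _, w in chosen) - 1.0) < 1e-9       # weights sum to 1
```

The selected experts' outputs are then combined using these weights; the other experts are skipped entirely, which is where the 23B-vs-225B compute saving comes from.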
M.1 targets complex coding tasks: multi-file refactoring, architecture decisions, debugging intricate issues across large codebases, and generating production-quality code with proper error handling and testing. It is currently free on OpenRouter for a limited time, which makes it worth trying before the pricing kicks in.
For a deep dive into M.1’s capabilities and benchmarks, see our Laguna M.1 complete guide.
Laguna XS.2 — the lightweight specialist
- Total parameters: 33B
- Active parameters: 3B (Mixture-of-Experts)
- Architecture: MoE with sparse routing
- Availability: Free on OpenRouter, Amazon Bedrock, direct API
- License: Apache 2.0
Laguna XS.2 is the model that makes Poolside accessible to everyone. At 33B total / 3B active parameters, it runs on consumer hardware. The Apache 2.0 license means you can download the weights, run it locally, fine-tune it, and deploy it commercially without restrictions.
Despite its small active parameter count, XS.2 punches well above its weight on coding tasks thanks to RLCEF training. It is particularly strong at code completion, function generation, and targeted bug fixes — the bread-and-butter tasks that developers encounter hundreds of times per day.
For setup instructions and detailed specs, see our Laguna XS.2 complete guide.
Poolside products
Beyond the raw models, Poolside ships two developer-facing products:
pool — terminal-based coding agent
pool is Poolside’s CLI coding agent. It runs in your terminal, understands your project context, and can make changes across multiple files. Think of it as a coding assistant that lives where you already work — no IDE plugins, no browser tabs, just your terminal.
pool uses Laguna models under the hood and is designed for the kind of iterative coding workflow where you describe what you want, review the changes, and refine. It handles file creation, modification, and deletion, and understands project structure well enough to make coherent multi-file changes.
Shimmer — cloud development experience
Shimmer is Poolside’s cloud-based development environment for building web apps, APIs, and CLIs. It provides a complete development experience in the browser, powered by Laguna models. You describe what you want to build, and Shimmer generates the project structure, writes the code, and provides a live preview.
Shimmer targets the rapid prototyping use case — getting from idea to working application as fast as possible. It handles the scaffolding, boilerplate, and configuration that typically slow down the early stages of a project.
How Poolside compares to the competition
The AI coding space is crowded. Here is where Poolside fits:
vs. GitHub Copilot / Cursor / Windsurf: These are IDE-integrated tools that use general-purpose models (GPT, Claude, Gemini) as backends. Poolside builds its own coding-specific models. The difference is in the foundation — Poolside’s models are trained from scratch on code with RLCEF, not adapted from general-purpose language models.
vs. Claude / GPT / Gemini for coding: These are frontier general-purpose models that are very good at coding. But they are also trained on everything else — creative writing, math, science, conversation. Poolside argues that a model trained exclusively on code, with execution feedback, will be more efficient and more reliable for coding tasks specifically.
vs. DeepSeek Coder / Qwen Coder / CodeStral: These are other coding-focused models, but they are typically fine-tuned from general-purpose base models or trained on static code without execution feedback. Poolside’s RLCEF approach is a genuine training-methodology differentiator.
vs. Devin / OpenHands / SWE-Agent: These are AI coding agents that orchestrate existing models. Poolside builds the underlying models themselves. pool (their agent) uses Laguna models, but the core value proposition is the model, not the agent wrapper.
For a broader comparison of coding agents, see our guide on how to choose an AI coding agent.
Amazon Bedrock integration
Poolside models are available on Amazon Bedrock, AWS’s managed AI service. This matters for enterprise teams that need:
- Compliance: Data stays within your AWS account and region
- Integration: Use Poolside models through the same Bedrock API you use for Claude, Llama, and other models
- Scaling: AWS handles the infrastructure, auto-scaling, and availability
- Security: IAM-based access control, VPC endpoints, encryption at rest and in transit
The Bedrock integration makes Poolside a viable option for enterprise teams that cannot send code to third-party APIs. Your code never leaves your AWS environment.
Pricing and access
As of May 2026:
- Laguna XS.2 on OpenRouter: Free
- Laguna M.1 on OpenRouter: Free (limited time)
- Laguna XS.2 weights: Free download, Apache 2.0
- Amazon Bedrock: Standard Bedrock pricing (pay per token)
- Direct API: Contact Poolside for pricing
- pool (CLI agent): Available through Poolside
- Shimmer: Available through Poolside
The free OpenRouter access is the easiest way to try both models right now. For local deployment of XS.2, you can download the weights from HuggingFace and run them with vLLM or other inference engines.
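OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so calling a Laguna model takes one HTTP POST. The sketch below builds such a request with only the standard library; the model ID `poolside/laguna-xs-2` is a placeholder — check OpenRouter's model list for the exact identifier before sending.

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request for OpenRouter.

    The model ID passed in below is a placeholder, not a confirmed
    identifier -- consult OpenRouter's listing for the real Laguna IDs.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("YOUR_KEY", "poolside/laguna-xs-2", "Write a binary search in Go.")
# urllib.request.urlopen(req) would send it; the response follows the standard
# OpenAI chat-completions schema (choices[0].message.content holds the code).
```

The same payload shape works against a local vLLM server for self-hosted XS.2, since vLLM also serves an OpenAI-compatible endpoint — only the base URL changes.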
Who should use Poolside
Individual developers who want a free, high-quality coding model should try Laguna XS.2 on OpenRouter or run it locally. The Apache 2.0 license and small active parameter count make it one of the most accessible coding models available.
Teams evaluating AI coding tools should benchmark Laguna M.1 against their current setup. The free OpenRouter access makes this zero-risk. If M.1 outperforms your current model on your specific codebase and tasks, the Bedrock integration provides a production-ready deployment path.
Enterprises with data sovereignty requirements should look at the Bedrock integration or local deployment of XS.2. Both options keep code within your infrastructure.
Open-source enthusiasts should pay attention to XS.2. An Apache 2.0 coding model trained with RLCEF is a meaningful contribution to the open-source AI ecosystem. Fine-tuning XS.2 on your specific codebase or domain could yield a highly specialized coding assistant.
For comparisons with other coding models and agents, see our Aider vs Claude Code vs Codex comparison and our best Ollama models for coding roundup.
What to watch
Poolside is still early. The $3 billion valuation and AWS backing give them runway, but the AI coding space moves fast. Key things to watch:
- Benchmark results: As independent benchmarks begin to include Laguna models, we will see how RLCEF training translates into real-world performance against frontier general-purpose models.
- M.1 pricing: The free OpenRouter access is temporary. The long-term pricing will determine whether M.1 is competitive with Claude, GPT, and Gemini for coding tasks.
- Larger models: Poolside has the funding to train larger models. A Laguna L or XL model could push into frontier territory.
- pool and Shimmer maturity: The products are new. How they evolve will determine whether Poolside becomes a model provider or a full-stack developer tools company.
- Community adoption of XS.2: Apache 2.0 models live or die by community adoption. If developers start fine-tuning and deploying XS.2, it could become a standard building block for coding tools.
FAQ
Is Poolside AI free to use?
Both Laguna models are currently free on OpenRouter. XS.2 is free permanently on OpenRouter and also available as a free Apache 2.0 download for local use. M.1 is free on OpenRouter for a limited time — Poolside has not announced when paid pricing will start. Amazon Bedrock access follows standard AWS pay-per-token pricing.
What makes Poolside different from other AI coding tools?
Poolside builds coding-specific foundation models trained from scratch with RLCEF (Reinforcement Learning from Code Execution Feedback). Most competitors either use general-purpose models or fine-tune existing models on code. Poolside’s models have never been trained on non-code tasks, and they learn from actually executing the code they generate during training. This is a fundamentally different approach.
Can I run Poolside models locally?
Yes, but only Laguna XS.2. It is released under Apache 2.0 and the weights are available on HuggingFace. Note that although only 3B parameters are active per token, all 33B must be loaded into memory — expect roughly 15 GB for the weights alone at 4-bit quantization. A Mac with 24 GB or more of unified memory runs it comfortably, 16 GB is tight, and GPUs with less VRAM need CPU offloading. Laguna M.1 (225B) is too large for most local setups and is only available through API access.
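The hardware requirement follows from a simple rule of thumb: weight memory ≈ total parameters × bytes per parameter, and for an MoE model it is the total count, not the active count, that must fit (the active count only reduces compute per token). A quick back-of-envelope check:

```python
def weight_memory_gb(total_params_billions: float, bits_per_param: int) -> float:
    """Approximate memory for model weights alone (excludes KV cache and runtime)."""
    bytes_total = total_params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 2**30

# Laguna XS.2: all 33B parameters resident, even though only 3B are active.
for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_memory_gb(33, bits):.1f} GB")
# → 16-bit: 61.5 GB, 8-bit: 30.7 GB, 4-bit: 15.4 GB
```

So a 4-bit quantization of XS.2 lands around 15 GB of weights — feasible on well-equipped consumer hardware, while M.1's 225B total parameters put it far out of local reach at any common quantization level.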
Who founded Poolside AI?
Jason Warner (CEO) and Eiso Kant (CTO) founded Poolside in 2023. Warner was previously CTO of GitHub, where he oversaw the platform during the Microsoft acquisition and the early development of GitHub Copilot. The company reached a $3 billion valuation with backing from AWS and other investors.
Does Poolside work with my IDE?
Poolside’s pool agent runs in the terminal, so it works alongside any IDE. Shimmer is a standalone cloud development environment. For IDE-integrated coding assistance, you can use Laguna models through OpenRouter with tools like Continue, Aider, or other AI coding assistants that support custom API endpoints. The models are also available on Amazon Bedrock for integration into custom toolchains.
What is the difference between Laguna M.1 and XS.2?
M.1 is the flagship: 225B total parameters, 23B active, designed for complex multi-file coding tasks. XS.2 is the lightweight model: 33B total, 3B active, Apache 2.0 licensed, runs locally. M.1 is more capable but requires API access. XS.2 is less powerful but fully open and deployable anywhere. For most everyday coding tasks — completions, function generation, bug fixes — XS.2 is sufficient. For complex architecture work, large refactors, or difficult debugging, M.1 is the better choice.