
AI Dev Weekly #3: Claude Code Goes Auto, Cursor's Chinese Secret, and GitHub Wants Your Data


AI Dev Weekly is a Thursday series where I cover the week’s most important AI developer news — with my take as someone who actually uses these tools daily.

Three stories this week, and they all share a theme: the companies building your coding tools are making big decisions about trust — and not all of them are being upfront about it.

Claude Code gets auto mode, Channels, and $2.5B in revenue

Anthropic dropped three features on Monday. The headline is auto mode, a middle ground between the default "approve every file write" behavior and the --dangerously-skip-permissions flag that lets Claude do whatever it wants.

Auto mode uses an AI safety classifier that evaluates every tool call in real time. Routine actions like writing files and running tests get auto-approved. Destructive operations — mass file deletion, data exfiltration, malicious code execution — get blocked. If Claude keeps trying blocked actions, it escalates to a human prompt. It also screens for prompt injection attacks, which is relevant given that Cowork had a data exfiltration vulnerability two days after launch in January.
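The approve/block/escalate flow described above can be sketched in a few lines. To be clear, this is an illustrative toy, not Anthropic's implementation: the tool names, blocked patterns, and escalation threshold are all hypothetical, since Anthropic hasn't published how the real classifier decides.

```python
# Hypothetical sketch of the auto-mode decision flow: approve routine
# actions, block destructive ones, escalate to a human after repeated
# blocked attempts. All names and thresholds here are made up.

SAFE_TOOLS = {"write_file", "run_tests", "read_file"}
BLOCKED_PATTERNS = ("rm -rf", "curl http", "chmod 777")


class AutoModeGate:
    def __init__(self, escalate_after: int = 3):
        self.escalate_after = escalate_after  # blocked tries before asking a human
        self.blocked_count = 0

    def evaluate(self, tool: str, command: str) -> str:
        """Return 'approve', 'block', or 'escalate' for one tool call."""
        if any(p in command for p in BLOCKED_PATTERNS):
            self.blocked_count += 1
            if self.blocked_count >= self.escalate_after:
                return "escalate"  # Claude keeps trying: hand off to the human
            return "block"
        if tool in SAFE_TOOLS:
            return "approve"       # routine action, auto-approved
        return "escalate"          # unknown tool: ask the human
```

The real system presumably uses an ML classifier rather than string matching, but the three-tier shape (auto-approve, block, escalate on repetition) is the part that matters for the user experience.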

The second feature is Claude Code Channels — you can now control Claude Code through Discord and Telegram. This is Anthropic’s direct response to OpenClaw, the open-source project that hit 100K+ GitHub stars by letting people run AI agents through chat apps. Anthropic sent OpenClaw’s creator a cease-and-desist over the original name “Clawd,” and he ended up joining OpenAI. Now Anthropic is building the managed alternative. Channels currently only supports Discord and Telegram, while OpenClaw covers five platforms including iMessage, Slack, and WhatsApp.

The third feature: computer use for Cowork, giving Claude direct keyboard-and-mouse control of macOS desktops. It prioritizes API connectors when available and falls back to screen interaction when they’re not.
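That "connector first, screen second" priority is a classic fallback pattern. Here's a minimal sketch of the idea; the registry and function names are my own invention, not Anthropic's API.

```python
# Illustrative fallback pattern: prefer a structured API connector when
# one exists for the target app, otherwise drop down to simulated
# keyboard-and-mouse control. Names are hypothetical.

def perform_action(app: str, action: str, connectors: dict) -> str:
    """Route an action through an API connector if available."""
    connector = connectors.get(app)
    if connector is not None:
        return connector(action)                   # structured API call
    return f"screen-control: {action} in {app}"    # GUI fallback

# Example registry: only Calendar has a connector wired up.
connectors = {"Calendar": lambda action: f"api: {action}"}
```

The design tradeoff is the usual one: connectors are faster and more reliable, while screen control is universal but brittle, so you want the structured path whenever it exists.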

The revenue number is the quiet bombshell: Claude Code hit $2.5 billion in annualized revenue, up from $1 billion in early January. That’s 2.5x growth in under three months.

My take: Auto mode is what Claude Code should have shipped with. The binary choice between “interrupt me for everything” and “YOLO mode” was the biggest friction point for long-running tasks. The AI classifier approach is smart — but it’s also a black box. Anthropic hasn’t published what the classifier allows or blocks, which means you’re trusting an undisclosed ML model to decide what’s safe to run on your machine. For side projects, fine. For production codebases, I’d want to see the rules. For a deeper look at how Claude Code compares to the competition, see my Claude Code vs Cursor comparison.

Channels is interesting strategically. OpenClaw proved developers want chat-based agent control. Anthropic’s response is “we’ll build it ourselves, with better security.” Whether developers choose the managed option over the open-source one will depend on how fast Anthropic adds platform support.

Cursor launches Composer 2 — then gets caught hiding its origins

Cursor shipped Composer 2 this week with impressive benchmarks on SWE-bench Multilingual and Terminal-Bench. The pitch: frontier-level coding intelligence with 200K token context, optimized for multi-file editing and long multi-step tasks.

What Cursor didn’t mention in the announcement: Composer 2 is built on Moonshot AI’s Kimi K2.5, a Chinese open-source model.

A developer named Fynn intercepted Cursor’s API traffic and found the model identifier kimi-k2p5-rl-0317-s515-fast. The internet did the rest. Elon Musk commented. Then Cursor co-founder Aman Sanger confirmed it — Kimi K2.5 was selected after an evaluation, followed by additional pre-training and heavy reinforcement learning. He called the omission from the blog post “an error.”

Kimi K2.5 is a 1-trillion-parameter Mixture-of-Experts model with 32 billion active parameters per token, released under a modified MIT license by Moonshot AI (backed by Alibaba and HongShan). Cursor's commercial use of the model is authorized through Fireworks AI.

My take: The technical choice is defensible. Fine-tuning an open-source model and adding your own RL on top is exactly how you build a competitive coding model without training from scratch. DeepSeek proved that open-source base models can compete with closed ones. Cursor picking the best available foundation is smart engineering.

The transparency failure is the problem. When you’re a $29 billion company and developers trust you with their codebases, you don’t “forget” to mention the base model. Especially when it’s from a Chinese company — not because that’s inherently bad, but because developers deserve to make informed decisions about their toolchain. If you’re weighing your options, my GitHub Copilot vs Cursor comparison covers the broader tradeoffs.

GitHub will train on your Copilot data by default

GitHub announced that starting April 24, 2026, interactions with Copilot Free, Pro, and Pro+ will be used to train AI models. This is opt-out, not opt-in. If you do nothing, your prompts and code interactions become training data.

To opt out: go to GitHub Settings → Copilot → Features and, under Privacy, disable "Allow GitHub to use my data for AI model training." Enterprise and Business plan users are not affected; their data was already excluded.

My take: This was inevitable. GitHub has been watching Claude Code and Cursor grow by training on real developer interactions while Copilot relied mainly on public code. The quality gap showed. Moving to opt-out instead of opt-in is the aggressive play — most developers won’t change the default, which means GitHub gets a massive training dataset overnight.

If you’re on Copilot Free or Pro, go change that setting now if you care. If you don’t care, at least know it’s happening. And if this is the push you needed to evaluate alternatives, my how to replace GitHub Copilot guide covers the options.

Quick hits

VS Code 1.113 shipped — Two features worth knowing about. First, a Thinking Effort selector that lets you dial AI reasoning intensity up or down per request — useful when you want a quick answer without burning tokens on deep reasoning. Second, nested subagents: subagents can now call other subagents, enabling multi-step AI workflows that chain together. Both features work with shared MCP server access.
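The nested-subagent idea is easier to see in code than in prose. This toy shows the chaining pattern only; it doesn't use VS Code's actual agent API, and every name here is illustrative.

```python
# Toy illustration of nested subagents: an agent delegates tasks to
# child agents, which may delegate further. Purely the chaining
# pattern, not VS Code's real API.

class Agent:
    def __init__(self, name, handler=None, children=None):
        self.name = name
        self.handler = handler or (lambda task: task)
        self.children = children or {}

    def run(self, task: str) -> str:
        # Delegate to a child subagent whose prefix matches the task.
        for prefix, child in self.children.items():
            if task.startswith(prefix):
                return child.run(task[len(prefix):].strip())
        return self.handler(task)

# A reviewer subagent nested under a coder subagent nested under root.
reviewer = Agent("reviewer", handler=lambda t: f"reviewed: {t}")
coder = Agent("coder", handler=lambda t: f"coded: {t}",
              children={"review": reviewer})
root = Agent("root", children={"code": coder})
```

Calling root.run("code review the diff") walks two levels down before the reviewer handles it, which is the multi-step chaining the release notes describe.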

Apple confirmed WWDC 2026 for June 8-12 — The big tease: Core AI, a new framework replacing Core ML, designed for running large language models and diffusion models directly on-device. Combined with the Claude Agent SDK for Xcode announced two weeks ago, iOS developers are about to get a lot more AI tooling. Also expect a major Siri revamp with iOS 27.

OpenClaw v2026.3.24 released — New version adds improved OpenAI API compatibility, sub-agents with OpenWebUI, and native Slack and Teams integration. The open-source agent framework keeps shipping faster than the commercial alternatives can copy it.

The pattern this week

Trust is the new battleground. Anthropic is asking you to trust a black-box classifier with your file system. Cursor asked you to trust a model without telling you where it came from. GitHub is asking you to trust that opt-out is good enough.

The developers who’ll navigate this well aren’t the ones who blindly trust or blindly reject. They’re the ones who read the settings pages, check the API traffic, and make informed choices. This week proved that even the tools you pay for aren’t always transparent about what’s happening under the hood.

AI Dev Weekly drops every Thursday. Subscribe on the homepage so you don’t miss it.

Related: AI Dev Weekly #2: Garry Tan’s ‘God Mode’, Cursor Composer 1.5, and Anthropic Finds Firefox Bugs | AI Dev Weekly #1: Claude Code Takes the Crown, Musk Raids Cursor