AI Dev Weekly #4: Anthropic Leaks Everything, OpenAI Raises $122B, and Qwen 3.6 Drops Free
AI Dev Weekly is a Thursday series where I cover the week’s most important AI developer news — with my take as someone who actually uses these tools daily.
Anthropic had a rough week. Two separate leaks, a $122 billion competitor, and a Chinese model that just went free. Let’s get into it.
Anthropic leaks Claude Code’s entire source code to npm
On Monday, security researcher Chaofan Shou discovered that Claude Code’s npm package contained a 60MB source map file that mapped the minified production code back to its original TypeScript source. All 512,000 lines of it. Across 1,906 internal files.
This wasn’t a hack. The Bun runtime that Claude Code uses generates source maps by default, and someone forgot to strip them before publishing version 2.1.88 to the public npm registry. By the time Anthropic pulled the package, developers had already mirrored the code to GitHub and started picking it apart.
What they found: internal APIs, telemetry systems, encryption logic, unreleased agent features, and system prompts. The code revealed how Claude Code handles tool execution, permission management, and the auto mode safety classifier that shipped just last week.
This is Anthropic's second major security incident in under a year. The first was the Cowork data exfiltration vulnerability in January.
My take: The irony is thick. The company that positions itself as the safety-first AI lab keeps shipping code with basic packaging errors. A .npmignore file or a build step that strips source maps would have prevented this entirely. The leaked code itself is well-written TypeScript — there’s nothing embarrassing about the engineering. The embarrassment is that it happened at all, and that it happened twice.
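For a sense of how small the missing safeguard is, here's a minimal Python sketch of a source-map scrub you could wire into a prepublish step. The `dist/` layout and the idea of calling it from an npm `prepublishOnly` script are assumptions for illustration, not details of Anthropic's actual build.

```python
"""Delete every *.map file from a build output directory before publishing.

Hypothetical prepublish step; the dist/ path is an assumed build layout.
"""
from pathlib import Path


def strip_source_maps(dist_dir: str) -> list[str]:
    """Remove all source map files under dist_dir and return their paths."""
    removed = []
    for map_file in Path(dist_dir).rglob("*.map"):
        map_file.unlink()
        removed.append(str(map_file))
    return sorted(removed)
```

An `.npmignore` entry for `*.map` (or an explicit `files` allowlist in `package.json`) achieves the same thing declaratively; either one would have kept the maps out of the registry.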
For developers using Claude Code: nothing changes practically. No customer data was exposed, and the tool still works the same way. But if you’re evaluating AI coding tools for enterprise use, “accidentally published our entire codebase to npm” is a hard thing to explain to your security team.
Also leaked: Claude “Mythos” — Anthropic’s unreleased tier above Opus
The source code leak came days after a separate incident. On March 26, security researchers found nearly 3,000 unpublished files — including draft blog posts and internal memos — publicly accessible due to a misconfigured content management system.
The files revealed a model internally called “Claude Mythos” (also codenamed “Capybara”). According to the leaked draft blog post, Mythos is “larger and more intelligent than our Opus models — which were, until now, our most powerful.” It’s currently in early access testing with cybersecurity partners.
Anthropic confirmed Mythos is real and called it a “step change” in capability. No public release date has been set.
My take: Two leaks in one week from the safety-focused AI company. The Mythos leak is actually more significant than the source code one — it confirms Anthropic has a model tier beyond Opus that they haven’t announced. For developers planning around Claude’s model lineup (Haiku → Sonnet → Opus), knowing there’s a “Mythos” tier coming changes the calculus. Especially if you’re budgeting API costs — a model above Opus won’t be cheap.
OpenAI closes $122 billion round at $852 billion valuation
OpenAI announced on Monday that it closed the largest private funding round in history: $122 billion in committed capital, pushing its post-money valuation to $852 billion. The round was led by Amazon, NVIDIA, and SoftBank, with Microsoft continuing its participation.
The company now has 900 million weekly active users and 50 million paid subscribers. Revenue has crossed $2 billion per month. In the same announcement, OpenAI confirmed it shut down Sora, its video generation tool, citing unsustainable inference costs — reportedly $15 million per day against $2.1 million in lifetime revenue.
OpenAI described its strategy as building an “AI Super App” that combines ChatGPT, Codex, and other products into a unified platform.
My take: The Sora shutdown is the real story here. OpenAI killed a product that was burning $15M/day because it couldn’t monetize it. That’s a company making hard commercial decisions, not a research lab chasing cool demos. The “AI Super App” framing signals that OpenAI sees its future as a platform, not a model provider.
For developers: Codex keeps getting better. GPT-5.4 became the new core model in March, GPT-5.4 Mini now handles routing for cheaper tasks, and the CLI got parallel agents, worktrees, and skills. If you’re on ChatGPT Plus, Codex is included and it’s genuinely competitive with Claude Code now. I’ve been running both headless on a VPS and the output quality is closer than you’d expect.
Qwen 3.6 Plus drops for free on OpenRouter
Alibaba released Qwen 3.6 Plus Preview on March 30, and it’s available for free on OpenRouter. Zero cost for input and output tokens. The model features a 1 million token context window and what Alibaba calls “improved efficiency, stronger reasoning, and more reliable agentic behavior” compared to the 3.5 series.
This comes alongside Qwen 3.5 Omni, a multimodal model that processes text, images, audio, and video natively. Researchers noted it learned to write code from spoken instructions and video input without being explicitly trained to do so.
My take: The Chinese open-source models keep getting more aggressive on pricing. Qwen 3.6 Plus for free is a direct challenge to every paid API. I’ve been testing the Qwen 3.5 series this week for a project, and the Flash model at $0.065/$0.26 per million tokens is absurdly cheap — we ran 20 minutes of agentic coding for $0.018. The Plus model at $0.26/$1.56 is still cheaper than Claude Sonnet by a wide margin.
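To make the pricing concrete, here's the arithmetic behind numbers like that $0.018 session. The token counts below are illustrative guesses, not the actual session's usage; the formula is just per-million-token pricing applied to both directions.

```python
def api_cost(input_tokens: int, output_tokens: int,
             in_price: float, out_price: float) -> float:
    """Cost in dollars, with prices quoted per million tokens."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price


# Qwen 3.5 Flash pricing from above: $0.065 in / $0.26 out per 1M tokens.
# Token counts are hypothetical, roughly sized to a 20-minute agentic run.
print(round(api_cost(200_000, 19_000, 0.065, 0.26), 3))  # → 0.018
```

At those rates you'd need to push well over ten million input tokens before crossing a single dollar, which is why the "absurdly cheap" framing holds up.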
The 1M context window on the free model is the real hook. If you’re building RAG pipelines or processing large codebases, there’s no reason not to try it. The quality won’t match Claude or GPT on complex reasoning, but for many tasks it’s good enough — and free is hard to argue with.
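If you want to kick the tires, OpenRouter exposes an OpenAI-compatible chat completions endpoint, so a call is a few lines of stdlib Python. The model slug `qwen/qwen3.6-plus:free` is my assumption about how the free preview would be listed; check OpenRouter's model page for the real identifier before using this.

```python
"""Minimal request builder for OpenRouter's chat completions endpoint.

The model slug is an assumed name for the free Qwen 3.6 Plus preview;
verify it against OpenRouter's model catalog.
"""
import json
import urllib.request


def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    payload = {
        "model": "qwen/qwen3.6-plus:free",  # assumed slug, verify on OpenRouter
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


# To actually send it (network call, needs a real key):
# resp = urllib.request.urlopen(build_request(API_KEY, "Summarize this codebase"))
```

Since the endpoint speaks the OpenAI wire format, any OpenAI-compatible SDK works too by pointing its base URL at `https://openrouter.ai/api/v1`.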
Quick hits
Gemini 3.1 Pro is rolling out in Gemini CLI. Google’s latest model ties GPT-5.4 on the Artificial Analysis Intelligence Index at roughly one-third the API cost. It features dynamic thinking with a new thinking_level parameter (low/medium/high/max) and a 1M token context window.
Google launched Gemini API Docs MCP and Agent Skills — two tools that give AI coding agents real-time access to current API documentation, fixing the “generates outdated code” problem caused by training data cutoffs.
GitHub Copilot added Gemini 3.1 Pro across VS Code, JetBrains, and Xcode. You can now choose between GPT, Claude, and Gemini models within Copilot.