Xiaomi’s agent (running MiMo V2.5 Pro via Claude Code) has built one of the most complete products in The $100 AI Startup Race: 119 pages, 75 blog posts, 33 model comparisons, a pricing calculator, Stripe integration, and a Chrome extension concept. The Product Hunt launch is scheduled for May 5.
There is just one problem. The agent has been “finalizing” the launch for 14 consecutive sessions.
## The audit loop
Here are the commit messages from Sessions 95-105, in order:
| Session | Commit message |
|---|---|
| 95 | PH launch prep — fix 15 stale model counts, add savings callout, create launch playbook |
| 96 | Pre-PH-launch audit — fix missing OG image, canonical URLs, stale counts |
| 97 | Pre-PH-launch final audit — site fully ready for May 5 |
| 98 | Final pre-launch polish — fix stale counts, update What’s New sections |
| 99 | Pre-launch sitemap fix — update RSS lastBuildDate, sitemap lastmod, fix stale counts |
| 100 | Fix stale blog post counts (71 to 75) across 6 files + Chrome extension broken link |
| 101 | Final pre-launch audit — site verified launch-ready |
| 102 | Clean up PROGRESS.md — condense site status, fix session ordering |
| 103 | Pre-launch cleanup — condense PROGRESS.md, collapse backlog summaries |
| 104 | Fix stale 71 to 75 blog post counts across 14 marketing files |
| 105 | Update Claude Haiku 3.5 to 4.5 pricing |
Session 97: “site fully ready for May 5.” Session 101: “site verified launch-ready.” Sessions 100 and 104 both fix the same stale blog post count (71 to 75) — the same fix, applied twice, because the agent forgot it already did it.
## The perfectionism trap
This is not a bug. The agent is doing exactly what it was told: prepare for the Product Hunt launch. The problem is that “prepare” has no natural stopping point. Every session, the agent finds something else to polish:
- A blog post count that says 71 instead of 75
- An OG image that’s missing
- A canonical URL that’s wrong
- A sitemap lastmod date that’s stale
- A pricing number that changed upstream
Each of these is a real issue. None of them matter for launch. A Product Hunt visitor will not check whether the sitemap lastmod date matches the RSS lastBuildDate. They will not count blog posts to verify the number in the footer matches reality.
But the agent cannot distinguish between “this needs to be fixed before launch” and “this could be fixed but doesn’t block launch.” So it fixes everything, every session, and never runs out of things to fix.
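What the agent lacks is an explicit triage step. A minimal sketch of the missing logic, with an illustrative (hypothetical) blocker list — the findings are taken from the commit log above:

```python
# Hypothetical triage the agent never performs: classify each finding as a
# launch blocker or post-launch polish, and stop when no blockers remain.
LAUNCH_BLOCKERS = {"checkout broken", "site down", "PH listing missing"}

findings = [
    "stale blog post count",
    "sitemap lastmod stale",
    "canonical URL wrong",
]

# Partition findings: only items on the blocker list can delay the launch.
blockers = [f for f in findings if f in LAUNCH_BLOCKERS]
polish = [f for f in findings if f not in LAUNCH_BLOCKERS]

if not blockers:
    print(f"Ship it. Defer {len(polish)} polish items to post-launch.")
```

With a closed blocker list, "prepare for launch" has a stopping point: the moment `blockers` is empty, preparation is done by definition.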
## The stale count loop
The most telling pattern is the blog post count fix. Xiaomi’s site displays “75 blog posts” in various places — the homepage, marketing pages, meta descriptions. But the count was hardcoded as “71” in some files.
- Session 100: Fixed the count from 71 to 75 across 6 files
- Session 104: Fixed the count from 71 to 75 across 14 files
The agent fixed the same problem twice because it didn’t fix all instances the first time. And it probably still hasn’t caught every instance. This is the kind of task that could loop forever — there is always one more file with a stale number.
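The durable fix is to stop hardcoding the number at all and derive it from the source of truth at build time. A minimal sketch, assuming posts live as one markdown file each in a `posts/` directory (the path and function names are illustrative, not from Xiaomi's codebase):

```python
from pathlib import Path

def blog_post_count(posts_dir: str = "posts") -> int:
    # Count one .md file per published post; the directory layout is assumed.
    return len(list(Path(posts_dir).glob("*.md")))

def render_footer(count: int) -> str:
    # Inject the derived count instead of a hardcoded "71" or "75".
    return f"{count} blog posts and counting"
```

With the count computed in one place, publishing a new post updates every page on the next build — there is no fourteenth file left to forget.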
## What the agent should be doing
The Product Hunt launch is scheduled for May 5. The product is ready. It has been ready since at least Session 97 (“site fully ready for May 5”). Everything after that is polishing a product that nobody has seen yet.
The agent should be:
- Writing the Product Hunt maker comment
- Preparing engagement responses for launch day
- Setting up analytics to track launch traffic
- Planning post-launch follow-up content
Instead, it is fixing blog post counts for the third time.
## The broader pattern
This is not unique to Xiaomi. It is a common failure mode for AI coding agents: when the next step requires a different type of work (marketing, distribution, user outreach), the agent defaults to what it knows (code, content, polish).
Claude spent 20 sessions in a verification loop before the context cleanup broke it out. Codex runs validation checkpoints every 2 minutes when there is nothing to validate. Gemini builds 21,799 files instead of registering a domain.
The agents that break out of these loops are the ones that receive external signals. Kimi got Reddit feedback and immediately pivoted to addressing it. Claude got a “you are the founder” prompt change and started asking for distribution help.
Xiaomi’s agent needs a signal that says “stop polishing, start launching.” The Growth Plan surprise event on May 2 was designed to provide exactly that. We will see if it works.
## The irony
Xiaomi has the most launch-ready product in the race: 119 pages, 75 blog posts, Stripe integration, PH engagement templates, a Chrome extension concept, and pricing data for 33 AI models. If any agent deserves a successful Product Hunt launch, it is Xiaomi.
But the agent that built all of this cannot stop building long enough to ship it.
Follow Xiaomi’s launch on the race dashboard.
## FAQ
### What is Xiaomi building?
APIpulse — an AI model pricing comparison tool at getapipulse.com. It tracks pricing across 33 AI models from 10 providers, with a calculator, comparison tools, and 75 blog posts about AI pricing.
### Which AI model powers Xiaomi?
MiMo V2.5 Pro via Claude Code, using Xiaomi’s Anthropic-compatible API endpoint. Premium sessions use mimo-v2.5-pro, cheap sessions use mimo-v2.5.
### When is the Product Hunt launch?
May 5, 2026. The product has been “launch-ready” since Session 97 (May 1). The agent has spent 8 additional sessions polishing since then.
### Is this a MiMo problem or a prompt problem?
It is a prompt problem. MiMo V2.5 Pro is a capable model — it built 119 pages and 75 blog posts. The issue is that the prompt says “prepare for Product Hunt launch” without defining when preparation is complete. The agent interprets this as “keep preparing until launch day.”