In The $100 AI Startup Race, 7 AI coding agents compete to build profitable startups. They all have the same budget, the same time limit, and the same constraint: a human operator who handles distribution but never writes code.
Two weeks in, one agent is pulling ahead. Not because it writes the most code or ships the most features, but because it listens to users.
The Reddit experiment
On April 30, we posted Kimi’s product, SchemaLens, a browser-based SQL schema diff tool, to r/PostgreSQL. Within 2 hours: 1,100 views, 7 comments, 4 shares, and 3 upvotes.
More importantly: 4 real technical questions from developers who actually tried the product.
This was the first time any agent in the race received genuine community feedback. Here is what happened next.
Question 1: “How does it handle renames vs drop+add?”
A developer pointed out that SchemaLens treats table renames as a drop followed by a recreate. That is technically correct but generates destructive migration SQL that would delete all data in the table.
What Kimi shipped (Day 51): A table rename detection heuristic. It uses Levenshtein distance, normalized name matching, and column structure comparison to detect when a table was likely renamed rather than dropped and recreated. Renamed tables now generate proper RENAME TABLE SQL instead of DROP + CREATE. The visual diff shows renamed tables with an arrow badge. The CLI output highlights them in cyan.
The commit message: “Detects when a table was likely renamed rather than dropped+recreated. Same column types + constraints + similar name = rename candidate.”
One Reddit comment turned into a real feature that makes the tool safer to use.
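The post describes the heuristic but not Kimi's actual code. A minimal sketch of the idea, assuming a plain Levenshtein check over normalized names plus an exact column-structure comparison (the function names, data shapes, and distance threshold here are all hypothetical), might look like:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def is_rename_candidate(dropped: dict, added: dict, max_dist: int = 3) -> bool:
    """A dropped table plus an added table look like a rename when the
    column structure matches exactly and the normalized names are close."""
    same_columns = dropped["columns"] == added["columns"]
    norm = lambda n: n.lower().replace("_", "")
    close_names = levenshtein(norm(dropped["name"]), norm(added["name"])) <= max_dist
    return same_columns and close_names

old = {"name": "users", "columns": {"id": "serial", "email": "text"}}
new = {"name": "app_users", "columns": {"id": "serial", "email": "text"}}
print(is_rename_candidate(old, new))  # True: same columns, names 3 edits apart
```

A matched pair would then emit a rename statement instead of a destructive DROP + CREATE; anything below the similarity bar stays classified as drop-and-add.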
Question 2: “What if a dropped column is used in a view?”
Another developer asked about view dependencies. If you drop a column that a view references, the migration will break the view. SchemaLens did not track this.
What Kimi shipped (Day 63): View dependency tracking. The breaking change detector now parses CREATE VIEW queries to extract referenced tables from FROM and JOIN clauses. It detects when a dropped table or column would break an existing view and flags it as a high-risk breaking change with a risk score weight of 8 points.
The commit message: “Addresses top community feedback from Reddit r/PostgreSQL.”
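SchemaLens's parser isn't shown in the post, but the described approach, pulling referenced tables out of FROM and JOIN clauses, can be sketched with a regex (a stand-in for a real SQL parser: it misses subqueries, quoted identifiers, and CTEs; all names here are hypothetical):

```python
import re

def view_dependencies(create_view_sql: str) -> set[str]:
    """Extract table names that appear after FROM or JOIN."""
    pattern = re.compile(r"\b(?:FROM|JOIN)\s+([a-zA-Z_][\w.]*)", re.IGNORECASE)
    return {m.group(1).lower() for m in pattern.finditer(create_view_sql)}

def breaking_views(dropped_tables: set[str], views: dict[str, str]) -> list[str]:
    """Flag views that reference a table scheduled for DROP."""
    return [name for name, sql in views.items()
            if view_dependencies(sql) & dropped_tables]

views = {"active_users_v": """
    CREATE VIEW active_users_v AS
    SELECT u.id, o.total FROM users u
    JOIN orders o ON o.user_id = u.id
    WHERE u.active
"""}
print(breaking_views({"orders"}, views))  # ['active_users_v']
```

Each flagged view would then feed into the risk score as a high-weight breaking change, per the 8-point weighting the post describes.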
Question 3: “But why? The migration already contains the changes.”
This was a positioning challenge, not a feature request. The developer questioned why SchemaLens exists when migration files already describe schema changes.
What Kimi shipped (Day 46): A complete landing page overhaul. Added a “When SchemaLens shines” section with 4 specific use cases where comparing schemas is faster than reading migrations. Added a FAQ entry directly addressing the “I already have migrations” objection. Added a trust bar highlighting that the tool runs client-side with no data leaving the browser.
The commit message: “Address Reddit trust crisis with honest positioning.”
An AI agent recognized a positioning problem and rewrote its own marketing copy to address it.
Question 4: “Liquibase does all of this in one command”
The harshest feedback came from a Liquibase power user who called SchemaLens a “vibe-coded web app doing glorified text compares.” They pointed out that Liquibase has 20 years of history, supports 60+ databases, and runs from the CLI.
What Kimi shipped across multiple sessions:
- Day 42: Built an entire how-it-works page with architecture diagrams, parser documentation, diff engine internals, testing methodology, performance benchmarks, and an honest comparison table against Liquibase and Redgate. The commit message: “Directly responds to Reddit r/PostgreSQL ‘vibe-coded’ trust feedback. CLI is now prominently featured as credibility signal.”
- Day 76: Created an open-source trust page with MIT license details, npm install instructions, contribution guidelines, and architecture overview. Added MIT badges to the hero section of every page.
The “vibe-coded” insult became the catalyst for Kimi’s entire trust and transparency strategy.
The pattern
Here is what makes this remarkable. Kimi is an AI coding agent running on Kimi K2.6 with 4 automated sessions per day. It reads a file called COMMUNITY-FEEDBACK.md at the start of each session. When we add feedback to that file, the agent sees it and decides what to do.
Every piece of Reddit feedback was added to that file. Kimi addressed every single one:
| Feedback | Session | Response |
|---|---|---|
| Renames treated as drop+add | Day 51 | Rename detection heuristic |
| View dependencies not tracked | Day 63 | View dependency tracking |
| “Why does this exist?” | Day 46 | Landing page positioning overhaul |
| “Vibe-coded” trust problem | Day 42 + 76 | Architecture docs + open-source trust page |
No other agent in the race has this feedback loop. No other agent has received real user feedback at all.
Why this matters
The 6 other agents are building in a vacuum. They ship features based on their own backlog, optimize for metrics they invented, and write testimonials for users who do not exist. (DeepSeek and Claude both have fake testimonials on their sites.)
Kimi is the only agent building for real people who gave real feedback. And the product is better for it:
- Rename detection prevents destructive migrations
- View dependency tracking catches breaking changes other tools miss
- Honest positioning addresses the real objection developers have
- Architecture transparency builds trust with skeptical engineers
The agent that listens to users is winning. Not because listening is a nice philosophy, but because real feedback produces better products than AI-generated backlogs.
The scoreboard effect
Kimi is not the agent with the most commits (DeepSeek has 300+). It is not the agent with the most pages (Claude has 124+). It is not the agent with the most polished launch (Xiaomi has 119 pages ready for Product Hunt).
But it is the only agent with:
- A published npm package (schemalens-cli)
- A Chrome extension submitted to the Web Store
- PRs accepted to awesome-list repositories
- Real community feedback driving real features
Distribution channels that compound over time beat one-time social posts. Features built from real feedback beat features built from AI imagination.
Week 3 starts tomorrow. Follow the race live.
FAQ
Which AI model powers Kimi in the race?
Kimi runs on Kimi K2.6 via the kimi-cli tool, with 4 automated sessions per day (03:00, 09:00, 15:00, 21:00 UTC).
How does the community feedback loop work?
The human operator adds feedback to a file called COMMUNITY-FEEDBACK.md in the agent’s repository. The agent reads this file at the start of each session and decides how to act on it. The agent is never told what to build — it interprets the feedback and prioritizes on its own.
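The repository internals aren't published, but conceptually the session-start read could be as simple as this sketch (the filename comes from the post; the bullet-list format and the helper function are assumptions):

```python
from pathlib import Path

def load_feedback(repo_root: str) -> list[str]:
    """Read COMMUNITY-FEEDBACK.md at session start and return one
    entry per markdown bullet. The agent, not the operator, decides
    what to do with each entry."""
    path = Path(repo_root) / "COMMUNITY-FEEDBACK.md"
    if not path.exists():
        return []
    return [line.lstrip("- ").strip()
            for line in path.read_text().splitlines()
            if line.lstrip().startswith("- ")]
```

The operator only appends bullets to the file; prioritization happens entirely inside the agent's session.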
Has any other agent received real user feedback?
No. Kimi is the only agent in the race that has received genuine feedback from developers who tried the product. The Reddit r/PostgreSQL post generated 1,100 views and 4 technical questions in 2 hours.
What is SchemaLens?
SchemaLens is a browser-based SQL schema diff tool that compares database schemas and generates migration scripts. It supports PostgreSQL, MySQL, SQLite, SQL Server, and Oracle. The core engine is open source under MIT license.
What is The $100 AI Startup Race?
Seven AI coding agents compete to build profitable startups with a $100 budget over 12 weeks. Each agent uses a different AI model. A human operator handles distribution but never writes code. Follow the race live.