This is week 5 of my “I Used It for a Week” series. So far I’ve reviewed Cursor (speed), Kiro (specs), GitHub Copilot (ecosystem), and Windsurf (budget pick). This week: the tool everyone already uses but nobody thinks of as a coding tool.
Let me be upfront: ChatGPT is not a code editor. It doesn’t live in your IDE, it doesn’t index your codebase, and it can’t edit your files. Comparing it directly to Cursor or Kiro isn’t fair.
But here’s the thing — I used it more than any of them this week. Just not for the same things.
The Setup
I subscribed to ChatGPT Plus at $20/month. There’s also a Go tier at $8/month and a Pro tier at $200/month for power users, but Plus is what most developers use.
OpenAI’s pricing tiers in 2026:
- Free: GPT-5 with strict limits
- Go: $8/month — extended limits, custom GPTs, voice
- Plus: $20/month — GPT-5.2, higher limits, DALL-E 3
- Pro: $200/month — GPT-5.4 Thinking, highest limits, Sora
I stuck with Plus because $200/month for Pro is hard to justify when Cursor costs $20 and does the actual coding part better.
What ChatGPT Is Actually Great At
Thinking partner, not typing partner
The biggest shift in my week was realizing ChatGPT’s value isn’t in writing code — it’s in thinking about code. I used it to:
- Debate architecture decisions before opening my editor
- Explain unfamiliar codebases (“here’s a 200-line file, explain what it does”)
- Rubber-duck debug problems I was stuck on
- Generate regex patterns and SQL queries I’d otherwise spend 20 minutes on
- Draft API contracts before implementing them
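To make the “regex I’d otherwise spend 20 minutes on” point concrete, here’s a sketch of the kind of isolated snippet this workflow produces. The pattern and function name are hypothetical illustrations of mine, not output from any actual session:

```python
import re

# Extract ISO-8601 dates (YYYY-MM-DD) from free-form log lines,
# rejecting impossible months and days.
ISO_DATE = re.compile(r"\b(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])\b")

def extract_dates(text: str) -> list[str]:
    """Return every valid-looking ISO date in the text, in order of appearance."""
    return [m.group(0) for m in ISO_DATE.finditer(text)]

log = "deploy 2026-01-15 ok; rollback 2026-13-40 invalid; retry 2026-02-03"
print(extract_dates(log))  # ['2026-01-15', '2026-02-03']
```

The value isn’t that this is hard to write; it’s that a chat turn gets you the boundary cases (month 13, day 40) handled without the 20 minutes of trial and error.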
None of the IDE tools do this well. Cursor’s chat is focused on your current codebase. Kiro’s spec mode is structured and formal. ChatGPT is just… a conversation. Sometimes that’s exactly what you need.
Learning accelerator
I was picking up a new library this week, and ChatGPT was invaluable. “Explain how React Server Components work with concrete examples.” “What’s the difference between these two approaches?” “Show me the tradeoffs.”
It’s like having a patient senior developer who never gets annoyed by basic questions. The IDE tools assume you already know what you’re building. ChatGPT helps you figure out what to build.
Writing everything that isn’t code
Documentation, commit messages, PR descriptions, technical specs, email drafts, blog outlines — ChatGPT handles all of this faster than I can type. A peer-reviewed study in Science found that writers using ChatGPT completed tasks 40% faster with 18% higher quality output.
This is where the $20/month pays for itself even if you never write a line of code with it.
Canvas mode for iteration
The Canvas feature lets you collaborate on a document or code snippet side by side. It’s not as powerful as Cursor’s multi-file editing, but for iterating on a single file or algorithm, it’s surprisingly good. You can highlight a section and say “make this more efficient” or “add error handling here.”
What Frustrated Me
The coding quality rollercoaster
Multiple OpenAI forum threads tell the same story: GPT-5’s coding ability feels inconsistent. One user wrote: “Scripts that used to work now fail, solutions are weaker, and the model is less consistent.” Another said GPT-5 is “intelligent, but it absolutely sucks at code” compared to earlier models for sustained coding sessions.
My experience matched this. For isolated coding questions — “write a function that does X” — it’s great. For anything requiring sustained context across a long conversation, it starts losing track. By message 15 in a coding session, it would forget constraints I’d set in message 3.
No codebase awareness
This is the fundamental limitation. ChatGPT doesn’t know your project. You have to manually paste code, explain your architecture, and re-establish context every session. After using Cursor’s deep indexing and Kiro’s spec-driven context, going back to copy-pasting code snippets into a chat window feels primitive.
Yes, you can upload files. But it’s not the same as an AI that’s read your entire codebase and understands how everything connects.
The limits are real
Even on Plus, you hit usage caps on GPT-5.2. During heavy use days, I got throttled to slower models. The dynamic caps mean you never quite know when you’ll hit the wall. One reviewer noted: “While the $20 plan unlocks GPT-5.2 and DALL-E 3, it still has a trap: limits.”
Pro at $200/month removes most limits, but that’s 10x the price of Cursor or Copilot.
It doesn’t execute
ChatGPT generates code. You copy it. You paste it. You run it. It fails. You copy the error. You paste it back. It fixes it. You copy again.
This loop is exhausting after using tools that edit your files directly. Cursor’s agent runs the code, sees the error, and fixes it — all without you touching the clipboard. Kiro’s hooks run tests automatically. ChatGPT just… talks about code.
Where ChatGPT Fits in My Stack
After five weeks of testing, here’s how I actually use each tool:
| Task | Best Tool | Why |
|---|---|---|
| Writing code in my editor | Cursor | Tab completion, multi-file agent |
| Planning new features | Kiro | Spec workflow, structured design |
| Learning new tech | ChatGPT | Conversational, patient, broad knowledge |
| Debugging logic | ChatGPT | Good at reasoning about problems |
| Architecture decisions | ChatGPT | Thinks through tradeoffs well |
| Writing docs/emails | ChatGPT | Fast, good quality prose |
| Quick code generation | ChatGPT | Isolated snippets, regex, SQL |
| Large refactoring | Cursor | Subagents, codebase awareness |
ChatGPT is the tool I use around coding, not for coding. And that’s fine — it’s genuinely the best at that role.
My Verdict After 7 Days
ChatGPT Plus is worth $20/month for any developer, but not as a coding tool. It’s a thinking tool, a learning tool, and a writing tool that happens to understand code.
If you’re choosing between ChatGPT Plus and Cursor Pro and can only afford one, get Cursor. It’ll save you more time on actual coding. But if you can afford both, they complement each other perfectly — Cursor for the doing, ChatGPT for the thinking.
Would I keep paying? Yes, without hesitation. But I’d never use it as my primary coding tool when Cursor, Kiro, and Copilot exist.
Who should subscribe:
- Every developer (the thinking/learning value alone is worth it)
- Non-technical founders who need to understand code
- Anyone who writes documentation, emails, or specs
Who doesn’t need it for coding:
- Anyone already using Cursor or Kiro (they’re better at the actual coding)
- Developers who only need inline completions (Copilot is cheaper)
Next week: I Used Devin for a Week — the most hyped AI tool in recent memory. Is the “first AI software engineer” real, or just a great demo?
Related: GPT-5 vs Gemini · AI Coding Tools Pricing · How to Choose an AI Coding Agent