I picked MongoDB for a project because “it’s flexible” and “no migrations.” Six months later I was writing aggregation pipelines that looked like abstract art and praying my data was consistent. Never again.
MongoDB is a good database. It’s just the wrong database for what most people use it for.
How most MongoDB projects start
“We chose MongoDB because our data doesn’t have a fixed schema and we want flexibility.”
Translation: “We didn’t want to think about our data model upfront.”
Six months later, you have:
- Documents with inconsistent fields
- Application-level joins that should be database joins
- No referential integrity (orphaned records everywhere)
- Mongoose schemas that are basically… a schema. The thing you were avoiding.
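The "application-level joins" bullet is worth seeing in miniature. Here is a sketch in pure Python, with plain dicts standing in for MongoDB documents (all collection and field names are invented for illustration):

```python
# Two "collections" as lists of dicts, standing in for MongoDB documents.
# Note the inconsistent fields: one user has "email", another has
# "contact_email", and one order references a user that doesn't exist.
users = [
    {"_id": 1, "name": "Ada", "email": "ada@example.com"},
    {"_id": 2, "name": "Grace", "contact_email": "grace@example.com"},
]
orders = [
    {"_id": 10, "user_id": 1, "total": 42.0},
    {"_id": 11, "user_id": 99, "total": 7.5},  # orphan: no user 99
]

# The application-level join that a SQL JOIN would do for you:
users_by_id = {u["_id"]: u for u in users}
joined = [
    {"order": o["_id"], "user": users_by_id[o["user_id"]]["name"]}
    for o in orders
    if o["user_id"] in users_by_id  # silently drops the orphan
]
print(joined)  # [{'order': 10, 'user': 'Ada'}]
```

Every such join is code you write, test, and keep consistent by hand; and the orphaned order doesn't error, it just quietly disappears from results.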
The irony of schemaless
Most MongoDB projects end up with Mongoose, which adds schemas back. So you have:
- A schemaless database
- With application-level schemas
- That don’t enforce anything at the database level
- And no foreign keys
You’ve recreated a relational database, but worse.
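The irony is easy to demonstrate. A minimal sketch of a Mongoose-style application-level schema (the validator and field names here are made up): the schema lives in the application, so anything that bypasses the application bypasses the schema too.

```python
# An application-level "schema", Mongoose-style, in miniature.
USER_SCHEMA = {"name": str, "email": str}

def validated_insert(collection, doc):
    """Insert only if doc matches USER_SCHEMA: app-level enforcement."""
    for field, typ in USER_SCHEMA.items():
        if not isinstance(doc.get(field), typ):
            raise ValueError(f"{field!r} must be {typ.__name__}")
    collection.append(doc)

collection = []  # stands in for a schemaless collection
validated_insert(collection, {"name": "Ada", "email": "ada@example.com"})

# But nothing stops a one-off script, a shell session, or another
# service from writing directly. The database itself enforces nothing:
collection.append({"nme": "typo", "email": 123})  # happily accepted
print(len(collection))  # 2
```

With a relational database the constraint lives where the data lives, so the typo'd document above would be rejected no matter which client wrote it.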
When MongoDB is actually great
- Event logs / analytics — append-heavy, rarely joined, variable structure
- Content management — documents with genuinely different shapes
- Caching layer — fast reads of denormalized data
- IoT / time series — high-volume writes with flexible fields
- Document-centric apps — where the “document” is the natural unit (think: medical records, legal documents)
- Prototyping — when you genuinely don’t know your schema yet (but migrate later)
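Event logs are the cleanest fit: writes are appends, each event carries whatever fields its source produces, and you almost never join them against anything. A sketch of the shape (event names and fields invented):

```python
import time

event_log = []  # append-only, schemaless by design

def log_event(event_type, **fields):
    # Every event shares a tiny envelope; the payload varies freely.
    event_log.append({"type": event_type, "ts": time.time(), **fields})

log_event("page_view", path="/pricing", user_agent="Mozilla/5.0")
log_event("sensor_reading", device="th-01", temp_c=21.4, humidity=0.48)
log_event("error", code=500, trace_id="abc123")

# No joins, no shared schema beyond the envelope, and that's fine here.
print([e["type"] for e in event_log])
```

Forcing those three shapes into one rigid table would mean either a column-per-field explosion or a JSON blob anyway, which is exactly when a document store earns its keep.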
When PostgreSQL is better (most of the time)
If your app has:
- Users with profiles ✅
- Orders with line items ✅
- Products with categories ✅
- Any data that references other data ✅
- Anything that needs transactions ✅
That’s a relational data model. Use a relational database.
PostgreSQL gives you:
- JSONB columns — schemaless data when you actually need it, alongside relational data
- Foreign keys — your data stays consistent without application-level checks
- Transactions — “transfer money from A to B” either fully works or fully doesn’t
- Full-text search — no need for Elasticsearch for basic search
- Extensions — PostGIS for geo, pgvector for AI embeddings, pg_cron for scheduled jobs
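To make the first three bullets concrete without a running PostgreSQL server, here is the pattern sketched with Python's stdlib sqlite3. Treat it as an illustration of the shape only: PostgreSQL's JSONB is queryable and indexable in ways a plain JSON text column is not.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this opt-in

conn.executescript("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        extra TEXT  -- flexible JSON blob; in PostgreSQL, a JSONB column
    );
    CREATE TABLE orders (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),
        total   REAL NOT NULL
    );
""")

# Relational core, flexible extras on the side:
conn.execute("INSERT INTO users VALUES (1, 'Ada', ?)",
             (json.dumps({"theme": "dark", "beta_features": ["ai"]}),))
conn.execute("INSERT INTO orders VALUES (10, 1, 42.0)")

# Foreign keys: the orphaned order MongoDB would accept is rejected here.
try:
    conn.execute("INSERT INTO orders VALUES (11, 99, 7.5)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The `extra` column is the point: you get schemaless data exactly where you want it, while `orders.user_id` stays impossible to orphan.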
The real cost of choosing wrong
Migrating from MongoDB to PostgreSQL mid-project is painful. You have to:
- Redesign your data model
- Write migration scripts for inconsistent documents
- Rewrite all your queries
- Deal with data that violates constraints you’re now adding
I’ve seen this take teams 2-3 months. That’s 2-3 months of features you’re not building.
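The “migration scripts for inconsistent documents” step is the ugly one: before the data can satisfy constraints, every historical shape has to be mapped onto one canonical form. A toy normalizer (the field variants here are invented; real migrations accumulate dozens of cases like these):

```python
def normalize_user(doc):
    """Collapse the field variants that accumulated over six months."""
    email = doc.get("email") or doc.get("contact_email") or doc.get("e_mail")
    name = doc.get("name") or doc.get("full_name") or "<unknown>"
    return {"name": name, "email": email}

legacy_docs = [
    {"name": "Ada", "email": "ada@example.com"},
    {"full_name": "Grace", "contact_email": "grace@example.com"},
    {"e_mail": "orphan@example.com"},  # no name field at all
]
print([normalize_user(d) for d in legacy_docs])
```

Multiply this by every collection, add the documents whose variants you only discover when the script crashes in production data, and the 2-3 month estimate stops looking pessimistic.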
The decision framework
Ask yourself: “Does my data have relationships?”
If yes → PostgreSQL. If genuinely no → MongoDB might be right. If “maybe later” → PostgreSQL. You can always add JSONB columns for flexible data.
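The framework fits in a function (a toy, obviously; the point is how little input the decision actually needs):

```python
def choose_database(has_relationships):
    """The decision framework, mechanized.

    has_relationships: True, False, or "maybe later".
    """
    if has_relationships is True:
        return "PostgreSQL"
    if has_relationships == "maybe later":
        return "PostgreSQL"  # you can always add JSONB columns later
    return "MongoDB might be right"

print(choose_database(True))           # PostgreSQL
print(choose_database("maybe later"))  # PostgreSQL
print(choose_database(False))          # MongoDB might be right
```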
The mistake isn’t using MongoDB. The mistake is using it as a default when PostgreSQL would serve you better 80% of the time.
Related: PostgreSQL vs MySQL