Why PostgreSQL? Part 4 — MongoDB & Oracle: real migration stories
If you are already on MongoDB or Oracle, moving to PostgreSQL is not a vague someday project: it hits budget, performance, and operations immediately. This post stays between the two lazy extremes ("trivial" vs "impossible") and covers what triggered real MongoDB→Postgres and Oracle→Postgres moves, how long they took, what hurt more than expected, and what changed afterward. Figures such as Infisical's reported database cost reduction after leaving MongoDB, or the broad TCO-improvement narratives around leaving Oracle, come from public write-ups, case studies, and vendor materials; separate license, labor, and migration project costs, and read them as directional rather than as universal guarantees. Schema redesign and query work usually travel with the engine change, so "Postgres magic" and "refactoring wins" should not be collapsed into one headline. The post closes with a compact decision frame for judging whether migration is a sensible next step for your team, not a moral obligation.
Series outline
- Part 1 — PostgreSQL in the numbers
- Part 2 — Why big tech chose PostgreSQL
- Part 3 — Startups: speed vs. cost
- Part 4 — MongoDB & Oracle: real migration stories (this post)
- Part 5 — The ecosystem: pgvector, PostGIS, TimescaleDB
Table of contents
- Introduction: when “we already use another database” stops being a good reason
- MongoDB → PostgreSQL: Infisical — what a 50% database cost reduction actually reflects
- The recurring MongoDB pain: naming the structural issues
- Oracle → PostgreSQL: changing decades of enterprise habit
- Migration reality: polished success stories vs on‑the‑ground friction
- Patterns that show up in successful migrations
- Closing: when to move—and when to stay
1. Introduction: when “we already use another database” stops being a good reason
“We run MongoDB today. How hard is a migration, honestly?”
Honest answers are rare. You usually get one of two extremes: "it's easy, just migrate" or "never touch it, it's too complex."
Part 4 aims for the messy middle: organizations that actually moved from MongoDB or Oracle to PostgreSQL—what triggered the work, how long it took, what hurt more than slides suggested, and what changed afterward.
Bottom line up front: migration is usually not easy—and it is still often worth doing.
How to read the numbers
The percent savings and TCO deltas below are persuasive signals, but they often come from public blogs, vendors, and case studies. Cost line items (CPU, storage, labor, migration project fees, support contracts) and comparison windows differ by source—treat them as directional, and verify the original assumptions.
In practice, schema redesign, query cleanup, and operational changes land in the same project as the engine change. If you collapse “Postgres did it” and “we refactored everything else too” into one headline, the claim reads stronger than it should. This article calls out that boundary where it matters.
2. MongoDB → PostgreSQL: Infisical — what a 50% database cost reduction actually reflects
Infisical is an open‑source platform for centrally managing application secrets (API keys, certificates, SSH keys, and so on). In early 2024 it was processing more than 50 million secret operations per day—and MongoDB sat underneath.
Why MongoDB made sense early
Early Infisical chose MongoDB for pragmatic reasons: the team knew it; Mongoose supported fast iteration; and schema‑less flexibility helped ship features quickly. That is a common early‑stage trade.
The pain surfaced as the product grew: the core data model was relational—secrets, projects, environments, users, and permissions intertwined. Expressing that in MongoDB led to heavy $lookup usage to mimic joins, inefficient aggregation paths, and constant scale‑up of both database and app tiers.
Transactions were another snag: multi‑document transactions pushed teams toward cluster configurations that raised the bar for self‑hosted customers—even simple PoCs looked like “production Mongo topology or bust.”
Schema‑less flexibility also cut both ways: bypassing Mongoose guardrails could introduce inconsistent documents over time.
Decision and execution
Infisical moved to PostgreSQL—not only for “better SQL,” but for community scale, documentation depth, managed service availability across clouds, and open‑source ergonomics that reduce self‑hosting friction.
The migration was large: new relational schema, rewritten query paths, and moving tens of millions (or more) rows. The team replaced Mongoose with Knex.js for SQL control plus migration tooling and safer typing.
They aimed to keep user-visible downtime minimal and settled on a pragmatic window: pause writes while keeping reads available. That is reasonable for a secrets platform where reads dominate and configuration writes are comparatively rare.
That pattern only works when reads dominate and a short write pause is acceptable. It is not a general recipe for write‑heavy domains like payments or real‑time collaboration—validate your traffic shape first.
After migration
The visible win was performance: replacing inefficient aggregations and chatty application patterns with relational joins reduced pressure to overscale instances. Public write‑ups cite roughly 50% lower database spend as one outcome.
As noted above, that outcome sits next to schema redesign and query rewrites—it is not “flip the engine and bills halve automatically.”
Stronger database‑level validation and simpler self‑hosting (standard transactions without Mongo cluster gymnastics) also mattered, and feature work could standardize on PostgreSQL.
3. The recurring MongoDB pain: naming the structural issues
Infisical is not a one‑off. Teams leaving MongoDB for PostgreSQL repeat a few patterns.
The schema‑less paradox
Schema‑less feels liberating early; later it becomes chaos: the same field is a number in one document and a string in another, and nobody can print the authoritative schema. Cleaning that up is expensive.
Migrators often mention date strings: some documents store MM/DD/YYYY, others DD/MM/YYYY. Application layers may accept both; MongoDB stores whatever you send. PostgreSQL parses timestamp strings according to its DateStyle setting and rejects values that do not fit it, so silent ambiguity becomes a hard error and data cleanup becomes a big slice of migration time.
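As an illustration of that cleanup step, here is a minimal normalizer in Python. It is a sketch, not any team's actual tooling: the two format strings are assumptions, and genuinely ambiguous values are flagged for review rather than guessed.

```python
from datetime import datetime

def normalize_date(raw: str):
    """Return (iso_string, status) for a date stored as MM/DD/YYYY or DD/MM/YYYY.

    status is 'ok' when only one interpretation is valid, 'ambiguous' when
    both parse (e.g. 03/04/2024), and 'invalid' when neither does.
    """
    candidates = []
    for fmt in ("%m/%d/%Y", "%d/%m/%Y"):
        try:
            candidates.append(datetime.strptime(raw, fmt).date().isoformat())
        except ValueError:
            pass
    unique = sorted(set(candidates))
    if not unique:
        return None, "invalid"
    if len(unique) == 1:
        return unique[0], "ok"
    return None, "ambiguous"  # route to manual review before loading into TIMESTAMPTZ

# 13 can only be a day, so this value is unambiguous:
print(normalize_date("13/04/2024"))  # ('2024-04-13', 'ok')
# Both readings parse, so a human (or source metadata) must decide:
print(normalize_date("03/04/2024"))  # (None, 'ambiguous')
```

Running a pass like this before the load tells you how much of the dataset needs human attention, which is usually the number that blows up the schedule.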
The translation tax of aggregation pipelines
MongoDB aggregation ($lookup, $unwind, $group) is not SQL. Translating pipelines query‑by‑query is slow and subtle. Industry write‑ups cite cases where thousands of queries took far longer than initial estimates—treat timelines as risky unless you have measured coverage.
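To make the translation tax concrete, here is one hypothetical pipeline and its relational equivalent. The collection, table, and field names are invented for illustration; the point is that each pipeline must be read, understood, and re-expressed, query by query.

```python
# A hypothetical MongoDB pipeline: count secrets per project name.
# Collection and field names are illustrative, not any real schema.
pipeline = [
    {"$lookup": {
        "from": "projects",
        "localField": "projectId",
        "foreignField": "_id",
        "as": "project",
    }},
    {"$unwind": "$project"},
    {"$group": {"_id": "$project.name", "count": {"$sum": 1}}},
]

# The relational translation collapses $lookup + $unwind + $group
# into a single declarative statement the planner can optimize:
sql = """
SELECT p.name, COUNT(*) AS count
FROM secrets s
JOIN projects p ON p.id = s.project_id
GROUP BY p.name;
"""
```

Multiply this small exercise by every pipeline in the codebase, including the ones with five-stage `$facet` branches, and the "multiples of the estimate" pattern stops being surprising.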
JSONB is not a free “document dump”
“If we just load Mongo documents into JSONB columns, we’re done, right?” That usually defeats the purpose: you lose relational joins, foreign keys, and predictable analytics performance. If you truly need a document store, staying on MongoDB can be rational. Postgres migrations aim to earn relational benefits through real modeling work.
4. Oracle → PostgreSQL: changing decades of enterprise habit
MongoDB migrations often look like “fixing an early pragmatic choice.” Oracle migrations are different: mission‑critical systems, years of PL/SQL, Oracle‑specific features—and license audits layered on top.
Oracle license economics (as commonly narrated)
Oracle pricing is not one line item: core‑based licensing, add‑on packs for HA and security, annual support, and periodic audits. Mid‑size enterprise totals in the hundreds of thousands of USD per year show up frequently in public discussions.
Case studies often claim 70–90% TCO reductions after moving to PostgreSQL—not only license elimination, but DBA scarcity premiums, hardware constraints, and cloud portability. Some analyses cite hundreds of thousands of USD/year savings on license+support alone for typical core counts, with payback in a few years even after migration project costs—again, verify what each report includes.
Why Oracle migrations stall
Cost savings are attractive; execution is hard.
First, PL/SQL rewrite: procedures, packages, and functions do not port line‑for‑line to PL/pgSQL. Patterns like CONNECT BY, old outer‑join (+), ROWNUM, and subtle TO_DATE behavior need deliberate rewrites—months to years when code volume is large.
Second, type mapping: Oracle VARCHAR2, NUMBER(p,s), DATE semantics, ROWID, and more do not map 1:1. Tools automate a lot; edge cases remain manual.
Third, organizational inertia: “it works—why change?” is not a technical question. Long‑tenured Oracle teams and risk‑averse culture can block a migration before architecture debates start.
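The type-mapping point above can be sketched as a lookup table. This is a hedged subset of common mappings, not a complete or authoritative conversion chart; tools like Ora2Pg maintain the real ones, and the comments mark where decisions typically stay manual.

```python
# A sketch of common Oracle -> PostgreSQL type mappings (the easy 80%).
ORACLE_TO_POSTGRES = {
    "VARCHAR2(n)": "VARCHAR(n)",    # watch byte- vs character-length semantics
    "NUMBER(p,s)": "NUMERIC(p,s)",
    "NUMBER":      "NUMERIC",       # unconstrained; consider BIGINT per column
    "DATE":        "TIMESTAMP(0)",  # Oracle DATE carries time-of-day; Postgres DATE does not
    "CLOB":        "TEXT",
    "BLOB":        "BYTEA",
    "RAW(n)":      "BYTEA",
    "ROWID":       None,            # no stable equivalent; redesign around primary keys
}

def map_type(oracle_type: str):
    """Return the Postgres type, or None when the mapping needs a human decision."""
    return ORACLE_TO_POSTGRES.get(oracle_type)
```

The `None` entries are the project plan: everything that maps cleanly is tooling work, and everything that does not is design work.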
Beyond money: availability, operations, compliance
Savings often come with new responsibilities: RAC‑class clustering, packaged security/audit features, vendor SLAs, and named support contracts do not reappear as a single Postgres checkbox. Regulated industries may need evidence beyond feature parity spreadsheets—plan who owns HA, security operations, and compliance artifacts after the move.
Accelerators
Tooling improves: Ora2Pg, vendor migration assistants, and AI‑assisted conversion can raise automation rates—often described as majority automation for schema/data movement, with verification and business logic testing still manual.
Cloud migrations also push “while we’re moving, let’s exit Oracle” decisions—cloud‑native architectures and Oracle’s economics frequently clash.
5. Migration reality: polished success stories vs on‑the‑ground friction
Success stories emphasize outcomes; the field work is messier.
It takes longer than the optimistic estimate. MongoDB query rewrites commonly take several multiples of the first guess; Oracle projects can span months to a couple of years. Distrust the one-week promise.
Data quality surprises appear late. Long‑lived inconsistencies surface when scripts enforce new constraints—cleansing work then dominates schedules.
Zero‑downtime is not free. Dual‑write plus CDC is robust and expensive to build. Small teams may accept short write pauses instead.
PostgreSQL tuning is its own skill. shared_buffers, work_mem, max_connections, and indexing strategy still matter—plan learning time.
```sql
-- Example checks (values depend on environment)
SELECT
    current_setting('shared_buffers')  AS shared_buffers,
    current_setting('work_mem')        AS work_mem,
    current_setting('max_connections') AS max_connections;
```
6. Patterns that show up in successful migrations
Across many case studies, a few practices repeat.
Invest disproportionately in schema design first. “JSONB everything” defers pain; you often pay twice.
Start with a safety net like dual‑write—write to both systems, shift reads gradually, then cut writes after verification. Public Aurora migration stories describe similar phased patterns; details vary by product and risk tolerance.
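A minimal sketch of that dual-write pattern, with plain dictionaries standing in for the old and new databases. Real implementations add divergence logging, retries, and CDC; this only shows the shape of the safety net.

```python
import random

class DualWriteStore:
    """Toy dual-write wrapper: old store stays authoritative, reads shift gradually."""

    def __init__(self, old, new, read_ratio=0.0):
        self.old, self.new = old, new
        self.read_ratio = read_ratio  # fraction of reads served by the new store

    def write(self, key, value):
        self.old[key] = value      # authoritative write
        try:
            self.new[key] = value  # best-effort shadow write
        except Exception:
            pass                   # never fail user traffic on the shadow path

    def read(self, key):
        if random.random() < self.read_ratio:
            return self.new.get(key)
        return self.old.get(key)

store = DualWriteStore(old={}, new={}, read_ratio=0.0)
store.write("k", "v")
store.read_ratio = 1.0   # after verification, shift all reads to the new store
print(store.read("k"))   # 'v' (present in both stores)
```

The cutover is then an anticlimax: once reads run at 100% against the new store and the shadow writes have been verified, the old store's writes can stop.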
Treat migration as modernization—partitioning, data retirement, service boundaries—rather than a lift‑and‑shift of legacy inefficiency.
Pilot on a bounded surface—a high‑churn module or expensive query family—before betting the whole estate.
7. Closing: when to move—and when to stay
A compact decision frame
| Signal | Take migration more seriously |
|---|---|
| Data model | Your workload is relational—joins and transactions—yet you keep bending a document store around it |
| Team capacity | You can hire or train SQL/ops capacity within ~6–12 months |
| Downtime tolerance | You can accept a short write pause or invest in dual‑write |
| Budget | You can fund migration as its own line item—not “nights and weekends only” |
Signals not to rush:
- The workload is genuinely document‑centric with few relational queries.
- The team runs MongoDB confidently and the system is stable.
- You cannot afford rewrite time right now—a delayed, planned migration beats a rushed one.
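The decision frame above can be reduced to a blunt checklist score. This is a toy heuristic, not a methodology: the four booleans mirror the table's signals, and any threshold you attach to the total is a judgment call.

```python
def migration_signal_score(
    workload_is_relational: bool,
    can_staff_sql_ops_within_a_year: bool,
    can_tolerate_write_pause_or_fund_dual_write: bool,
    migration_is_budgeted_line_item: bool,
) -> int:
    """Count how many of the table's four signals point toward migrating."""
    return sum([
        workload_is_relational,
        can_staff_sql_ops_within_a_year,
        can_tolerate_write_pause_or_fund_dual_write,
        migration_is_budgeted_line_item,
    ])

# Roughly: 3-4 means take migration seriously; 0-1 means probably stay put.
print(migration_signal_score(True, True, True, False))  # 3
```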
Migration is a means, not a virtue. Ask whether PostgreSQL is the right instrument for cost, performance, and developer productivity—then decide.
If the answer is yes, Infisical’s reported savings and Oracle escape narratives are not “marketing only”—they resemble what teams see when the surrounding work is understood. Read the footnotes: what was included, what else changed, and what trade‑offs remained.
Part 5 looks past the “why migrate” question into the “what Postgres unlocks next” world—pgvector, PostGIS, TimescaleDB, ParadeDB, and whether “one database, many workloads” is realistic in 2026.
Next — Part 5: The ecosystem — pgvector, PostGIS, TimescaleDB
Vector search, geospatial, time series, and search extensions—benchmarks, trade‑offs, and how far the “simplify the stack with Postgres” idea goes in 2026.
References
- Infisical — The Great Migration from MongoDB to PostgreSQL · Migration overview · Self‑hosting guide
- MongoDB — BSON types · schema validation
- PostgreSQL — Date/time types · Resource settings
- Oracle → PostgreSQL — PostgreSQL Wiki — converting from other databases · Ora2Pg · EDB migration
- Reddit / Aurora — public sources differ in scope; background on Aurora adoption: AWS — Reddit case study (dual‑write discussion in the article maps to common phased migration patterns)
- TCO narratives — definitions vary by vendor report; cross‑check line items (e.g., EDB public materials)
Written April 2026 · Figures reflect public posts and materials at their time; verify originals when citing.