Friday, April 17, 2026
Series: MongoDB ACID Mastery · Part 5/5

MongoDB ACID — Part 5: Production patterns, optimization, and what to avoid

If you have followed the first four posts on MongoDB ACID internals, this closing installment moves to how you ship it in application code. It centers on a safe `withTransaction` calling style, reusable multi-document patterns (transfers, inventory, reservations, idempotency keys, batching), and ways to avoid unnecessary transactions via embedding and atomic conditional updates. It also walks through the Transactional Outbox, at-least-once delivery, and consumer-side idempotency. Performance, limits, and observability are framed around why things fail under load rather than declaring fixed magic numbers, and the article closes with a one-table ACID recap plus curated MongoDB manual and driver links. Always cross-check with the official docs for your exact version, driver major, and topology.

Table of contents

  1. Introduction
  2. A production-grade transaction helper template
  3. Five domain patterns
  4. Consistency without multi-document transactions — embedding and atomic updates
  5. Transactional Outbox and consumer-side idempotency
  6. Performance and lifetime — keep transactions short
  7. Limits, checklists, and observability
  8. Seven anti-patterns to avoid
  9. Series recap — MongoDB ACID in one table
  10. References — official documentation
  11. Closing

1. Introduction

Through Part 4 we treated Durability and writeConcern — “after the driver returns success, will the write still be there?” This article moves the same ACID frame into application code.

The focus is threefold:

  1. Multi-document transactions without foot-guns: pass session everywhere, close sessions, and lean on driver retry behavior where appropriate.
  2. Domain-shaped patterns you will see again and again: transfers, inventory, reservations, idempotency keys, and batched updates.
  3. Avoiding transaction sprawl — single-document design, atomic conditional updates, and Outbox boundaries.

Sample code targets the MongoDB Node.js driver 6.x surface area. Return object shapes and option names can differ by major version, so keep your project’s driver documentation open alongside this post.


2. A production-grade transaction helper template

Common production mistakes include: passing session to only some operations, never ending sessions, and surfacing transient conflicts without a retry strategy.

Instead of hand-rolling startTransaction/commitTransaction, wrapping work in session.withTransaction usually centralizes commit/abort paths and pairs well with driver retry logic.

const { MongoClient, MongoError } = require('mongodb');

const client = new MongoClient(process.env.MONGO_URI);

/**
 * Every read/write inside the callback must pass { session }.
 * TransientTransactionError handling depends on driver version — read the manual.
 */
async function runTransaction(callback) {
  const session = client.startSession();

  try {
    const result = await session.withTransaction(
      async (s) => callback(s),
      {
        readConcern: { level: 'snapshot' },
        writeConcern: { w: 'majority', wtimeout: 10000 },
        maxCommitTimeMS: 5000,
      }
    );
    return result;
  } catch (error) {
    if (error instanceof MongoError) {
      console.error('[MongoDB Transaction Error]', {
        code: error.code,
        codeName: error.codeName,
        message: error.message,
      });
    }
    throw error;
  } finally {
    await session.endSession();
  }
}

The exact withTransaction callback signature is defined by your driver version — the snippet assumes a callback(session) shape.
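When you do need to reason about retries yourself — for example, to log them or to cap application-level retries around `runTransaction` — the signal to key on is the error labels the server attaches, the same labels the driver's own retry logic inspects. A minimal sketch (the helper names are mine, not driver API; real driver errors also expose a `hasErrorLabel()` method):

```javascript
// Sketch: classify errors the way driver retry logic does, via error labels.
// MongoDB attaches 'TransientTransactionError' to conflicts worth retrying
// and 'UnknownTransactionCommitResult' when a commit's outcome is uncertain.
function isTransientTxnError(err) {
  return Array.isArray(err.errorLabels) &&
    err.errorLabels.includes('TransientTransactionError');
}

function isUnknownCommitError(err) {
  return Array.isArray(err.errorLabels) &&
    err.errorLabels.includes('UnknownTransactionCommitResult');
}
```

Keeping this classification in one place makes it easy to count transient conflicts separately from hard failures in your metrics.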


3. Five domain patterns

3.1 Account transfer

Transfers require debit, credit, and ledger rows to commit or roll back together. A typical approach is findOneAndUpdate with predicates on balance and status.

Important: findOneAndUpdate return shapes vary by driver and options. The example below accepts both ModifyResult.value and a direct document return.

async function transferFunds({ fromId, toId, amount, currency = 'KRW' }) {
  return runTransaction(async (session) => {
    const accounts = client.db('banking').collection('accounts');

    const debitModifyResult = await accounts.findOneAndUpdate(
      {
        _id: fromId,
        balance: { $gte: amount },
        status: 'active',
        currency,
      },
      {
        $inc: { balance: -amount },
        $set: { updatedAt: new Date() },
        $push: {
          history: {
            type: 'debit',
            amount,
            toAccount: toId,
            timestamp: new Date(),
            status: 'completed',
          },
        },
      },
      { session, returnDocument: 'after' }
    );

    const debited = debitModifyResult?.value ?? debitModifyResult;
    if (!debited) {
      throw new Error('INSUFFICIENT_FUNDS_OR_INVALID_ACCOUNT');
    }

    const creditModifyResult = await accounts.findOneAndUpdate(
      { _id: toId, status: 'active', currency },
      {
        $inc: { balance: amount },
        $set: { updatedAt: new Date() },
        $push: {
          history: {
            type: 'credit',
            amount,
            fromAccount: fromId,
            timestamp: new Date(),
            status: 'completed',
          },
        },
      },
      { session, returnDocument: 'after' }
    );

    const credited = creditModifyResult?.value ?? creditModifyResult;
    if (!credited) {
      throw new Error('RECIPIENT_ACCOUNT_NOT_FOUND');
    }

    await client.db('banking').collection('ledger').insertOne(
      {
        type: 'transfer',
        fromAccount: fromId,
        toAccount: toId,
        amount,
        currency,
        executedAt: new Date(),
        balanceAfterDebit: debited.balance,
        balanceAfterCredit: credited.balance,
      },
      { session }
    );
  });
}
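Cheap invariant checks before the transaction starts catch bad requests without burning a session or a retry. A hypothetical pre-flight guard (the function name and error strings are illustrative, in the style of `transferFunds` above):

```javascript
// Hypothetical pre-flight guard: reject impossible transfers before opening
// a session, so the transaction only runs for plausible requests.
function validateTransferRequest({ fromId, toId, amount }) {
  if (fromId === toId) throw new Error('SAME_ACCOUNT_TRANSFER');
  if (!Number.isFinite(amount) || amount <= 0) throw new Error('INVALID_AMOUNT');
  return true;
}
```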

3.2 Inventory decrement + order creation

For concurrent purchases, conditional updates decrement stock; only then insert the order document.

async function createOrder({ customerId, items }) {
  return runTransaction(async (session) => {
    const db = client.db('shop');
    let totalAmount = 0;

    for (const { productId, quantity } of items) {
      const product = await db.collection('products').findOneAndUpdate(
        {
          _id: productId,
          stock: { $gte: quantity },
          status: 'available',
        },
        { $inc: { stock: -quantity } },
        { session, returnDocument: 'after' }
      );

      const updated = product?.value ?? product;
      if (!updated) {
        throw new Error(`STOCK_INSUFFICIENT:${productId}`);
      }

      totalAmount += updated.price * quantity;
    }

    const order = await db.collection('orders').insertOne(
      {
        customerId,
        items,
        totalAmount,
        status: 'confirmed',
        createdAt: new Date(),
      },
      { session }
    );

    await db.collection('customers').updateOne(
      { _id: customerId },
      {
        $inc: { totalOrders: 1, totalSpent: totalAmount },
        $set: { lastOrderAt: new Date() },
      },
      { session }
    );

    return order.insertedId;
  });
}

3.3 Reservations — unique index + transaction

Unique indexes plus insertOne (and handling 11000) are a simple, strong way to prevent double booking.

await db.collection('reservations').createIndex(
  { resourceId: 1, timeSlot: 1 },
  { unique: true }
);

async function makeReservation({ resourceId, timeSlot, userId }) {
  return runTransaction(async (session) => {
    const db = client.db('booking');

    try {
      await db.collection('reservations').insertOne(
        {
          resourceId,
          timeSlot,
          userId,
          status: 'confirmed',
          createdAt: new Date(),
        },
        { session }
      );
    } catch (error) {
      if (error.code === 11000) {
        throw new Error('TIME_SLOT_ALREADY_BOOKED');
      }
      throw error;
    }

    await db.collection('outbox').insertOne(
      {
        type: 'RESERVATION_CONFIRMED',
        payload: { resourceId, timeSlot, userId },
        status: 'pending',
        createdAt: new Date(),
      },
      { session }
    );
  });
}
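One detail worth pinning down is how `timeSlot` is represented: the unique index only protects you if equal slots serialize identically. A hypothetical normalizer (the 30-minute granularity is an assumption, not something the schema above mandates):

```javascript
// Hypothetical: snap a Date to the start of its slot and serialize it, so two
// requests for the same slot always collide on the { resourceId, timeSlot } index.
function slotKey(date, slotMinutes = 30) {
  const ms = slotMinutes * 60 * 1000;
  return new Date(Math.floor(date.getTime() / ms) * ms).toISOString();
}
```

Without a canonical form, two bookings milliseconds apart in the same slot would pass the unique index and double-book silently.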

3.4 Idempotency keys

Clients may retry after timeouts. Store results keyed by an idempotency key so duplicates return the same outcome.

async function processPaymentIdempotent({ idempotencyKey, paymentData }) {
  return runTransaction(async (session) => {
    const db = client.db('payments');

    const existing = await db
      .collection('idempotency_keys')
      .findOne({ key: idempotencyKey }, { session });

    if (existing) {
      return existing.result;
    }

    const payment = await db.collection('payments').insertOne(
      { ...paymentData, processedAt: new Date() },
      { session }
    );

    await db.collection('idempotency_keys').insertOne(
      {
        key: idempotencyKey,
        result: { paymentId: payment.insertedId },
        createdAt: new Date(),
      },
      { session }
    );

    return { paymentId: payment.insertedId };
  });
}
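Note that the findOne-then-insert pair above still races if two requests with the same key arrive concurrently; a unique index on `key` (and handling the 11000 error by re-reading the stored result) closes that window. The contract itself — first call computes and stores, duplicates return the stored result — can be sketched in memory:

```javascript
// In-memory sketch of the idempotency-key contract (not a substitute for the
// durable collection above): duplicates return the first call's stored result.
function makeIdempotent(fn, store = new Map()) {
  return (key, input) => {
    if (store.has(key)) return store.get(key);
    const result = fn(input);
    store.set(key, result);
    return result;
  };
}
```

Wrapping a charge function this way means a client retry with the same key cannot double-charge — the second call never reaches the underlying function.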

3.5 Batched multi-document updates

Stuffing too many writes into one transaction stresses cache, oplog, and lifetime limits. A “~1000 operations” figure often appears as guidance, but what you can afford depends on lock footprint, document size, contention, and hardware. Tune batch sizes by observing how latency and failure rates climb as transactions grow.

async function bulkUpdateWithBatching(updates, batchSize = 500) {
  const results = { success: 0, failed: 0, errors: [] };

  for (let i = 0; i < updates.length; i += batchSize) {
    const batch = updates.slice(i, i + batchSize);

    try {
      await runTransaction(async (session) => {
        const db = client.db('app');

        for (const update of batch) {
          await db.collection('records').updateOne(
            { _id: update.id },
            { $set: update.fields },
            { session }
          );
        }
      });

      results.success += batch.length;
    } catch (error) {
      results.failed += batch.length;
      results.errors.push({ batchStart: i, error: error.message });
    }
  }

  return results;
}

4. Consistency without multi-document transactions — embedding and atomic updates

4.1 Embedded documents

Fields that always move together can live in one document and rely on single-document atomicity.

await db.collection('users').updateOne(
  { _id: userId },
  {
    $set: { 'auth.lastLoginAt': new Date(), 'auth.failedAttempts': 0 },
    $inc: { 'stats.loginCount': 1 },
  }
);

4.2 findOneAndUpdate — read, predicate, and write in one step

Splitting “read → decide → write” invites races. Prefer a single conditional update.

const result = await db.collection('tickets').findOneAndUpdate(
  { _id: ticketId, status: 'available' },
  { $set: { status: 'reserved', userId, reservedAt: new Date() } },
  { returnDocument: 'after' }
);

const ticket = result?.value ?? result;
if (!ticket) {
  throw new Error('TICKET_ALREADY_SOLD');
}

5. Transactional Outbox and consumer-side idempotency

5.1 Why naive dual writes are risky

Writing the database first and a message broker second (or vice versa) leaves failure windows where only one side succeeds.

5.2 Outbox — producer-side atomicity

Putting business rows and an outbox collection in the same transaction narrows the split-brain window.

async function createOrderWithOutbox(orderData) {
  return runTransaction(async (session) => {
    const db = client.db('shop');

    const order = await db.collection('orders').insertOne(
      { ...orderData, status: 'pending', createdAt: new Date() },
      { session }
    );

    await db.collection('outbox').insertOne(
      {
        aggregateType: 'Order',
        aggregateId: order.insertedId,
        eventType: 'OrderCreated',
        payload: { orderId: order.insertedId, ...orderData },
        status: 'PENDING',
        createdAt: new Date(),
        retryCount: 0,
      },
      { session }
    );

    return order.insertedId;
  });
}

The flow in summary: the application inserts order + outbox in one commit; a relay publishes downstream and updates status.

5.3 Relay delivery and at-least-once

Relays retry; most stacks approximate at-least-once delivery. Consumers must assume duplicates and implement idempotency (event IDs, business keys, dedupe stores).

Exactly-once end-to-end is hard to promise without careful cooperation across broker, consumer, and side effects — treat it as a system property, not a slogan.

// `kafka` below stands in for your message-broker client; the produce() call
// shape is illustrative rather than a specific client API.
async function outboxRelayLoop() {
  const db = client.db('shop');

  const cursor = db.collection('outbox').find({ status: 'PENDING' }).batchSize(50);

  for await (const event of cursor) {
    try {
      await kafka.produce(event.eventType, event.payload);

      await db.collection('outbox').updateOne(
        { _id: event._id },
        { $set: { status: 'PUBLISHED', publishedAt: new Date() } }
      );
    } catch (error) {
      await db.collection('outbox').updateOne(
        { _id: event._id },
        {
          $inc: { retryCount: 1 },
          $set: {
            // event.retryCount is the pre-increment value read by the cursor,
            // so FAILED is recorded on the fourth consecutive failure.
            status: event.retryCount >= 3 ? 'FAILED' : 'PENDING',
            lastError: error.message,
          },
        }
      );
    }
  }
}
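On the consuming side, the matching half of at-least-once is a dedupe step keyed on an event ID. A minimal sketch, where an in-memory Set stands in for whatever durable dedupe store you actually use:

```javascript
// Consumer-side idempotency sketch: redelivered events become no-ops because
// their IDs are remembered. In production the `seen` store must be durable and
// ideally updated atomically with the handler's side effects.
function makeDedupingHandler(handle, seen = new Set()) {
  return (event) => {
    if (seen.has(event.id)) return 'duplicate';
    handle(event);
    seen.add(event.id);
    return 'processed';
  };
}
```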

6. Performance and lifetime — keep transactions short

6.1 Never mix long external I/O inside a transaction

HTTP, payments, message brokers belong outside the transaction — external latency becomes lock time.

6.2 Indexes

Unindexed reads inside transactions can devolve into collection scans and wider locks. Validate plans before go-live.

6.3 Measure lifetime

Track average / p95 / p99 transaction latency alongside WriteConflict rates to see what to fix first.

async function monitoredTransaction(callback, label = 'unnamed') {
  const startTime = Date.now();
  const result = await runTransaction(callback);
  const duration = Date.now() - startTime;

  if (duration > 5000) {
    console.error(`[CRITICAL TRANSACTION] ${label}: ${duration}ms`);
  } else if (duration > 1000) {
    console.warn(`[SLOW TRANSACTION] ${label}: ${duration}ms`);
  }

  return result;
}
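To turn raw durations like these into the p95 / p99 figures mentioned above, a nearest-rank percentile is enough for a dashboard — a sketch, assuming you record durations in an array (real deployments usually lean on their metrics library instead):

```javascript
// Nearest-rank percentile over recorded transaction durations (ms).
function percentile(samples, p) {
  if (samples.length === 0) return NaN;
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}
```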

7. Limits, checklists, and observability

Treat numeric limits (max lifetime seconds, etc.) as versioned server parameters — read the manual for the build you run.

Vendor benchmark headlines are only meaningful next to workload, hardware, and topology. Do not generalize a “% faster” line without those footnotes.

| Symptom | Likely cause | What to inspect first |
| --- | --- | --- |
| Intermittent WriteConflict | hot keys, large lock ranges | latency distribution, conflict keys, indexes |
| Frequent wtimeout | secondaries lagging, network, aggressive majority waits | replication lag, RS health, SLO vs wtimeout |
| p99 transaction latency spikes | external calls inside TX, oversized batches, scans | query plans inside TX, batch sizing |
| WiredTiger cache pressure | long transactions, large scans | cache utilization, eviction stalls, concurrent TX |

Checklist (short):

  • Prefer a replica set topology even in dev when exercising transactions.
  • Pair majority with a sensible wtimeout to avoid unbounded waits.
  • Choose readConcern / writeConcern explicitly for your consistency needs.
  • Remember restricted operations on system databases (config, admin, local).

8. Seven anti-patterns to avoid

8.1 Passing session to only some operations

You get split commits — one write visible outside the transaction while another rolls back.

8.2 Long-running transactions

Huge cursors inside one transaction stress cache and lifetime limits. Batch and shorten.

8.3 Multi-document transactions for single-document updates

Often pure overhead — question whether you need a transaction at all.

8.4 majority without wtimeout

Secondaries that stop acknowledging can block application threads indefinitely.

8.5 Surfacing retryable errors raw

Separate what withTransaction retries from what your API should translate.
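One way to keep that boundary clean is a translation table from internal error strings (like the ones thrown in section 3) to stable API responses. The mapping below is illustrative, not a fixed convention:

```javascript
// Illustrative mapping from internal error strings to API-facing errors.
// Anything unmapped falls through to a generic 500, so driver and server
// error names never leak to clients.
const API_ERRORS = {
  INSUFFICIENT_FUNDS_OR_INVALID_ACCOUNT: { status: 422, code: 'insufficient_funds' },
  TIME_SLOT_ALREADY_BOOKED: { status: 409, code: 'slot_taken' },
  STOCK_INSUFFICIENT: { status: 409, code: 'out_of_stock' },
};

function toApiError(error) {
  const key = String(error.message).split(':')[0]; // strip payloads like ':productId'
  return API_ERRORS[key] ?? { status: 500, code: 'internal_error' };
}
```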

8.6 Multi-document transactions on standalone mongod

Use a replica set topology for realistic local testing.

8.7 Writing via transactions against system databases

Keep application data in application databases.


9. Series recap — MongoDB ACID in one table

| Letter | What it protects | Typical MongoDB tools |
| --- | --- | --- |
| A | all writes succeed or none | multi-document transactions; single-document atomicity |
| C | rules and invariants | schema validation; unique indexes; app-level invariants |
| I | what concurrent readers see | snapshots; conflict policy; Read Concern |
| D | survives after commit ack | journal; checkpoints; Write Concern |

The through-line of the series: transactions are a tool — schema and boundaries that remove the need for them usually cost less to operate.


10. References — official documentation

Driver and server behavior varies by version — use the list below as a conceptual map, then verify against the manual for your deployment.


11. Closing

Multi-document transactions only become safe when read together with server parameters, driver majors, and topology. Throughput or “% faster” figures from benchmarks are not portable without their measurement assumptions.

If this series helps you build a code-review checklist and an on-call symptom table, it has done its job. Before your next change to writeConcern or transaction defaults, re-validate against staging load and the manual for that release.


Previous: Part 4 — Durability, WiredTiger, Write Concern
