
TPC-C (OLTP)

  • Machine: AWS c8g.metal-48xl (192 Graviton4 cores, 384 GB RAM)
  • Benchmark: TPC-C (New Order, Payment, Delivery, Order Status, Stock Level mix)
  • Warehouses: Equal to worker count (TPC-C standard)
  • Duration: 15 seconds per run

ParticleDB vs PostgreSQL 17 — Default Settings


Both databases run with their default durability:

  • ParticleDB: groupsync WAL (batched fsync)
  • PostgreSQL 17: synchronous_commit = on (fsync per commit)
| Workers | ParticleDB (fast) | PostgreSQL 17 | Ratio |
|---|---|---|---|
| 1 | 7,225 TPS | 9,433 TPS | 0.8x |
| 8 | 73,176 TPS | 49,389 TPS | 1.5x |
| 16 | 110,314 TPS | 88,596 TPS | 1.2x |
| 32 | 79,737 TPS | 107,313 TPS | 0.7x |
| 64 | 78,025 TPS | 118,782 TPS | 0.7x |

ParticleDB wins at 8-16 workers (1.2-1.5x faster). PostgreSQL scales better to 32-64 workers. We’re actively working on improving high-concurrency scaling.

ParticleDB's durability (groupsync vs nosync) and concurrency (fast vs occ) modes, in TPS:

| Workers | fast+groupsync | fast+nosync | occ+groupsync | occ+nosync |
|---|---|---|---|---|
| 1 | 7,225 | 40,732 | 6,193 | 26,588 |
| 8 | 73,176 | 152,780 | 28,938 | 31,958 |
| 16 | 110,314 | 175,568 | 45,155 | 53,763 |
| 32 | 79,737 | 115,461 | 83,902 | 97,018 |
| 64 | 78,025 | 72,060 | 75,934 | 104,900 |
PostgreSQL 17 with synchronous_commit on vs off, in TPS:

| Workers | sync_commit=on | sync_commit=off |
|---|---|---|
| 1 | 9,433 | 9,650 |
| 8 | 49,389 | 54,939 |
| 16 | 88,596 | 102,320 |
| 32 | 107,313 | 145,679 |
| 64 | 118,782 | 119,304 |
  1. ParticleDB’s sweet spot is 8-16 workers — 1.2-1.5x faster than PG with durability
  2. OCC mode scales better at high concurrency than fast mode (105K at 64w vs 78K)
  3. WAL sync is the single-threaded bottleneck — nosync gives 5.6x at 1 worker (41K vs 7K)
  4. Peak throughput: 175K TPS (fast+nosync at 16 workers)
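The headline ratios follow directly from the TPS tables; a quick sanity check in Rust (numbers copied from the tables above):

```rust
fn main() {
    // (workers, ParticleDB fast+groupsync TPS, PostgreSQL sync_commit=on TPS)
    let runs = [
        (1u32, 7_225.0_f64, 9_433.0_f64),
        (8, 73_176.0, 49_389.0),
        (16, 110_314.0, 88_596.0),
        (32, 79_737.0, 107_313.0),
        (64, 78_025.0, 118_782.0),
    ];
    for (workers, particledb, pg) in runs {
        // Ratio > 1.0 means ParticleDB is faster at that worker count.
        println!("{workers:>2} workers: {:.1}x", particledb / pg);
    }
    // WAL sync cost at 1 worker: fast+nosync vs fast+groupsync.
    println!("nosync speedup at 1 worker: {:.1}x", 40_732.0 / 7_225.0); // → 5.6x
}
```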
| Transaction | PostgreSQL | ParticleDB |
|---|---|---|
| new_order | 0.16ms | 0.02ms |
| payment | 0.13ms | 0.02ms |
| delivery | 0.37ms | 0.10ms |

Up to 8x lower latency on individual transactions (new_order: 0.16ms vs 0.02ms).

What Makes ParticleDB Fast at Low Concurrency

  1. Arrow columnar in-memory storage — no disk I/O for reads
  2. Async WAL pipeline — dedicated flusher thread with crossbeam channel
  3. Zone maps — skip irrelevant data chunks without scanning
  4. SIMD-optimized operations — Arrow compute kernels with hardware acceleration
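The async WAL pipeline (point 2) is what makes groupsync possible: commits hand records to a dedicated flusher thread, which drains whatever has queued up and syncs once per batch instead of once per commit. A minimal sketch, with the assumption that std::sync::mpsc stands in for the crossbeam channel the real flusher uses, to keep the example dependency-free:

```rust
use std::sync::mpsc;
use std::thread;

/// Group-commit sketch: writers send WAL records over a channel; a dedicated
/// flusher thread drains the backlog and syncs once per batch.
/// Returns (batches_flushed, records_flushed).
fn run_pipeline(commits: u32) -> (usize, usize) {
    let (tx, rx) = mpsc::channel::<Vec<u8>>();

    let flusher = thread::spawn(move || {
        let (mut batches, mut records) = (0usize, 0usize);
        // Block for the first record of a batch, then drain what's queued.
        while let Ok(first) = rx.recv() {
            let mut batch = vec![first];
            while let Ok(next) = rx.try_recv() {
                batch.push(next);
            }
            records += batch.len();
            batches += 1; // real code: write `batch` to the WAL file, one fsync here
        }
        (batches, records)
    });

    for i in 0..commits {
        tx.send(i.to_le_bytes().to_vec()).unwrap();
    }
    drop(tx); // closing the channel lets the flusher exit
    flusher.join().unwrap()
}

fn main() {
    let (batches, records) = run_pipeline(1000);
    // Every commit is flushed, but with far fewer syncs than commits.
    assert_eq!(records, 1000);
    assert!(batches <= records);
    println!("{records} records in {batches} fsync batches");
}
```

Batching amortizes the fsync across concurrent writers, which is why groupsync closes most of the gap to nosync at 8-16 workers but cannot help a single worker, where every batch holds one record.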
Known Limitations

  • 32+ worker regression in fast mode — the shared table write lock becomes a bottleneck. Under active investigation.
  • Single-worker overhead — PostgreSQL is faster at 1 worker because of our per-query overhead (parsing, planning); PG's prepared statements skip that work.
Methodology

  • Both databases ran on the same machine, in the same session, with no competing workloads
  • PostgreSQL tuned with shared_buffers=64GB (proportional to RAM)
  • ParticleDB uses default settings (no special tuning)
  • pgbench for PostgreSQL, a custom Rust TPC-C bench for ParticleDB (both use the PG wire protocol)
  • Warehouses scale with workers (TPC-C standard: at least 1 warehouse per worker)

Correctness

These performance numbers come with mathematical correctness guarantees:

  • TLA+ formally verified WAL crash recovery (833 states, complete proof)
  • TLA+ formally verified transaction model (331K states, bounded complete proof)
  • Hermitage: all 14 isolation anomaly tests pass