TPC-C (OLTP)
Results
- Machine: AWS c8g.metal-48xl (192 Graviton4 cores, 384 GB RAM)
- Benchmark: TPC-C (New Order, Payment, Delivery, Order Status, Stock Level mix)
- Warehouses: equal to worker count (TPC-C standard)
- Duration: 15 seconds per run
ParticleDB vs PostgreSQL 17 — Default Settings
Both databases run with their default durability:
- ParticleDB: groupsync WAL (batched fsync)
- PostgreSQL 17: synchronous_commit = on (fsync per commit)
| Workers | ParticleDB (fast) | PostgreSQL 17 | Ratio |
|---|---|---|---|
| 1 | 7,225 TPS | 9,433 TPS | 0.8x |
| 8 | 73,176 TPS | 49,389 TPS | 1.5x |
| 16 | 110,314 TPS | 88,596 TPS | 1.2x |
| 32 | 79,737 TPS | 107,313 TPS | 0.7x |
| 64 | 78,025 TPS | 118,782 TPS | 0.7x |
ParticleDB wins at 8-16 workers (1.2-1.5x faster). PostgreSQL scales better to 32-64 workers. We’re actively working on improving high-concurrency scaling.
All ParticleDB Modes
| Workers | fast+groupsync | fast+nosync | occ+groupsync | occ+nosync |
|---|---|---|---|---|
| 1 | 7,225 | 40,732 | 6,193 | 26,588 |
| 8 | 73,176 | 152,780 | 28,938 | 31,958 |
| 16 | 110,314 | 175,568 | 45,155 | 53,763 |
| 32 | 79,737 | 115,461 | 83,902 | 97,018 |
| 64 | 78,025 | 72,060 | 75,934 | 104,900 |
PostgreSQL 17 Modes
| Workers | sync_commit=on | sync_commit=off |
|---|---|---|
| 1 | 9,433 | 9,650 |
| 8 | 49,389 | 54,939 |
| 16 | 88,596 | 102,320 |
| 32 | 107,313 | 145,679 |
| 64 | 118,782 | 119,304 |
Key Observations
- ParticleDB’s sweet spot is 8-16 workers — 1.2-1.5x faster than PG with durability
- OCC mode scales better at high concurrency than fast mode (105K at 64w vs 78K)
- WAL sync is the single-threaded bottleneck — nosync gives 5.6x at 1 worker (41K vs 7K)
- Peak throughput: 175K TPS (fast+nosync at 16 workers)
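
The WAL-sync observation above comes down to group commit: a dedicated flusher thread drains every pending commit record and pays one fsync for the whole batch instead of one per commit. A minimal sketch in Rust, using std's `mpsc` as a stand-in for the crossbeam channel the engine actually uses (the types and function names here are illustrative, not ParticleDB's API):

```rust
use std::sync::mpsc;
use std::thread;

/// One commit's WAL payload (illustrative).
struct CommitRecord {
    bytes: Vec<u8>,
}

/// Drains the channel and "fsyncs" once per batch instead of once per
/// commit. Returns (commits_flushed, fsync_count).
fn run_flusher(rx: mpsc::Receiver<CommitRecord>) -> (usize, usize) {
    let mut commits = 0;
    let mut fsyncs = 0;
    // Block for the first record, then greedily drain the backlog.
    while let Ok(first) = rx.recv() {
        let mut batch = vec![first];
        while let Ok(next) = rx.try_recv() {
            batch.push(next);
        }
        commits += batch.len();
        // In a real WAL: write all records, then issue a single fsync here.
        fsyncs += 1;
        let _bytes_written: usize = batch.iter().map(|r| r.bytes.len()).sum();
    }
    (commits, fsyncs)
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let flusher = thread::spawn(move || run_flusher(rx));
    for i in 0..1000u32 {
        tx.send(CommitRecord { bytes: i.to_le_bytes().to_vec() }).unwrap();
    }
    drop(tx); // close the channel so the flusher exits
    let (commits, fsyncs) = flusher.join().unwrap();
    println!("{} commits flushed in {} fsyncs", commits, fsyncs);
    assert!(fsyncs <= commits);
}
```

With a single worker there is never a backlog to drain, so every batch has size one and each commit still pays a full fsync — which is why nosync shows its largest gain (5.6x) at 1 worker.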
Per-Transaction Latency (1 worker)
| Transaction | PostgreSQL | ParticleDB |
|---|---|---|
| new_order | 0.16ms | 0.02ms |
| payment | 0.13ms | 0.02ms |
| delivery | 0.37ms | 0.10ms |
Roughly 4-8x lower latency on individual transactions (8x on new_order and payment, 3.7x on delivery).
What Makes ParticleDB Fast at Low Concurrency
- Arrow columnar in-memory storage — no disk I/O for reads
- Async WAL pipeline — dedicated flusher thread with crossbeam channel
- Zone maps — skip irrelevant data chunks without scanning
- SIMD-optimized operations — Arrow compute kernels with hardware acceleration
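
To illustrate the zone-map idea from the list above: keep a min/max summary per chunk and skip any chunk whose range cannot contain the predicate value. This is a self-contained sketch of the general technique, not ParticleDB's actual on-disk or in-memory layout:

```rust
/// Per-chunk min/max summary (illustrative layout).
struct ZoneMap {
    min: i64,
    max: i64,
}

fn build_zone_maps(data: &[i64], chunk: usize) -> Vec<ZoneMap> {
    data.chunks(chunk)
        .map(|c| ZoneMap {
            min: *c.iter().min().unwrap(),
            max: *c.iter().max().unwrap(),
        })
        .collect()
}

/// Count rows equal to `needle`, scanning only chunks whose
/// [min, max] range can contain it.
fn count_eq(data: &[i64], maps: &[ZoneMap], chunk: usize, needle: i64) -> usize {
    maps.iter()
        .enumerate()
        .filter(|(_, zm)| zm.min <= needle && needle <= zm.max)
        .map(|(i, _)| {
            let c = &data[i * chunk..((i + 1) * chunk).min(data.len())];
            c.iter().filter(|&&v| v == needle).count()
        })
        .sum()
}

fn main() {
    // Clustered data makes zone maps effective: most chunks are skipped.
    let data: Vec<i64> = (0..10_000).collect();
    let maps = build_zone_maps(&data, 1024);
    assert_eq!(count_eq(&data, &maps, 1024, 4242), 1);
    println!("scanned 1 of {} chunks", maps.len());
}
```

The pruning only pays off when values are clustered so that chunk ranges are narrow; on randomly ordered data every chunk's range covers the needle and nothing is skipped.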
Known Limitations
- 32+ worker regression in fast mode — shared table write lock becomes a bottleneck. Under active investigation.
- Single-worker overhead — PG is faster at 1 worker due to our per-query overhead (parsing, planning). PG uses prepared statements that skip this.
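
The prepared-statement advantage can be sketched generically: cache the parsed/planned form of a query keyed by its text, so repeated executions pay the parse/plan cost once. The types below are hypothetical stand-ins, not PostgreSQL's or ParticleDB's actual planner API:

```rust
use std::collections::HashMap;

/// Stand-in for a parsed + planned query (hypothetical).
#[derive(Clone)]
struct Plan {
    sql: String,
}

struct StatementCache {
    plans: HashMap<String, Plan>,
    parses: usize, // how many times the parse/plan cost was actually paid
}

impl StatementCache {
    fn new() -> Self {
        StatementCache { plans: HashMap::new(), parses: 0 }
    }

    /// Return the cached plan, parsing only on the first call per SQL text.
    fn prepare(&mut self, sql: &str) -> Plan {
        if let Some(p) = self.plans.get(sql) {
            return p.clone();
        }
        self.parses += 1; // the expensive path: parse + plan
        let plan = Plan { sql: sql.to_string() };
        self.plans.insert(sql.to_string(), plan.clone());
        plan
    }
}

fn main() {
    let mut cache = StatementCache::new();
    for _ in 0..1_000 {
        cache.prepare("SELECT w_tax FROM warehouse WHERE w_id = $1");
    }
    // 1,000 executions, but the parse/plan cost is paid once.
    assert_eq!(cache.parses, 1);
    println!("parsed {} time(s) for 1000 executions", cache.parses);
}
```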
Methodology
- Both databases on the same machine, same session, idle (no competing workloads)
- PostgreSQL tuned with shared_buffers=64GB (proportional to RAM)
- ParticleDB uses default settings (no special tuning)
- pgbench for PostgreSQL, custom Rust TPC-C bench for ParticleDB (both use PG wire protocol)
- Warehouses scale with workers (TPC-C standard: 1 warehouse per worker minimum)
Formal Correctness
These performance numbers come with mathematical correctness guarantees:
- TLA+ formally verified WAL crash recovery (833 states, complete proof)
- TLA+ formally verified transaction model (331K states, complete proof within the checked bounds)
- Hermitage 14/14 isolation anomaly tests pass