High-throughput transactions, blazing-fast analytics, instant key-value access, low-latency vector search, AI agent and workflow orchestration. One powerful engine.
docker run -p 5432:5432 -p 8080:8080 particledbai/particledb

ClickBench 43-query suite on AWS c8g.metal-48xl. Lower is better. Full results at /benchmarks/clickbench
One command to install. Connect with any PostgreSQL driver. Full SQL, vectors, and KV from line one.
Downloads the latest binary for your platform. macOS, Linux, Docker. No dependencies.
# install ParticleDB
$ curl -fsSL get.particledb.ai | bash

# start the server
$ particledb start

# that's it — PG wire on :5432
Use the ParticleDB SDK over REST or gRPC. Zero external dependencies.
from particledb import ParticleDB

db = ParticleDB(host="localhost", port=8080)

rows = db.query("SELECT * FROM users WHERE age > $1", [25])
for row in rows:
    print(row["name"])
HNSW and IVFFlat indexes. Semantic search, RAG, hybrid queries — all in standard SQL.
CREATE TABLE docs (
  id SERIAL PRIMARY KEY,
  content TEXT,
  embedding VECTOR(1536)
);

CREATE INDEX ON docs USING hnsw (embedding vector_cosine_ops);

SELECT content FROM docs
ORDER BY embedding <=> $1
LIMIT 10;
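The `<=>` operator orders rows by cosine distance, assuming it follows the pgvector convention that pairs `<=>` with `vector_cosine_ops` (an assumption here, not something ParticleDB's docs state). In plain Python, that distance is:

```python
def cosine_distance(a, b):
    # cosine distance = 1 - cosine similarity:
    # 0.0 for identical directions, 1.0 for orthogonal, 2.0 for opposite
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return 1.0 - dot / (norm_a * norm_b)

print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # 0.0 (same direction)
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 (orthogonal)
```

Smaller distances mean closer vectors, which is why the query sorts ascending and takes the top 10.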
Stop running separate systems for transactions, analytics, vectors, and caching.
MVCC with snapshot isolation. Concurrent reads never block writes. WAL for crash recovery.
BEGIN;
INSERT INTO orders (user_id, total) VALUES (42, 99.95);
UPDATE inventory SET qty = qty - 1 WHERE sku = 'PDB-100';
COMMIT;
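What "concurrent reads never block writes" looks like in practice, as a two-session sketch (assuming PostgreSQL-style snapshot semantics; illustrative, not captured output from ParticleDB):

```sql
-- Session A
BEGIN;
UPDATE inventory SET qty = qty - 1 WHERE sku = 'PDB-100';
-- transaction still open, row version held

-- Session B, concurrently
SELECT qty FROM inventory WHERE sku = 'PDB-100';
-- returns immediately with the last committed qty;
-- the reader sees a consistent snapshot and never waits on A

-- Session A
COMMIT;
-- reads after this point see the decremented qty
```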
Arrow columnar format with zone maps and dictionary encoding. Grouped aggregations in single-digit milliseconds.
SELECT country, COUNT(*), AVG(revenue)
FROM events
WHERE created_at > '2026-01-01'
GROUP BY country
ORDER BY 2 DESC;

-- 10M rows: ~3.4ms
-- 100M rows: ~24ms
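Dictionary encoding, one of the techniques named above, replaces repeated column values with small integer codes so scans and aggregations touch far less data. A generic sketch of the idea (not ParticleDB's actual storage code):

```python
def dictionary_encode(values):
    # assign each distinct value a small integer code, in first-seen order
    dictionary, codes = {}, []
    for v in values:
        if v not in dictionary:
            dictionary[v] = len(dictionary)
        codes.append(dictionary[v])
    return codes, list(dictionary)

codes, dictionary = dictionary_encode(["US", "DE", "US", "US", "FR"])
print(codes)       # [0, 1, 0, 0, 2]
print(dictionary)  # ['US', 'DE', 'FR']
```

A `GROUP BY country` over the encoded column can aggregate on the integer codes and only look up the strings once per group at the end.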
Full RESP protocol. Connect with redis-cli, Jedis, redis-py. Strings, lists, sets, sorted sets, hashes.
# connect with any Redis client
$ redis-cli -p 6379

SET session:abc '{"user":42}'
OK

GET session:abc
'{"user":42}'

LPUSH queue:tasks "process-order"
ZADD leaderboard 9500 "player:42"
Drop-in PG wire protocol. psql, pgAdmin, JDBC, SQLAlchemy, Prisma, Drizzle, TypeORM.
# psql
$ psql -h localhost -p 5432

# SQLAlchemy
engine = create_engine("postgresql://localhost/main")

# Prisma
DATABASE_URL="postgresql://localhost:5432/main"
Built-in vector search, similarity joins, and AI functions. No external vector DB needed.
-- Hybrid search: keyword + vector
SELECT title, content
FROM articles
WHERE content ILIKE '%kubernetes%'
ORDER BY embedding <=> $1
LIMIT 10;
One install script. Start the server. Connect with psql. Run your first query.
import psycopg2

# connect — same as PostgreSQL
conn = psycopg2.connect("host=localhost dbname=main")
cur = conn.cursor()

# create a table
cur.execute("""
    CREATE TABLE IF NOT EXISTS users (
        id SERIAL PRIMARY KEY,
        name TEXT NOT NULL,
        email TEXT UNIQUE,
        embedding VECTOR(384)
    )
""")

# insert data
cur.execute("""
    INSERT INTO users (name, email)
    VALUES (%s, %s)
    RETURNING id
""", ["Alice", "alice@example.com"])
user_id = cur.fetchone()[0]
conn.commit()

# query
cur.execute("SELECT * FROM users")
for row in cur.fetchall():
    print(row)

conn.close()
Connect how you want. Every protocol talks to the same engine, same data, same ACID guarantees.
SQL, vectors, KV, and RAG — all with typed, idiomatic APIs.
from particledb import ParticleDB

db = ParticleDB("localhost:5432")

# SQL
users = db.query("SELECT * FROM users WHERE age > $1", [25])

# Vector search
results = db.vector.search("docs", query_vec, limit=10)

# KV
db.kv.set("session:abc", {"user": 42})
import { ParticleDB } from 'particledb';

const db = new ParticleDB({ host: 'localhost' });

// SQL
const users = await db.query('SELECT * FROM users');

// Vector search
const similar = await db.vector.search('docs', vec);

// KV
await db.kv.set('session:abc', { user: 42 });
Install in seconds. The quickstart covers setup, your first table, queries, and vector search — in five minutes.