
INSERT / UPDATE / DELETE

Add new rows to a table.

INSERT INTO products (id, name, price, category)
VALUES (1, 'Wireless Mouse', 29.99, 'electronics');

Insert multiple rows in a single statement:

INSERT INTO products (id, name, price, category) VALUES
(1, 'Wireless Mouse', 29.99, 'electronics'),
(2, 'Mechanical Keyboard', 89.99, 'electronics'),
(3, 'USB-C Hub', 49.99, 'accessories'),
(4, 'Standing Desk', 599.00, 'furniture');

Columns with DEFAULT values can be omitted:

CREATE TABLE events (
  id BIGINT PRIMARY KEY,
  name TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  is_active BOOLEAN DEFAULT TRUE
);

-- created_at and is_active use their defaults
INSERT INTO events (id, name) VALUES (1, 'user_signup');
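
Standard SQL also provides a DEFAULT keyword for individual values; assuming ParticleDB supports it, a default can be requested explicitly in the value list:

```sql
-- Explicitly request the column defaults defined above
INSERT INTO events (id, name, created_at, is_active)
VALUES (2, 'user_login', DEFAULT, DEFAULT);
```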

Populate a table from a query result:

-- Archive old orders
INSERT INTO orders_archive
SELECT * FROM orders
WHERE created_at < CURRENT_TIMESTAMP - 7776000000000; -- 90 days in microseconds

-- Insert aggregated data
INSERT INTO daily_summary (date, total_orders, total_revenue)
SELECT
  DATE_TRUNC('day', created_at) AS date,
  COUNT(*) AS total_orders,
  SUM(total) AS total_revenue
FROM orders
GROUP BY DATE_TRUNC('day', created_at);

Return the inserted rows (useful for getting generated values):

INSERT INTO products (id, name, price)
VALUES (10, 'New Widget', 24.99)
RETURNING *;

INSERT INTO products (id, name, price)
VALUES (11, 'Another Widget', 19.99)
RETURNING id, name;

Handle conflicts on unique or primary key constraints without raising errors.

Skip rows that would violate a uniqueness constraint:

-- Insert if not exists, skip if id already taken
INSERT INTO products (id, name, price)
VALUES (1, 'Wireless Mouse', 29.99)
ON CONFLICT (id) DO NOTHING;

Update the existing row on conflict:

-- Upsert: insert or update price if product already exists
INSERT INTO products (id, name, price, updated_at)
VALUES (1, 'Wireless Mouse', 24.99, NOW())
ON CONFLICT (id) DO UPDATE SET
  price = EXCLUDED.price,
  updated_at = EXCLUDED.updated_at;

The special EXCLUDED table refers to the row that was proposed for insertion.
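
Because the target table's own name still refers to the existing row, the two can be combined, for example to accumulate a counter. The page_views table here is hypothetical:

```sql
-- Add the incoming count to the stored count on conflict
INSERT INTO page_views (page, views)
VALUES ('/home', 1)
ON CONFLICT (page) DO UPDATE SET
  views = page_views.views + EXCLUDED.views;
```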

Only update if a condition is met:

-- Only update price if the new price is lower
INSERT INTO products (id, name, price)
VALUES (1, 'Wireless Mouse', 24.99)
ON CONFLICT (id) DO UPDATE SET
  price = EXCLUDED.price
WHERE EXCLUDED.price < products.price;

Target a named constraint instead of columns:

INSERT INTO products (id, name, price)
VALUES (1, 'Wireless Mouse', 24.99)
ON CONFLICT ON CONSTRAINT products_pkey DO UPDATE SET
  price = EXCLUDED.price;

Modify existing rows.

UPDATE products
SET price = 19.99
WHERE id = 1;

-- Update several columns at once
UPDATE products
SET
  price = 24.99,
  category = 'peripherals',
  updated_at = NOW()
WHERE id = 1;

-- 10% price increase for all electronics
UPDATE products
SET price = price * 1.10
WHERE category = 'electronics';

-- Conditional update using CASE
UPDATE employees
SET bonus = CASE
    WHEN performance_rating >= 9 THEN salary * 0.20
    WHEN performance_rating >= 7 THEN salary * 0.10
    ELSE salary * 0.05
  END
WHERE review_year = 2025;

-- Set each product's price to the category average
UPDATE products p
SET price = (
  SELECT AVG(price)
  FROM products
  WHERE category = p.category
);
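
Assuming the RETURNING clause shown earlier for INSERT also applies to UPDATE (common in PostgreSQL-style dialects), an update can report the rows it changed:

```sql
-- Return the new prices of all updated rows
UPDATE products
SET price = price * 1.10
WHERE category = 'electronics'
RETURNING id, name, price;
```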

Remove rows from a table.

DELETE FROM products WHERE id = 1;

-- Delete products that haven't been ordered in the last 2 years
-- (note: this also removes products that were never ordered at all)
DELETE FROM products
WHERE id NOT IN (
  SELECT DISTINCT product_id
  FROM order_items
  WHERE created_at > CURRENT_TIMESTAMP - 63072000000000 -- 2 years in microseconds
);

-- Delete all rows matching multiple conditions
DELETE FROM sessions
WHERE expires_at < CURRENT_TIMESTAMP
  AND is_active = FALSE;
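
Likewise, if RETURNING is available on DELETE (an assumption based on the INSERT examples above), removed rows can be captured for logging or archiving:

```sql
-- See exactly which sessions were removed
DELETE FROM sessions
WHERE expires_at < CURRENT_TIMESTAMP
RETURNING id, expires_at;
```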

To remove all rows from a table, TRUNCATE is faster than DELETE:

-- Slow: scans and deletes row by row
DELETE FROM temp_data;
-- Fast: drops and recreates storage
TRUNCATE TABLE temp_data;

High-performance bulk data transfer between ParticleDB and files.

Load data from a file into a table:

-- Import CSV (default: comma-separated, with header row)
COPY products FROM '/data/products.csv' WITH (FORMAT CSV);
-- Import CSV with custom delimiter and no header
COPY products FROM '/data/products.tsv' WITH (FORMAT CSV, DELIMITER '\t', HEADER FALSE);
-- Import specific columns
COPY products (id, name, price) FROM '/data/products.csv' WITH (FORMAT CSV);
-- Import Parquet
COPY products FROM '/data/products.parquet' WITH (FORMAT PARQUET);
-- Import JSON (one JSON object per line, newline-delimited)
COPY products FROM '/data/products.json' WITH (FORMAT JSON);

Export data from a table to a file:

-- Export to CSV
COPY products TO '/data/export.csv' WITH (FORMAT CSV);
-- Export to Parquet (columnar, compressed)
COPY products TO '/data/export.parquet' WITH (FORMAT PARQUET);
-- Export to JSON
COPY products TO '/data/export.json' WITH (FORMAT JSON);
-- Export specific columns
COPY products (id, name, price) TO '/data/export.csv' WITH (FORMAT CSV);

Option    | Default | Description
----------|---------|---------------------------------------
FORMAT    | CSV     | File format: CSV, PARQUET, or JSON
HEADER    | TRUE    | Whether the CSV file has a header row
DELIMITER | ,       | Column delimiter character for CSV
  • Parquet is the fastest format for both import and export — columnar layout matches ParticleDB’s internal storage
  • CSV with header is the most portable format
  • JSON (newline-delimited) is useful for semi-structured data
  • For large imports, COPY FROM is significantly faster than many individual INSERT statements
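
In PostgreSQL's COPY, the source can be a query rather than a table name; assuming ParticleDB accepts the same form, a filtered or aggregated result can be exported directly:

```sql
-- Export only electronics, if COPY (query) is supported
COPY (SELECT id, name, price FROM products WHERE category = 'electronics')
TO '/data/electronics.csv' WITH (FORMAT CSV);
```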

All DML statements execute within a transaction. Without an explicit BEGIN, each statement runs in its own auto-committed transaction.

BEGIN;
INSERT INTO accounts (id, name, balance) VALUES (1, 'Alice', 1000);
INSERT INTO accounts (id, name, balance) VALUES (2, 'Bob', 500);
-- Transfer $100 from Alice to Bob
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

If any statement fails, use ROLLBACK to undo all changes since BEGIN:

BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- Oops, wrong amount
ROLLBACK;
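
Savepoints, if ParticleDB supports them (an assumption; they are standard SQL), allow undoing part of a transaction without abandoning the rest:

```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
SAVEPOINT before_credit;
UPDATE accounts SET balance = balance + 100 WHERE id = 99; -- wrong account
ROLLBACK TO SAVEPOINT before_credit; -- undo only the bad update
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
```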

See Transactions for isolation levels and concurrency control.