The first tutorial covered native PostgreSQL partitioning mechanics. This one focuses on the operational payoff — using partitioning for data lifecycle management.
The scenario: An application generates short-lived activation codes with a 12-hour TTL. A nightly job deletes all expired codes with DELETE FROM ... WHERE expires_at < now(), scanning and deleting rows one by one. With range partitioning on expires_at, expired codes naturally land in "cold" partitions that can be dropped instantly — no row scanning, no WAL generation, no vacuum pressure.
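To make the payoff concrete, here are the two approaches side by side. The partition name below is hypothetical — we'll create real ones shortly:

```sql
-- Old approach: row-by-row delete. Scans the index, writes a WAL record
-- for every row, and leaves dead tuples behind for vacuum to clean up.
DELETE FROM dashboard_activation_codes WHERE expires_at < now();

-- Partitioned approach: drop the whole cold partition. A near-instant
-- metadata operation, independent of how many rows the partition holds.
DROP TABLE dashboard_activation_codes_p20260314;
```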
We'll use pg_partman to automate partition creation and retention.
Requirements: PostgreSQL 13 or later. pg_partman must be installed as an extension — it's pre-installed on managed services like Crunchy Bridge and available as a system package on most Linux distributions (e.g. postgresql-16-partman).
Create a fresh database and connect to it:
psql -c "CREATE DATABASE activation_code_demo"
psql -d activation_code_demo
Enable pg_partman:
CREATE EXTENSION IF NOT EXISTS pg_partman;
gen_random_uuid() is built-in since PostgreSQL 13 — no pgcrypto extension needed.
For reference, here's a typical non-partitioned schema for this table. Don't run this — it's just for comparison with the partitioned version below.
-- From db/structure.sql (reference only — do not run)
CREATE TABLE public.dashboard_activation_codes (
id uuid DEFAULT gen_random_uuid() NOT NULL,
code character varying NOT NULL,
expires_at timestamp without time zone NOT NULL,
created_at timestamp without time zone NOT NULL,
updated_at timestamp without time zone NOT NULL
);
ALTER TABLE ONLY public.dashboard_activation_codes
ADD CONSTRAINT dashboard_activation_codes_pkey PRIMARY KEY (id);
CREATE UNIQUE INDEX index_dashboard_activation_codes_on_code
ON public.dashboard_activation_codes USING btree (code);
CREATE INDEX index_dashboard_activation_codes_on_expires_at
ON public.dashboard_activation_codes USING btree (expires_at);

Now the partitioned version:
CREATE TABLE dashboard_activation_codes (
id uuid DEFAULT gen_random_uuid() NOT NULL,
code varchar NOT NULL,
expires_at timestamp NOT NULL,
created_at timestamp NOT NULL DEFAULT now(),
updated_at timestamp NOT NULL DEFAULT now(),
PRIMARY KEY (id, expires_at)
) PARTITION BY RANGE (expires_at);
CREATE UNIQUE INDEX idx_dac_code_expires ON dashboard_activation_codes (code, expires_at);
CREATE INDEX idx_dac_expires ON dashboard_activation_codes (expires_at);

Why the primary key changed: PostgreSQL requires the partition key to be part of every unique constraint, including the primary key. So PRIMARY KEY (id) becomes PRIMARY KEY (id, expires_at), and the unique index on code becomes UNIQUE (code, expires_at).
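If you try to keep PRIMARY KEY (id) alone, PostgreSQL rejects the table at creation time. A minimal demonstration (table name is throwaway; the exact error wording may vary slightly between versions):

```sql
-- This fails: the unique constraint does not include the partition key.
CREATE TABLE bad_example (
    id uuid PRIMARY KEY,
    expires_at timestamp NOT NULL
) PARTITION BY RANGE (expires_at);
-- ERROR:  unique constraint on partitioned table must include all partitioning columns
```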
Why loosening uniqueness is acceptable here: the unique index now only rejects duplicate (code, expires_at) pairs, so the database no longer guarantees that code alone is unique. But code holds random 8-character strings with a 12-hour TTL — the odds of two identical codes being live at the same time are negligible. The id is a UUID, globally unique regardless.
Tell pg_partman to manage this table with daily partitions. We start 5 days back so we have a spread of cold partitions to work with:
SELECT public.create_parent(
p_parent_table := 'public.dashboard_activation_codes',
p_control := 'expires_at',
p_interval := '1 day',
p_premake := 2,
p_start_partition := (now() - INTERVAL '5 days')::date::text
);

Inspect the resulting partition layout:
SELECT
child.relname AS partition_name,
pg_get_expr(child.relpartbound, child.oid) AS partition_bound
FROM
pg_catalog.pg_inherits inh
JOIN pg_catalog.pg_class parent ON inh.inhparent = parent.oid
JOIN pg_catalog.pg_class child ON inh.inhrelid = child.oid
WHERE
parent.relname = 'dashboard_activation_codes'
ORDER BY
child.relname;

You should see daily partitions spanning from 5 days ago through 2 days into the future, plus a default. The exact partition names include dates (e.g. dashboard_activation_codes_p20260314) — note yours for later sections.
Insert codes spanning the past five days. The mix of expired and active codes mirrors real usage:
INSERT INTO dashboard_activation_codes (code, expires_at) VALUES
-- 5 days ago — fully expired (cold)
('AAAA1111', now() - INTERVAL '5 days'),
('AAAA2222', now() - INTERVAL '5 days' + INTERVAL '6 hours'),
-- 4 days ago — expired (cold)
('BBBB1111', now() - INTERVAL '4 days'),
('BBBB2222', now() - INTERVAL '4 days' + INTERVAL '6 hours'),
-- 3 days ago — expired (cold)
('CCCC1111', now() - INTERVAL '3 days'),
('CCCC2222', now() - INTERVAL '3 days' + INTERVAL '6 hours'),
-- 2 days ago — expired (cold)
('DDDD1111', now() - INTERVAL '2 days'),
('DDDD2222', now() - INTERVAL '2 days' + INTERVAL '6 hours'),
-- Yesterday — expired (cold)
('EEEE1111', now() - INTERVAL '1 day'),
('EEEE2222', now() - INTERVAL '1 day' + INTERVAL '6 hours'),
-- Today — recently expired and still active (warm/hot)
('FFFF1111', now() - INTERVAL '1 hour'),
('FFFF2222', now() + INTERVAL '6 hours'),
-- Tomorrow — active (hot)
('GGGG1111', now() + INTERVAL '1 day'),
('GGGG2222', now() + INTERVAL '1 day' + INTERVAL '6 hours');

Now bulk-insert 10,000 expired codes into the 5-days-ago partition — enough to see a real cost difference between DELETE and DROP:
INSERT INTO dashboard_activation_codes (code, expires_at)
SELECT
'X' || lpad(i::text, 7, '0'),
now() - INTERVAL '5 days' + (random() * INTERVAL '23 hours')
FROM generate_series(1, 10000) AS s(i);

Query expired and active codes using the same conditions as the app's scopes:
-- Mirrors: scope :expired, -> { where("expires_at < now()") }
SELECT code, expires_at FROM dashboard_activation_codes
WHERE expires_at < now()
ORDER BY expires_at
LIMIT 10;
-- Mirrors: scope :not_expired, -> { where("expires_at > now()") }
SELECT code, expires_at FROM dashboard_activation_codes
WHERE expires_at > now()
ORDER BY expires_at;

The tableoid::regclass column reveals which physical partition each row lives in:
SELECT
tableoid::regclass AS partition_name,
count(*)
FROM dashboard_activation_codes
GROUP BY tableoid
ORDER BY partition_name;

When a user re-accesses an expired activation code, the application regenerates the code and updates expires_at to push it 12 hours into the future. In a partitioned table, this moves the row from one partition to another.
First, find an expired code and note which partition it's in:
SELECT code, expires_at, tableoid::regclass AS partition
FROM dashboard_activation_codes
WHERE code = 'CCCC1111';

This should show the code in the 3-days-ago partition. Now simulate refreshing the code:
UPDATE dashboard_activation_codes
SET expires_at = now() + INTERVAL '12 hours',
updated_at = now()
WHERE code = 'CCCC1111';

Check where it lives now:
SELECT code, expires_at, tableoid::regclass AS partition
FROM dashboard_activation_codes
WHERE code = 'CCCC1111';The tableoid changed — the row moved to today's or tomorrow's partition. PostgreSQL handles this automatically: it deletes from the old partition and inserts into the new one, all within the same transaction.
See what a bulk delete of old expired codes costs:
EXPLAIN ANALYZE
DELETE FROM dashboard_activation_codes
WHERE expires_at < now() - INTERVAL '4 days';

Look at the execution time and row count. Every matching row gets a WAL entry and leaves a dead tuple for vacuum.
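You can see the vacuum debt the DELETE left behind in pg_stat_user_tables (statistics are updated asynchronously, so the numbers may lag by a moment):

```sql
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname LIKE 'dashboard_activation_codes%'
ORDER BY relname;
-- Partitions hit by the DELETE show n_dead_tup roughly equal to the
-- number of rows removed, until autovacuum cleans them up.
```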
Instead of deleting rows, drop entire cold partitions. pg_partman provides drop_partition_time() for exactly this:
SELECT public.drop_partition_time(
p_parent_table := 'public.dashboard_activation_codes',
p_retention := '4 days',
p_keep_table := false
);

This drops every partition whose entire range is older than 4 days and returns the count dropped. pg_partman detaches each partition from the parent and then drops it — a metadata operation regardless of row count. No WAL, no dead tuples.
With p_keep_table := true (the default), it detaches without dropping, so you can inspect or archive before removing.
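A detach-then-archive workflow might look like this sketch (partition name and output path are hypothetical; COPY ... TO a server-side file needs superuser or the pg_write_server_files role — psql's \copy is the client-side alternative):

```sql
-- After a retention run with p_keep_table := true, the old partition
-- still exists as a standalone table, detached from the parent.
COPY dashboard_activation_codes_p20260314
  TO '/archive/dashboard_activation_codes_p20260314.csv' WITH (FORMAT csv);

-- Once archived, drop it — the same cheap metadata operation as before.
DROP TABLE dashboard_activation_codes_p20260314;
```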
Rather than calling drop_partition_time() by hand, configure pg_partman to do it on every maintenance run. First, set the retention policy:
UPDATE public.part_config
SET retention = '2 days',
retention_keep_table = false
WHERE parent_table = 'public.dashboard_activation_codes';

retention_keep_table is the config equivalent of the p_keep_table parameter we used above.
Now let's see it in action. The manual drop_partition_time() above removed the oldest partition (5 days ago), but several cold partitions remain.
Check the per-partition row counts:
SELECT
tableoid::regclass AS partition_name,
count(*)
FROM dashboard_activation_codes
GROUP BY tableoid
ORDER BY partition_name;

Now run maintenance:
SELECT public.run_maintenance('public.dashboard_activation_codes');

Now check the layout again:
SELECT
child.relname AS partition_name,
pg_get_expr(child.relpartbound, child.oid) AS partition_bound
FROM
pg_catalog.pg_inherits inh
JOIN pg_catalog.pg_class parent ON inh.inhparent = parent.oid
JOIN pg_catalog.pg_class child ON inh.inhrelid = child.oid
WHERE
parent.relname = 'dashboard_activation_codes'
ORDER BY
child.relname;

The total partition count may go up — run_maintenance() dropped old cold partitions and created new empty future partitions based on the p_premake setting.
The partition at the retention boundary may survive — pg_partman only drops a partition when its entire range is older than the retention window. A daily partition for 2 days ago covers up to midnight — so depending on what time you run this, it may not be fully outside the 2-day window yet. It'll be dropped on the next maintenance run once it ages out completely.
In production, run_maintenance() should run on a schedule — daily is typical for this use case.
Options:
- pg_cron (if available):
SELECT cron.schedule('partman-maintenance', '0 3 * * *', $$SELECT public.run_maintenance()$$);

- External scheduler: A cron job, Sidekiq job, or Kubernetes CronJob that calls SELECT public.run_maintenance() against the database daily.
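The external-scheduler route can be as simple as a crontab entry on any host with database access. A sketch — database name, schedule, and authentication setup are placeholders for your environment:

```shell
# Run pg_partman maintenance nightly at 03:00.
# Assumes non-interactive auth (e.g. a .pgpass file) is configured.
0 3 * * * psql -d activation_code_demo -At -c "SELECT public.run_maintenance()"
```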
Clean up when you're done:

DROP TABLE dashboard_activation_codes;
DROP EXTENSION pg_partman;

Then disconnect and drop the database:
\q
psql -c "DROP DATABASE activation_code_demo"