A Laravel Artisan command built for the Curitics Care-Management app to stress-test it at production scale and beyond. Generates N new patient records plus proportional related data (medication refills, conditions, care gaps, notes, tasks, plans) so performance benchmarks exercise the queries that actually get slow in prod.
Command lives at: `app/Console/Commands/Performance/VolumeSeedCommand.php`
Production had ~500K patients when this work started. The goal was to find out what happens at 1M, 3M, and beyond — before prod got there.
Factories don't scale (hours for 1M patients, effectively infinite for 60M+ related rows). Seeding only patients with empty relations hides the real bottlenecks: the slow queries in prod are counts and joins against the big per-patient tables. A patient list loads fine with 1M empty patients — the bottlenecks only surface when each patient has 15 care gaps, 24 medication refills, and 17 conditions to pull from.
- Creates N new patients + related records at production ratios (24 refills, 17 conditions, 15 care gaps, 2 notes, 1 task, 1 plan per patient).
- 1M patients → ~60M related rows in ~18 minutes on a local Mac.
- Additive, local-only.
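The speed comes from skipping factories in favor of chunked multi-row inserts. A minimal sketch of the approach, under assumed names (the column names, `CHUNK` size, and `RATIOS` map are illustrative, not the command's actual code):

```php
<?php
// Hypothetical sketch of chunked bulk seeding; requires the Laravel framework.
use Illuminate\Support\Facades\DB;

const RATIOS = ['medication_refills' => 24, 'conditions' => 17, 'care_gaps' => 15];
const CHUNK  = 5000;

function seedPatients(int $total): void
{
    for ($done = 0; $done < $total; $done += CHUNK) {
        $batch = min(CHUNK, $total - $done);

        // One multi-row INSERT per chunk instead of one factory call per row.
        $rows = [];
        for ($i = 0; $i < $batch; $i++) {
            $rows[] = ['name' => 'Seed Patient '.($done + $i), 'created_at' => now()];
        }
        DB::table('patients')->insert($rows);

        // Re-query the ids of the rows just inserted (bulk insert returns none).
        $ids = DB::table('patients')->orderByDesc('id')->limit($batch)->pluck('id');

        // Related rows at fixed per-patient ratios, chunked again so a single
        // INSERT never exceeds driver/parameter limits.
        foreach (RATIOS as $table => $perPatient) {
            $related = [];
            foreach ($ids as $id) {
                for ($j = 0; $j < $perPatient; $j++) {
                    $related[] = ['patient_id' => $id];
                }
            }
            foreach (array_chunk($related, CHUNK) as $slice) {
                DB::table($table)->insert($slice);
            }
        }
    }
}
```

Each chunk costs a handful of queries regardless of row count, which is what turns "hours" of factory work into minutes.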
Most benchmarking ran on `cm_20260402` — a clone of prod at ~500K patients (real data shape). Volume-seeded DBs (1M–5M) were the scaling probe. Benchmarks run via our internal `app:performance-snapshot` command against the monitored routes.
- At current prod shape (~500K), multiple routes were already underperforming: Care Plan at 55, Task List at 87 and dropping, and Dashboard widgets running unbounded count queries on every load.
- 500K → 1M volume runs showed Task List and Plan List dropping 9-11 points each, driven by full-table scans on count queries. Extrapolation put multiple routes in the 70s at 2M.
- Cold Dashboard queries at 3M would hit ~800ms, running on every load.
A handful of findings delivered most of the impact:
- Care Plan: 55 → 83. One index on the
notestable collapsed an 815ms polymorphic lookup to <20ms. Biggest single finding. - Dashboard: degrading → perfect 100 at 5M. Widgets were running unbounded count queries ("open cases," "open tasks," "overdue tasks") on every page load — ~150-300ms each at 500K, projecting to ~800ms at 3M. Wrapped them in tenant-scoped 5-minute caches; fixed the cold-cache edge case by aligning local/prod warmup behavior.
- Member Explorer patient count: 151ms → 16ms via partial index on active non-training members.
- Master-status lookups: ~10x reduction per page load. New `CachesIdByName` trait (per-request + Redis) stopped "find ID for status 'Closed'" from running 10+ times per page, across 5+ routes.
- Patient detail page: ~5s → ~400ms on worst shapes. N+1 eliminations, lazy-loading of heavy tabs, consolidated eager loads.
- Index audit: added `patient_id` FK indexes on 23 tables, GIN trigram indexes for ILIKE lookups, dropped redundant `medication_refills` indexes.
- Frontend Lighthouse: 56 → 98 across routes. Most pages scored 56 at baseline (unbuilt assets, synchronous chart rendering, 200KB Maps GL loaded globally). `npm run build` discipline, chart defer-loading, disabling 5s polling on 28 widgets, scoping Maps GL to the map view, and a 239KB GIF → 63KB WebP swap got Dashboard to 90 and every other monitored route to 97-99.
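The index findings above could be expressed as a Laravel migration along these lines. This is a sketch: only `notes`, `patient_id`, and `medication_refills` come from the findings; the morph columns, member columns, and index names are assumptions, and the actual DDL may differ.

```php
<?php
// Hypothetical migration sketch; requires Laravel and PostgreSQL.
use Illuminate\Database\Migrations\Migration;
use Illuminate\Support\Facades\DB;

return new class extends Migration {
    public function up(): void
    {
        // Composite index for the polymorphic notes lookup
        // (assumed morph columns notable_type/notable_id).
        DB::statement('CREATE INDEX notes_notable_idx ON notes (notable_type, notable_id)');

        // Partial index covering only active, non-training members,
        // so the Member Explorer count scans a small index.
        DB::statement('CREATE INDEX members_active_idx ON members (id) WHERE active AND NOT is_training');

        // GIN trigram index so ILIKE '%term%' searches can use an index
        // (requires the pg_trgm extension).
        DB::statement('CREATE EXTENSION IF NOT EXISTS pg_trgm');
        DB::statement('CREATE INDEX patients_name_trgm_idx ON patients USING gin (last_name gin_trgm_ops)');
    }

    public function down(): void
    {
        DB::statement('DROP INDEX IF EXISTS notes_notable_idx');
        DB::statement('DROP INDEX IF EXISTS members_active_idx');
        DB::statement('DROP INDEX IF EXISTS patients_name_trgm_idx');
    }
};
```

The partial and trigram indexes are Postgres-specific, hence raw `DB::statement` calls rather than the schema builder.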
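The caching fixes share one shape: a tenant-scoped `Cache::remember` around each widget count, and a small trait layering a per-request memo over Redis for master-status ID lookups. A sketch under assumed names — only `CachesIdByName` appears above; the key formats, TTLs, model, and `open()` scope are illustrative:

```php
<?php
// Hypothetical sketch; requires Laravel with a Redis-backed cache.
use Illuminate\Support\Facades\Cache;

// Widget counts: tenant-scoped key, 5-minute TTL, so "open tasks" etc.
// stop running an unbounded COUNT on every page load.
function openTaskCount(int $tenantId): int
{
    return Cache::remember(
        "tenant:{$tenantId}:open_task_count",
        300,
        fn () => \App\Models\Task::where('tenant_id', $tenantId)->open()->count()
    );
}

// CachesIdByName: resolve "find ID for status 'Closed'" once per request,
// falling back to Redis, then the database.
trait CachesIdByName
{
    private static array $idByName = []; // per-request memo

    public static function idByName(string $name): int
    {
        return static::$idByName[$name] ??= Cache::remember(
            'id-by-name:'.static::class.':'.$name,
            3600,
            fn () => static::where('name', $name)->value('id')
        );
    }
}
```

Master-status rows change rarely, so a long Redis TTL is safe, and the static array makes repeat lookups within one request free.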
| Category | Count |
|---|---|
| New indexes (partial, composite, GIN, FK) | 10 |
| Caching additions (per-request, tenant-scoped Redis, widget) | 10 |
| Query rewrites (N+1, joins, scope fixes) | 5 |
| Eager-load / lazy-load consolidations | 4 |
| Frontend optimizations (defer-load, poll disable, asset swaps) | 5 |
| Total findings applied | 34 |
- 10x volume increase over prod, zero measurable degradation. Overall score flat, every route within ±1 point.
- Zero slow queries (>100ms) anywhere. Max ~50ms worst route, <30ms on most. Pre-fix we had 800ms queries on current prod shape.
- Dashboard, Task List, Plan List, Member Explorer all hold 96-100 — the routes headed for the cliff are now among the best.
The app has roughly 10x the headroom it had at the start of this effort.
Patients are seeded at production ratios — the command doesn't increase record density on existing patients. It answers "what happens at 5M patients, each with ~15 care gaps" but not "what if each patient has 30 care gaps" (e.g. a new ETL feed). Separate axis if we ever need it.