How do you set performance guardrails during a framework migration without hiding real regressions?

What’s up everyone? I’m mid-migration from a homegrown SPA to a proper framework, and I’m trying to hold a hard performance budget while we swap routing/state patterns and old code paths linger.

If we loosen the budget, we risk shipping slow screens and never clawing it back; if we tighten it, the dashboards light up with “regressions” that are just measurement noise or missing instrumentation. What’s a pragmatic way to set guardrails and observability so we catch real performance hits without blocking the migration on false alarms?

Sarah

I’d split the routes first, honestly. We did something similar and the cleanest thing was tagging each screen as legacy or migrated in the perf dashboard so we weren’t comparing a half-new checkout page to the old one like they’re the same thing.
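One way to sketch that tagging, assuming you control how perf samples get reported. Everything here (`MIGRATED_ROUTES`, `tagSample`, the field names) is illustrative, not a real API:

```typescript
// Tag each perf sample with the screen's migration status so dashboards
// can split legacy vs migrated instead of mixing them in one series.
type MigrationStatus = "legacy" | "migrated";

// Assumed: a hand-maintained list of routes that have been migrated.
const MIGRATED_ROUTES = new Set<string>(["/search", "/login"]);

function tagSample(route: string, durationMs: number) {
  const status: MigrationStatus = MIGRATED_ROUTES.has(route)
    ? "migrated"
    : "legacy";
  // Dashboards group by `status`, so a half-new checkout page never gets
  // compared against the old one as if they were the same thing.
  return { route, durationMs, status };
}
```

The explicit allowlist is deliberate: as routes flip, the tag flips in one place, and the dashboard split stays honest.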

Then keep one hard budget, but only fail on a real delta over a few deploys, not one noisy run. Missing data should be its own alert, not counted as a slowdown.
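A minimal sketch of the "sustained delta, missing data is its own alert" rule, under the assumption that you can pull the last N deploys' p95 per screen. All names here are made up for illustration:

```typescript
// One entry per recent deploy; null means no data landed for that deploy.
type DeployStat = { deploy: string; p95Ms: number | null };

function checkBudget(
  recent: DeployStat[], // most recent N deploys for one screen
  budgetMs: number
): "ok" | "regression" | "missing-data" {
  // Missing instrumentation fires its own alert and never counts as a
  // slowdown (or as a pass).
  if (recent.some((d) => d.p95Ms === null)) return "missing-data";
  // Only fail when every recent deploy is over budget, so one noisy run
  // doesn't trip the guardrail.
  const sustained = recent.every((d) => (d.p95Ms as number) > budgetMs);
  return sustained ? "regression" : "ok";
}
```

The `every` check is the "few deploys" part; tune N to however long a real regression would plausibly persist.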

Look — the “only fail on a real delta over a few deploys” part is where people accidentally hide regressions. I’ve had better luck failing fast on a small set of canary flows (login/checkout/search) with the budget tied to p95/p99 and a fixed traffic slice, then letting the broader dashboard be trend-only so you’re not arguing with noise.
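For what the fail-fast canary gate might look like: compute p95 from a fixed traffic slice for one flow and block the deploy the moment it blows the budget. The nearest-rank percentile method and the function names are my assumptions, not anything from a specific tool:

```typescript
// Nearest-rank p95 over a batch of duration samples (ms).
function p95(samplesMs: number[]): number {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const idx = Math.ceil(0.95 * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Gate for one canary flow (login/checkout/search): returns false to
// fail the deploy immediately, no multi-deploy smoothing here.
function canaryGate(flow: string, samplesMs: number[], budgetMs: number): boolean {
  const observed = p95(samplesMs);
  if (observed > budgetMs) {
    console.error(`${flow}: p95 ${observed}ms exceeds budget ${budgetMs}ms`);
    return false;
  }
  return true;
}
```

Keeping the sample source to a fixed traffic slice is what makes the fast-fail tolerable: the distribution is stable enough that a p95 breach on a canary flow is almost always real.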