Hey folks, I’m working on a pixel-art editor SPA and trying to add network retry plus response caching for autosave. After leaving the tab open all day, memory climbs and the UI starts stuttering.
const inflight = new Map();
const cache = new Map();

export function fetchWithRetry(url, opts = {}, retries = 3) {
  if (cache.has(url)) return Promise.resolve(cache.get(url));
  if (inflight.has(url)) return inflight.get(url);
  const ac = new AbortController();
  const p = fetch(url, { ...opts, signal: ac.signal })
    .then(r => r.json())
    .then(data => {
      cache.set(url, data);
      return data;
    })
    .catch(err => {
      if (retries > 0) return fetchWithRetry(url, opts, retries - 1);
      throw err;
    });
  inflight.set(url, p);
  return p;
}
What’s the cleanest way to structure this so retries don’t leave inflight/cache entries or AbortControllers hanging around (and ideally avoid thundering-herd refetches)?
MechaPrime
Look — the leak is your inflight Map, not the retry itself. You call inflight.set(url, p) and never delete the entry, so every distinct URL you touch sticks around for the life of the tab. Worse, the recursive retry re-enters fetchWithRetry, sees the stale inflight entry for that URL, and hands back the very promise whose .catch it is running inside — so the “retry” never actually refetches, and every later caller keeps getting that same doomed promise.
Make one promise per key, and guarantee cleanup in a finally. Then make cache bounded (TTL or “only cache latest per doc”), because “cache by raw URL” in an autosave loop is basically unbounded growth if the URL ever varies.
const inflight = new Map();
const cache = new Map(); // key -> { value, exp }

function getCache(key) {
  const e = cache.get(key);
  if (!e) return null;
  if (e.exp && e.exp <= Date.now()) {
    cache.delete(key);
    return null;
  }
  return e.value;
}

function setCache(key, value, ttlMs) {
  cache.set(key, { value, exp: ttlMs ? Date.now() + ttlMs : 0 });
}

export function fetchWithRetry(key, url, opts = {}, retries = 3, ttlMs = 30_000) {
  const cached = getCache(key);
  if (cached !== null) return Promise.resolve(cached);

  const existing = inflight.get(key);
  if (existing) return existing;

  const p = (async () => {
    let err;
    for (let attempt = 0; attempt <= retries; attempt++) {
      try {
        const r = await fetch(url, opts);
        if (!r.ok) throw new Error(`HTTP ${r.status}`);
        const data = await r.json();
        setCache(key, data, ttlMs);
        return data;
      } catch (e) {
        err = e;
      }
    }
    throw err;
  })().finally(() => {
    inflight.delete(key);
  });

  inflight.set(key, p);
  return p;
}
On the AbortController: don’t allocate one per call unless you’re actually going to abort (route change, new autosave supersedes old, etc.). If you do need it for autosave, I’d keep one controller per document and abort the previous inflight save when a new one starts, so you don’t pile up work when the network gets flaky.
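Sketched out, that per-document pattern looks something like this — `saveDoc`, the `/api/docs/` endpoint, and the PUT body shape are all assumptions for illustration, not part of your API:

```javascript
// Hypothetical autosave wrapper: one AbortController per document.
// Starting a new save for the same doc aborts the previous inflight one.
const saveControllers = new Map(); // docId -> AbortController

function beginSave(docId) {
  saveControllers.get(docId)?.abort(); // supersede the older save, if any
  const ac = new AbortController();
  saveControllers.set(docId, ac);
  return ac;
}

async function saveDoc(docId, payload) {
  const ac = beginSave(docId);
  try {
    const r = await fetch(`/api/docs/${docId}`, { // endpoint is illustrative
      method: 'PUT',
      body: JSON.stringify(payload),
      signal: ac.signal,
    });
    return r.ok;
  } finally {
    // Only delete if we're still the current controller for this doc,
    // so the Map never holds more than one entry per open document.
    if (saveControllers.get(docId) === ac) saveControllers.delete(docId);
  }
}
```

The point is that the Map is bounded by the number of open documents, and an aborted save releases its controller and request instead of piling up behind a flaky network.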
Caching r.json() in an autosave loop can quietly pin a lot of memory — you’re holding the fully-parsed object graph, not some lightweight response blob. I hit this with a “small” draft payload that had nested arrays; memory crept up, GC got spiky, and the UI started feeling sticky even with a TTL because we still had multiple versions alive at once.
I’m not sure that kirupa.com link is really aimed at this exact failure mode, but the practical takeaway is the same: cache deliberately (cap entries, avoid keeping whole parsed payloads when you only need a slice, and watch out for “latest + a few retries” accidentally becoming “latest + N copies”).
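A minimal sketch of “cap entries”: JS Maps iterate in insertion order, so you can evict the oldest key once you pass a limit. `MAX_ENTRIES` here is a made-up number — size it to how many docs/versions you realistically keep live:

```javascript
// Size-capped cache sketch: evicts the oldest-inserted key past the cap.
// Re-setting a key moves it to "newest" so active docs aren't evicted first.
const MAX_ENTRIES = 50; // assumption — tune for your app
const boundedCache = new Map();

function setCapped(key, value) {
  boundedCache.delete(key); // re-insert so this key becomes newest
  boundedCache.set(key, value);
  if (boundedCache.size > MAX_ENTRIES) {
    const oldest = boundedCache.keys().next().value; // insertion order
    boundedCache.delete(oldest);
  }
}
```

This is insertion-order eviction, not true LRU (reads don't refresh a key), but for an autosave cache where writes dominate it keeps growth strictly bounded.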