Yo folks, I’m wiring up a tiny dev-tooling dashboard and trying to make the data layer resilient without flaky tests or the UI showing stale data after a retry.
const cache = new Map();

export async function getUser(id) {
  const key = `user:${id}`;
  if (cache.has(key)) return cache.get(key);
  const p = fetch(`/api/users/${id}`)
    .then(r => {
      if (!r.ok) throw new Error(`HTTP ${r.status}`);
      return r.json();
    })
    .catch(async err => {
      await new Promise(r => setTimeout(r, 200));
      return fetch(`/api/users/${id}`).then(r => r.json());
    });
  cache.set(key, p);
  return p;
}
If the first request fails and the retry succeeds, what’s a clean pattern to avoid caching a “poisoned” promise (or returning stale data) while still deduping in-flight requests?
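One common answer to this shape of problem: keep two maps, one for in-flight promises and one for verified values, and evict the in-flight entry when the promise settles so a rejection never sticks around. Here's a minimal sketch; `getCached` and the injected `fetchJson` parameter are illustrative names (the injection just makes the dedupe logic easy to unit-test), not from any library:

```javascript
// Sketch: separate in-flight dedupe from the value cache.
// fetchJson(key) is assumed to resolve to parsed JSON or reject.
const inflight = new Map();
const values = new Map();

async function getCached(key, fetchJson) {
  if (values.has(key)) return values.get(key);      // settled, verified data only
  if (inflight.has(key)) return inflight.get(key);  // dedupe concurrent callers
  const p = fetchJson(key)
    .then(data => {
      values.set(key, data);  // write the value cache only on success
      return data;
    })
    .finally(() => inflight.delete(key)); // a rejected promise is never re-served
  inflight.set(key, p);
  return p;
}
```

Because the in-flight entry is deleted in `finally`, a failed attempt leaves both maps clean and the next caller triggers a fresh fetch, while concurrent callers during one attempt still share a single request.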
Separately: make sure the cache key includes every request variant that affects the response (method, headers like auth/accept, query params), otherwise retries can “poison” a shared entry with the wrong payload.
If your cache key is only the URL, a retry can stomp the entry with a totally different response.
Key it on method + query + Accept/Auth headers (and body hash for POST), and only write to cache after a verified 2xx JSON parse so a flaky attempt doesn’t poison it.
Also add a request id or attempt number into your in-flight map so concurrent retries dedupe instead of racing, and use conditional requests (ETag/If-None-Match) so you never overwrite a fresh cache entry with an older payload.
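The conditional-request part can be a small revalidation helper; `revalidate` is an illustrative name, and it takes an injected fetch-like function so the 304 path is testable without a network:

```javascript
// Sketch: ETag revalidation. entry is assumed to be { data, etag } or undefined.
async function revalidate(entry, url, fetchFn) {
  const headers = entry?.etag ? { "If-None-Match": entry.etag } : {};
  const res = await fetchFn(url, { headers });
  if (res.status === 304) return entry; // server confirms our copy is still fresh
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return { data: await res.json(), etag: res.headers.get("etag") };
}
```

A 304 hands back the existing entry untouched, so a retry can never replace fresh data with an older payload the server no longer vouches for.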
Late retry responses can arrive out of order, so guard the cache write with a monotonic fetchedAt and only commit if it’s newer than what’s already stored.
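That guard is a few lines; `commitIfNewer` is an illustrative name for a compare-and-commit on the stored timestamp:

```javascript
// Sketch: only commit a response if it's newer than what's already cached,
// so a slow retry arriving late can't clobber fresher data.
// Store shape assumed: Map of key -> { data, fetchedAt }.
function commitIfNewer(store, key, data, fetchedAt) {
  const existing = store.get(key);
  if (existing && existing.fetchedAt >= fetchedAt) return false; // stale write, drop it
  store.set(key, { data, fetchedAt });
  return true;
}
```

Stamp `fetchedAt` when the attempt *starts* (not when it resolves) if you want a retry of attempt 1 that lands after attempt 2 to lose deterministically.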