Hey everyone, I'm building a small web app feed and trying to do optimistic inserts while also retrying failed POSTs (for offline-ish behavior) and caching the list in memory.
When a retry succeeds after a timeout, I sometimes end up with both the temp item and the saved item (or two saved items), because the server can't fully dedupe. What's a solid client-side strategy to reconcile this without leaking memory or dropping updates?
Give each create a client-generated clientKey and treat that as the stable identity on the client, then map server IDs onto it. Right now your temp ID changes, but nothing ties "this optimistic row" to "that eventual saved row" when retries and timeouts race. In practice, I'd keep byId for rendering, but add a clientKeyToId map (or store clientKey on the item) and send the same clientKey in the POST body or headers on every retry. When a response comes back, look up the optimistic item by clientKey, update that same record in place (swap the ID, flip the status), and ignore any later responses for a clientKey you've already finalized. It's like putting a claim check on the item before you hand it to the server: no matter how many times it comes back, you know which row it belongs to.
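Here's a minimal sketch of that reconciliation, assuming a simple in-memory store; the names (`FeedStore`, `addOptimistic`, `finalize`) are illustrative, not from any library:

```typescript
type Item = { id: string; clientKey: string; status: "pending" | "saved"; text: string };

class FeedStore {
  private byId = new Map<string, Item>();
  private clientKeyToId = new Map<string, string>();

  // Optimistic insert: clientKey is the stable identity; the temp id derives from it.
  addOptimistic(clientKey: string, text: string): Item {
    const item: Item = { id: `temp-${clientKey}`, clientKey, status: "pending", text };
    this.byId.set(item.id, item);
    this.clientKeyToId.set(clientKey, item.id);
    return item;
  }

  // Called for every response, including late retries. Responses for an
  // already-finalized clientKey are ignored, so duplicates can't reappear.
  finalize(clientKey: string, serverId: string): boolean {
    const currentId = this.clientKeyToId.get(clientKey);
    if (currentId === undefined) return false; // unknown key
    const item = this.byId.get(currentId)!;
    if (item.status === "saved") return false; // already finalized: drop late response
    this.byId.delete(currentId);               // swap temp id for server id in place
    const saved: Item = { ...item, id: serverId, status: "saved" };
    this.byId.set(serverId, saved);
    this.clientKeyToId.set(clientKey, serverId);
    return true;
  }

  get(id: string): Item | undefined { return this.byId.get(id); }
  size(): number { return this.byId.size; }
}
```

The key property: a second response for the same clientKey returns false instead of inserting, so the "temp plus saved" duplicate can't happen no matter how many retries land.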
I'd still do the stable clientKey, but I'd add a tiny in-flight registry keyed by that clientKey with a pending/final state, lastSentAt, and an attempt count, plus a TTL so you can GC entries that never resolve.
I've watched "pending forever" turn into a slow leak in long-lived sessions on flaky networks, and then you're debugging why the UI gets weird after six hours. Having one place that decides "we already finalized this key, ignore late responses" and "this has been stuck for ten minutes, mark it abandoned" saves you a lot of pain.
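A rough sketch of that registry, assuming an injectable clock for testability; the class and field names (`InflightRegistry`, `lastSentAt`, `attempts`) are my own, not a library API:

```typescript
type Entry = {
  state: "pending" | "final" | "abandoned";
  lastSentAt: number; // ms epoch of the most recent send
  attempts: number;
};

class InflightRegistry {
  private entries = new Map<string, Entry>();

  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  // Call before each send (including retries); bumps the attempt count
  // but never resurrects a key that already finalized or was abandoned.
  recordSend(clientKey: string): void {
    const e = this.entries.get(clientKey);
    if (e && e.state !== "pending") return;
    this.entries.set(clientKey, {
      state: "pending",
      lastSentAt: this.now(),
      attempts: (e?.attempts ?? 0) + 1,
    });
  }

  // True only for the first response to a pending key; late responses get false.
  tryFinalize(clientKey: string): boolean {
    const e = this.entries.get(clientKey);
    if (!e || e.state !== "pending") return false;
    e.state = "final";
    return true;
  }

  // Run periodically: mark stuck pending entries abandoned, and drop
  // final/abandoned entries past the TTL so the map can't grow forever.
  gc(): void {
    const now = this.now();
    for (const [key, e] of this.entries) {
      const age = now - e.lastSentAt;
      if (e.state !== "pending") {
        if (age > this.ttlMs) this.entries.delete(key);
      } else if (age > this.ttlMs) {
        e.state = "abandoned"; // mark now; a later gc pass deletes it
      }
    }
  }

  stateOf(clientKey: string): string | undefined { return this.entries.get(clientKey)?.state; }
  size(): number { return this.entries.size; }
}
```

Marking abandoned on one gc pass and deleting on a later one gives the UI a window to surface "this never saved" before the entry disappears; if you don't need that, you can delete stuck entries in one step.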