How do you stop optimistic UI inserts from duplicating when retries race?

Hey everyone, I’m wiring up an optimistic “add message” flow in a web app and I’m hitting a nasty failure mode: a retry comes back after the first request has already succeeded, and I end up with duplicate items in state (and sometimes the order flips).

const pending = new Map();

export async function addMessage(text) {
  const clientId = crypto.randomUUID();
  dispatch({ type: "addOptimistic", clientId, text, status: "pending" });

  const p = fetch("/api/messages", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ clientId, text })
  });

  pending.set(clientId, p);

  const res = await retry(p, 3);
  const serverMsg = await res.json();

  dispatch({ type: "reconcile", clientId, serverMsg });
}

async function retry(promise, times) {
  let last;
  for (let i = 0; i < times; i++) {
    try { return await promise; } catch (e) { last = e; }
  }
  throw last;
}

What’s a solid pattern for idempotent optimistic reconciliation so late retries or duplicate responses can’t create extra items or reorder them unexpectedly?

BayMax

retry(p, 3) is only re-awaiting the same promise, so it never sends a second POST. That’s part of why the behavior feels weird.
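A retry helper needs a request *factory* so each attempt actually sends a new request; a settled promise can never be “retried.” A minimal sketch:

```javascript
// Retry by calling a factory that issues a fresh request each attempt.
// Awaiting the same promise N times just re-reads the same settled result.
async function retry(makeRequest, times) {
  let lastError;
  for (let i = 0; i < times; i++) {
    try {
      return await makeRequest(); // new attempt every iteration
    } catch (e) {
      lastError = e;
    }
  }
  throw lastError;
}
```

The call site then becomes `await retry(() => fetch("/api/messages", { ... }), 3)` instead of `retry(p, 3)`.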

Use clientId as the stable key and make reconciliation an upsert by that key. The optimistic row should keep its original position, and reconcile should only patch the existing item in place instead of inserting a new one.
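A minimal reducer sketch of that upsert-by-clientId idea (the action shapes and `serverMsg` fields are assumptions based on the question’s code):

```javascript
// "reconcile" only patches the existing optimistic row in place.
// It never inserts, so a duplicate or late response can't create a new item.
function reducer(state, action) {
  switch (action.type) {
    case "addOptimistic":
      return [...state, { clientId: action.clientId, text: action.text, status: "pending" }];
    case "reconcile": {
      const i = state.findIndex(m => m.clientId === action.clientId);
      if (i === -1) return state; // unknown key: ignore, never insert
      const next = [...state];
      next[i] = { ...next[i], ...action.serverMsg, clientId: action.clientId, status: "sent" };
      return next;
    }
    default:
      return state;
  }
}
```

Because the row stays at index `i`, its position is untouched no matter how late the response lands.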

For ordering, keep it tied to the optimistic insert time or a simple sequence number, not whatever the server returns later. Late responses can then update content and status without shuffling the list.

The server should dedupe on the same idempotency key too. Otherwise you can still get two real rows if the network retries at the wrong moment.
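On a SQL backend that usually means a unique index plus an insert that no-ops on conflict. A Postgres-flavored sketch, with the table and column names being assumptions:

```javascript
// clientId doubles as the idempotency key. The unique index makes a retried
// POST insert nothing; an empty RETURNING means the row already existed,
// so the follow-up SELECT hands back the same row either way.
const insertSql = `
  INSERT INTO messages (client_id, text)
  VALUES ($1, $2)
  ON CONFLICT (client_id) DO NOTHING
  RETURNING id, client_id, text;
`;
const selectExistingSql = `
  SELECT id, client_id, text FROM messages WHERE client_id = $1;
`;
```

Either way the handler responds with one row, so a retried request and the original are indistinguishable to the client.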

This is the “slow success arrives after fast success” problem, and it’s maddening because nothing is actually wrong on the wire — your UI just can’t tell which response it should believe.

What’s saved me is keeping clientId as the one true key, but tagging the optimistic item with an attemptId (or an incrementing attempt counter), then only applying a response if it matches the item’s current attemptId. Anything older gets ignored, because it’s answering a question the UI isn’t asking anymore.
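A small sketch of that gate, assuming each item carries its current attemptId:

```javascript
// Apply a response only if it matches the item's current attemptId.
// A late response from an earlier attempt leaves state untouched.
function applyResponse(state, { clientId, attemptId, serverMsg }) {
  const i = state.findIndex(m => m.clientId === clientId);
  if (i === -1) return state;
  if (state[i].attemptId !== attemptId) return state; // stale: ignore
  const next = [...state];
  next[i] = { ...next[i], ...serverMsg, status: "sent" };
  return next;
}
```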

And yeah, I’d ditch the generic “optimistic UI” blog post. This comes down to boring rules: upsert by clientId, don’t re-insert, and make the server honour an idempotency key so you don’t end up with two real rows when the network gets creative.

The list jumping around is a separate pain from the duplicate insert, and it usually comes from letting the server’s createdAt (or whatever) take over your sort when you reconcile. I’d keep a frozen client sort key (createdAtClient / optimisticIndex) and treat the server response as “fill in fields on this row,” not “rebuild the list.”
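One way to sketch that, using an optimisticIndex-style counter as the frozen sort key (the counter is module-level here just to keep the sketch small):

```javascript
// Items sort by a frozen client-side sortKey; reconcile never rewrites it,
// so a late server createdAt can't reorder the list.
let optimisticSeq = 0;

function addOptimistic(state, { clientId, text }) {
  return [...state, { clientId, text, sortKey: optimisticSeq++, status: "pending" }];
}

function reconcile(state, { clientId, serverMsg }) {
  return state.map(m =>
    m.clientId === clientId
      ? { ...m, ...serverMsg, sortKey: m.sortKey, status: "sent" } // keep the frozen key
      : m
  );
}

const sorted = list => [...list].sort((a, b) => a.sortKey - b.sortKey);
```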

Canceling the previous attempt helps too. Even if you still defensively ignore stale responses by attemptId, aborting makes the whole thing quieter and easier to debug because you stop getting “old answers” delivered after you’ve moved on. I’m not sure it always matters in practice, but when the network is flaky it’s saved me from some really confusing UI states.
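A sketch of per-key cancellation with AbortController (the injectable `doFetch` parameter is just there so the helper can be exercised without a network):

```javascript
// One live controller per clientId: starting a new attempt aborts the old one,
// so its response never arrives. Stale responses should still be ignored by
// attemptId as a defensive second layer.
const controllers = new Map(); // clientId -> AbortController

function sendAttempt(clientId, body, doFetch = fetch) {
  controllers.get(clientId)?.abort(); // cancel the in-flight attempt, if any
  const controller = new AbortController();
  controllers.set(clientId, controller);
  return doFetch("/api/messages", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
    signal: controller.signal, // fetch rejects with an AbortError on abort
  });
}
```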


One extra guardrail that helps: don’t make clientId your server’s idempotency key forever; rotate it per attempt. Keep a separate logicalId for the message, and have the server enforce uniqueness on that logicalId (or an Idempotency-Key header) with a short TTL so overlapping retries can’t create two real rows.

@sarah_connor rotating the attempt key is sensible, but a short TTL feels a bit optimistic if the client can go offline and come back hours later. You end up deduping the “normal” retries and still double-inserting on the weird long tail.

I’d rather treat the logicalId as the durable dedupe key (unique constraint, or at least “insert-or-return-existing”) and use the per-attempt id just for tracing/debugging.
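A toy sketch of that split, with a Map standing in for the database’s unique constraint on logicalId (names follow the thread; the `trace` field is just an illustration of keeping attempt ids for debugging):

```javascript
// Durable dedupe on logicalId; the per-attempt id is recorded only for
// tracing and is never part of the dedupe decision.
const store = new Map(); // logicalId -> row

function insertOrReturnExisting({ logicalId, attemptId, text }) {
  const existing = store.get(logicalId);
  if (existing) {
    existing.trace.push(attemptId); // duplicate retry: same row, extra trace entry
    return existing;                // "insert-or-return-existing"
  }
  const row = { logicalId, text, trace: [attemptId] };
  store.set(logicalId, row);
  return row;
}
```

No TTL involved, so even a retry that fires hours later lands on the same row.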