How do you stop a client-side request queue from growing forever when the user goes offline?

What’s up everyone? I’m building a web app that queues analytics/events in memory and flushes them with fetch when the network comes back, and I’m seeing memory climb if someone stays offline for a while (plus I worry about duplicate sends if I retry too aggressively).

const queue = [];
let flushing = false;

export function enqueue(evt) {
  queue.push({ evt, tries: 0, t: Date.now() });
  flush();
}

async function flush() {
  if (flushing || !navigator.onLine) return;
  flushing = true;
  while (queue.length) {
    const item = queue[0];
    try {
      const res = await fetch('/api/events', {
        method: 'POST',
        body: JSON.stringify(item.evt),
        headers: { 'content-type': 'application/json' }
      });
      // fetch only rejects on network failure; a 4xx/5xx still resolves,
      // so treat HTTP errors as retryable instead of silently dropping the event
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      queue.shift();
    } catch (e) {
      item.tries++;
      break;
    }
  }
  flushing = false;
}

What’s a solid pattern for backpressure here (max queue size, TTL, batching, persistence) that avoids memory leaks but doesn’t drop too much data or spam retries?

Hari

In your snippet, the unbounded const queue = [] plus queue.push(...) is what scares me when someone's offline "all afternoon". Do you have a target max (say 1k events, 5MB, or 10 minutes of data) past which you're okay dropping the oldest entries, versus moving the queue into IndexedDB for persistence?

The unbounded queue.push(...) is how you wake up to a tab eating 800MB because someone rode the subway with no signal. Put a hard cap on it (count or bytes) and start dropping oldest, or spill to IndexedDB if you truly need “eventual delivery.”
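A hard cap with drop-oldest can be a pretty small wrapper. A minimal sketch (the factory name, cap numbers, and the returned "dropped" count are all illustrative choices, not anything from the original post):

```javascript
// Bounded queue with drop-oldest eviction on both a count cap and a rough
// byte cap. Sizes are estimated via JSON.stringify, which is approximate
// but good enough for a memory guardrail.
function makeBoundedQueue({ maxEvents = 1000, maxBytes = 5 * 1024 * 1024 } = {}) {
  const items = [];
  let bytes = 0;
  return {
    push(evt) {
      const size = JSON.stringify(evt).length; // rough size estimate
      items.push({ evt, size, t: Date.now() });
      bytes += size;
      let dropped = 0;
      // Evict oldest entries until both caps hold again.
      while (items.length > maxEvents || bytes > maxBytes) {
        bytes -= items.shift().size;
        dropped++;
      }
      return dropped; // callers can surface "N events dropped" in the UI
    },
    shift() {
      const it = items.shift();
      if (it) bytes -= it.size;
      return it && it.evt;
    },
    get length() { return items.length; },
  };
}
```

The important design decision is that push() itself reports how many events it evicted, so the drop is an explicit signal rather than silent background behavior.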

And yeah, retries without per-event IDs + server-side idempotency is how you end up “mysteriously” double-charging people.

Calling flush() on every enqueue() is basically you hammering your own tab the second the connection drops.

I’ve had better luck with a single “flush scheduled” flag + backoff/jitter, and only kicking a real drain on online (or when a fetch actually succeeds) so you don’t get the fail → instant retry loop while the queue keeps ballooning.
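That "single scheduled flush + backoff" shape can look roughly like this. A sketch under assumptions: the 1s base / 30s cap are made-up tuning numbers, and drain() stands in for whatever actually POSTs the queued events:

```javascript
// Exponential backoff with "full jitter": the delay is uniform in
// [0, min(cap, base * 2^failures)), so a fleet of offline tabs doesn't
// retry in lockstep when the network returns.
function backoffDelay(failures, base = 1000, cap = 30000) {
  const exp = Math.min(cap, base * 2 ** failures);
  return Math.random() * exp;
}

let timer = null;   // the single "flush scheduled" flag
let failures = 0;

function scheduleFlush(drain) {
  if (timer !== null) return; // already scheduled; never stack retries
  timer = setTimeout(async () => {
    timer = null;
    try {
      await drain();        // drain() sends as many queued events as it can
      failures = 0;         // a real success resets the backoff
    } catch {
      failures += 1;
      scheduleFlush(drain); // try again later, never immediately
    }
  }, backoffDelay(failures));
}
```

Then the only callers of scheduleFlush() are enqueue() and the window 'online' listener, so going offline leaves you with at most one pending attempt instead of one per event.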

Letting it queue forever just turns “offline” into “this tab ate 800MB and died.” Put a hard cap (count or bytes), then be explicit about what you’ll drop vs. coalesce while offline.

Silent, slow data loss is still data loss.

Making the cap visible is huge — a little “12 changes pending (oldest will be dropped)” banner turns it from spooky background behavior into an explicit choice the user can react to.
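The banner text itself is a one-liner; wiring it to a real element is app-specific. A tiny sketch (function name and wording are made up):

```javascript
// Turn queue state into user-visible copy instead of silent dropping.
// `atCap` means the next enqueue will evict the oldest pending event.
function pendingBanner(count, atCap) {
  if (count === 0) return '';
  const noun = count === 1 ? 'change' : 'changes';
  return atCap
    ? `${count} ${noun} pending (oldest will be dropped)`
    : `${count} ${noun} pending`;
}
```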

I’ve been treating it like a backpack with a weight limit: you can keep stuffing it, but at some point you either toss the oldest receipts or you compress them into “1 big summary” instead of carrying every single scrap.
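The backpack idea as code: keep the newest N events verbatim and collapse everything older into one summary event. A sketch where the summary's shape (type/count/from/to) is invented for illustration, and queue items are assumed to look like the original { evt, t } entries:

```javascript
// Coalesce all but the newest `keepLatest` events into a single summary
// entry, so the queue's memory footprint stays bounded while offline.
function coalesceOldest(queue, keepLatest = 100) {
  if (queue.length <= keepLatest) return queue;
  const old = queue.slice(0, queue.length - keepLatest);
  const summary = {
    type: 'coalesced',             // hypothetical event shape
    count: old.length,             // how many events this stands in for
    from: old[0].t,                // time range the summary covers
    to: old[old.length - 1].t,
  };
  return [{ evt: summary, t: old[0].t }, ...queue.slice(queue.length - keepLatest)];
}
```

Whether a count-plus-time-range summary is enough depends on the events; for analytics it often is, for user edits it usually isn't.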