Sentinel scan iterator by harshrai654 · Pull Request #3141 · redis/node-redis

Hey @TheIndra55, thanks for catching this! You are absolutely correct.

Acquiring the master connection for the entire lifespan of the iterator causes a resource-starvation deadlock. Since `acquire()` reserves a client from the pool and doesn't release it until the iterator finishes, any operation inside the consumer's loop (like your `mGet` example) that needs a connection will block indefinitely once the pool is exhausted (e.g., with a single-connection pool).

Even with a larger pool, this implementation unnecessarily holds a connection idle while the user's code executes, reducing overall throughput.
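To make the starvation scenario concrete, here is a minimal, self-contained sketch. The `MockPool` class is hypothetical (it is not the node-redis API); it only models the acquire/release semantics so the deadlock is observable without a Redis server:

```typescript
// Hypothetical single-connection pool, for illustration only.
class MockPool {
  private available = 1;
  private waiters: Array<() => void> = [];

  async acquire(): Promise<void> {
    if (this.available > 0) {
      this.available--;
      return;
    }
    // No connection free: park until someone calls release()
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }

  release(): void {
    const next = this.waiters.shift();
    if (next) next(); // hand the connection directly to a waiter
    else this.available++;
  }

  get pending(): number {
    return this.waiters.length;
  }
}

async function demo() {
  const pool = new MockPool();

  // Old approach: the iterator holds the pool's only connection...
  await pool.acquire();

  // ...so a second acquire (standing in for the consumer's mGet) blocks.
  let mGetFinished = false;
  const blocked = pool.acquire().then(() => { mGetFinished = true; });

  await new Promise((r) => setTimeout(r, 50));
  console.log(mGetFinished);  // false: still deadlocked
  console.log(pool.pending);  // 1: the "mGet" is queued behind the iterator

  // Releasing the scan's lease (the proposed fix) unblocks it.
  pool.release();
  await blocked;
  console.log(mGetFinished);  // true
}

demo();
```

With the single connection held for the iterator's whole lifetime, the consumer's acquire never resolves; releasing between scan calls is what breaks the cycle.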

The fix is an "Acquire -> Scan -> Release" loop: acquire the connection only for the SCAN call itself and release it immediately, before yielding to the consumer. This keeps the pool free for other operations during the consumer's processing phase.

Here is the proposed implementation:

```typescript
async *scanIterator(
  this: RedisSentinelType<M, F, S, RESP, TYPE_MAPPING>,
  options?: ScanOptions & ScanIteratorOptions
) {
  let cursor = options?.cursor ?? "0";
  let shouldRestart = false;

  // This listener persists for the iterator's entire life to track failovers
  const handleTopologyChange = (event: RedisSentinelEvent) => {
    if (event.type === "MASTER_CHANGE") {
      shouldRestart = true;
    }
  };
  this.on("topology-change", handleTopologyChange);

  try {
    // Note: `while (true)` rather than `do...while (cursor !== "0")`.
    // `continue` in a do...while jumps straight to the condition check,
    // so resetting the cursor to "0" after a failover would terminate
    // the loop instead of restarting the scan.
    while (true) {
      // 1. Acquire a lease JUST for the SCAN command
      const masterClient = await this.acquire();

      let reply;
      try {
        // If the master changed since the last iteration, reset the cursor
        if (shouldRestart) {
          cursor = "0";
          shouldRestart = false;
        }

        reply = await masterClient.scan(cursor, options);
      } finally {
        // 2. Release IMMEDIATELY so the pool is available to the consumer
        masterClient.release();
      }

      // Edge case: topology changed *during* the scan. The batch may be
      // from the old master, so discard it and restart from cursor 0
      // on the new master.
      if (shouldRestart) {
        cursor = "0";
        continue;
      }

      // 3. Yield to the consumer.
      // The connection is released, so `await sentinel.mGet()` will succeed.
      cursor = reply.cursor;
      yield reply.keys;

      if (cursor === "0") break; // full iteration complete
    }
  } finally {
    this.removeListener("topology-change", handleTopologyChange);
  }
}
```

This ensures we respect the pool limits and handle topology changes correctly by resetting the cursor if a failover occurs between (or during) iterations.
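For reference, the iteration pattern in isolation can be sketched with an in-memory stand-in for SCAN (the `makeScan` helper is hypothetical; real SCAN cursors are opaque server-side tokens, not array offsets):

```typescript
type ScanReply = { cursor: string; keys: string[] };

// Hypothetical in-memory SCAN: returns a page of keys and the next
// cursor, where "0" signals that the full iteration is complete.
function makeScan(keys: string[], pageSize = 2) {
  return async (cursor: string): Promise<ScanReply> => {
    const start = Number(cursor);
    const page = keys.slice(start, start + pageSize);
    const next =
      start + pageSize >= keys.length ? "0" : String(start + pageSize);
    return { cursor: next, keys: page };
  };
}

async function* scanIterator(scan: (c: string) => Promise<ScanReply>) {
  let cursor = "0";
  while (true) {
    // acquire() would happen here, scoped to this single scan call
    const reply = await scan(cursor);
    // release() would happen here, before yielding to the consumer
    cursor = reply.cursor;
    yield reply.keys;
    if (cursor === "0") break; // full iteration complete
  }
}

async function main() {
  const scan = makeScan(["a", "b", "c", "d", "e"]);
  const seen: string[] = [];
  for await (const batch of scanIterator(scan)) {
    seen.push(...batch);
  }
  console.log(seen.join(",")); // a,b,c,d,e
}
main();
```

The consumer sees batches of keys while no connection is held across its `await` points, which is exactly the property the fix above is after.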

What do you think?