fix(perf): debounce storage writes, batch events, async health checks by chuks-qua · Pull Request #497 · pingdotgg/t3code
added 4 commits on March 8, 2026 at 06:21

Every keystroke triggered full JSON serialization of all composer drafts (including base64 image attachments) and a synchronous localStorage write. At normal typing speed this caused 5+ writes/sec, blocking the main thread and creating noticeable input lag.

Wrap the Zustand persist storage with a 300ms debounce. In-memory state updates remain immediate; only the serialization and storage write are deferred. A beforeunload handler flushes pending writes to prevent data loss, and the removeItem method cancels any pending setItem to avoid resurrecting cleared drafts.

Adds unit tests for the DebouncedStorage utility covering debounce timing, rapid writes, removeItem cancellation, flush, and edge cases.
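The wrapper described above can be sketched as follows. This is a hypothetical reconstruction, not the PR's actual code: the real class wraps Zustand's `StateStorage` interface, which is re-declared minimally here so the example is self-contained.

```typescript
// Minimal stand-in for Zustand's StateStorage interface.
interface StateStorage {
  getItem(name: string): string | null;
  setItem(name: string, value: string): void;
  removeItem(name: string): void;
}

class DebouncedStorage implements StateStorage {
  private timer: ReturnType<typeof setTimeout> | null = null;
  private pending: { name: string; value: string } | null = null;

  constructor(private inner: StateStorage, private delayMs = 300) {}

  getItem(name: string): string | null {
    // Reads go straight through; serve the pending value if one exists
    // so callers never observe stale state.
    if (this.pending && this.pending.name === name) return this.pending.value;
    return this.inner.getItem(name);
  }

  setItem(name: string, value: string): void {
    // Keep only the latest value and restart the debounce window.
    this.pending = { name, value };
    if (this.timer !== null) clearTimeout(this.timer);
    this.timer = setTimeout(() => this.flush(), this.delayMs);
  }

  removeItem(name: string): void {
    // Cancel any pending write so a cleared draft is not resurrected.
    if (this.pending && this.pending.name === name) {
      if (this.timer !== null) clearTimeout(this.timer);
      this.timer = null;
      this.pending = null;
    }
    this.inner.removeItem(name);
  }

  flush(): void {
    // Called from a beforeunload handler to avoid losing the last write.
    if (this.timer !== null) clearTimeout(this.timer);
    this.timer = null;
    if (this.pending) {
      this.inner.setItem(this.pending.name, this.pending.value);
      this.pending = null;
    }
  }
}
```

In the renderer, `window.addEventListener("beforeunload", () => storage.flush())` would wire up the flush path described in the commit message.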
The useStore subscriber called persistState on every state mutation, triggering JSON.stringify + localStorage.setItem synchronously. It also ran 8 localStorage.removeItem calls for legacy keys each time it fired.

Wrap the subscriber with a 500ms debounce so rapid state changes batch into a single write. Move legacy key cleanup behind a one-time flag so it runs only once per page load. Add a beforeunload handler to flush the final state.
During active sessions, every domain event triggered a full syncSnapshot (IPC fetch + state rebuild + React re-render cascade) and sometimes a provider query invalidation. Events fire in rapid bursts during AI turns.

Replace per-event processing with a throttle-first pattern: schedule a flush on the first event, absorb subsequent events within a 100ms window, then sync once. Provider query invalidation is batched via a flag. Since syncSnapshot fetches the complete snapshot, no events are lost by skipping intermediate syncs.
The ProviderHealth layer blocked server startup with two sequential CLI spawns (codex --version + codex login status), each with a 4-second timeout, delaying startup by up to 8 seconds.

Run health checks in the background via Effect.runPromise so the layer resolves immediately with a placeholder status. Add an onReady callback to ProviderHealthShape so wsServer can push the resolved statuses to connected clients once checks complete, preventing early-connecting clients from showing "Checking..." indefinitely.
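A plain-Promise analogue of this change (the actual implementation uses Effect layers and `Effect.runPromise`). Everything here is an assumption for illustration: `makeProviderHealth`, `runCheck`, and the status shape stand in for the real `ProviderHealthShape`; note also that a later commit in this PR drops the `onReady` hook again.

```typescript
type HealthStatus = { provider: string; status: "checking" | "ok" | "error" };

// Hypothetical stand-in for the Effect-based ProviderHealth layer.
function makeProviderHealth(
  runCheck: (provider: string) => Promise<"ok" | "error">,
  providers: string[],
) {
  const statuses: HealthStatus[] = providers.map((provider) => ({
    provider,
    status: "checking", // placeholder shown until background checks resolve
  }));
  let onReady: ((statuses: HealthStatus[]) => void) | null = null;

  // Kick off the CLI checks in the background; crucially, the factory
  // returns immediately instead of awaiting them, so startup is not blocked.
  Promise.all(
    statuses.map(async (entry) => {
      entry.status = await runCheck(entry.provider);
    }),
  ).then(() => onReady?.(statuses));

  return {
    getStatuses: () => statuses,
    // wsServer registers here to push resolved statuses to early clients.
    setOnReady: (cb: (statuses: HealthStatus[]) => void) => { onReady = cb; },
  };
}
```

Early-connecting clients first see the "checking" placeholder from `getStatuses`, then receive a push once the callback fires, instead of showing "Checking..." indefinitely.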
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Drop the unused `onReady` hook from `ProviderHealthShape`
- Keep startup health status access focused on `getStatuses`
- Replace manual timeout debounce logic with `@tanstack/react-pacer`'s `Debouncer`
- Persist updates via `maybeExecute` to reduce localStorage write thrashing
- Flush pending persistence on `beforeunload` to avoid losing recent state
- Replace manual timeout-based domain event batching with `Throttler`
- Keep provider query invalidation batched with trailing 100ms flushes
- Cancel throttler and reset invalidation flag during EventRouter cleanup
juliusmarminge changed the title from "Fix desktop performance: debounce storage writes, batch events, async health checks" to "fix(perf): debounce storage writes, batch events, async health checks"