Comparing falcondev-oss:dev...bertybuttface:dev · falcondev-oss/github-actions-cache-server

Commits on Dec 1, 2025


  3. feat(benchmark): add statistical analysis with multiple iterations

    Run benchmarks multiple times (default: 10) to get accurate, reliable
    performance metrics with statistical analysis.
    
    Features:
    - ITERATIONS env var (default: 10) controls number of test runs
    - Calculate mean, median, std dev, min, max for each operation
    - Show coefficient of variation (CV%) for result reliability
    - Individual iteration results plus aggregate statistics
    
    Example output:
      UPLOAD:
        Mean:    127.45 MB/s
        Median:  126.80 MB/s
    Std Dev: 3.21 MB/s (±2.5%)
        Min:     122.10 MB/s
        Max:     132.50 MB/s
    
    Usage:
      ITERATIONS=10 pnpm benchmark           # 10 iterations (default)
      ITERATIONS=1 pnpm benchmark            # quick single run
      ITERATIONS=20 pnpm benchmark           # more iterations = more accuracy
    
    This helps identify performance variance and gives more confidence
    in benchmark results compared to single-run measurements.
    
    🤖 Generated with [Claude Code](https://claude.com/claude-code)
    
    Co-Authored-By: Claude <noreply@anthropic.com>
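    The aggregate statistics listed above can be sketched as a small helper.
    This is an illustrative implementation, not the benchmark script's actual
    code; the function name `summarize` is hypothetical. Note it uses the
    population standard deviation.

    ```typescript
    // Compute mean, median, std dev, CV%, min, and max over benchmark samples.
    function summarize(samples: number[]) {
      const n = samples.length
      const mean = samples.reduce((a, b) => a + b, 0) / n
      const sorted = [...samples].sort((a, b) => a - b)
      const median = n % 2 === 1
        ? sorted[(n - 1) / 2]
        : (sorted[n / 2 - 1] + sorted[n / 2]) / 2
      // Population variance: mean of squared deviations from the mean
      const variance = samples.reduce((a, x) => a + (x - mean) ** 2, 0) / n
      const stdDev = Math.sqrt(variance)
      const cv = (stdDev / mean) * 100 // coefficient of variation, in percent
      return { mean, median, stdDev, cv, min: sorted[0], max: sorted[n - 1] }
    }
    ```

    A low CV% indicates the iterations agree with each other, which is what
    makes the multi-run numbers more trustworthy than a single run.
    
    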


  7. perf(filesystem): stream-based part assembly

    Replace memory-buffering readFile/appendFile with streaming pipeline:
    - Write directly to final location (no temp file + copy)
    - Stream parts instead of loading entire buffers into memory
    - Remove double I/O (was: read parts → temp file → copy to final)
    - Fix double-delete bug (outputTempFilePath was deleted twice)
    
    This reduces memory usage from O(part_size) to O(chunk_size) and
    eliminates redundant disk I/O.
    
    🤖 Generated with [Claude Code](https://claude.com/claude-code)
    
    Co-Authored-By: Claude <noreply@anthropic.com>
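    The streaming assembly described above can be sketched with Node's
    `pipeline` from `node:stream/promises`: each part file is piped into a
    single write stream opened on the final path, so memory stays bounded by
    the stream chunk size rather than the part size. Paths and the function
    name `assembleParts` are illustrative, not the driver's actual API.

    ```typescript
    import { createWriteStream, createReadStream } from 'node:fs'
    import { pipeline, finished } from 'node:stream/promises'

    // Concatenate part files into finalPath without buffering whole parts.
    async function assembleParts(partPaths: string[], finalPath: string) {
      const out = createWriteStream(finalPath)
      for (const partPath of partPaths) {
        // `end: false` keeps the destination open between parts
        await pipeline(createReadStream(partPath), out, { end: false })
      }
      out.end()
      await finished(out) // wait until everything is flushed to disk
    }
    ```
    
    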

  8. perf(s3): streaming pipeline for part assembly

    Replace random access writes with streaming pipeline:
    - Use createWriteStream + pipeline instead of file.write with offsets
    - Eliminates serialized await inside WritableStream callback
    - Sequential writes (append-only) are more efficient than random access
    - Clean up entire temp directory instead of just the file
    
    This removes the blocking write pattern that was serializing I/O
    and causing throughput bottlenecks.
    
    🤖 Generated with [Claude Code](https://claude.com/claude-code)
    
    Co-Authored-By: Claude <noreply@anthropic.com>
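    The append-only pattern above can be sketched as follows: instead of
    awaiting `file.write` at an offset inside a WritableStream callback, the
    parts are exposed as one async iterable and piped into a sequential write
    stream. Names (`concatParts`, `writeAssembled`) are illustrative, not the
    S3 driver's actual code.

    ```typescript
    import { createWriteStream } from 'node:fs'
    import { pipeline } from 'node:stream/promises'

    // Yield the parts back-to-back: sequential, append-only, no offsets.
    async function* concatParts(parts: AsyncIterable<Buffer>[]) {
      for (const part of parts)
        yield* part
    }

    // pipeline() handles backpressure, so no per-chunk await serializes I/O.
    async function writeAssembled(parts: AsyncIterable<Buffer>[], outPath: string) {
      await pipeline(concatParts(parts), createWriteStream(outPath))
    }
    ```
    
    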

  9. perf(s3): use native multipart upload API

    BREAKING PERFORMANCE WIN: Eliminates downloading and re-uploading for S3.
    
    Before:
    1. Upload parts to S3 as individual objects
    2. Download ALL parts back from S3 (100MB+ network I/O)
    3. Write to local temp file
    4. Re-upload entire file to S3 (another 100MB+ network I/O)
    5. Delete part objects
    
    After:
    1. CreateMultipartUpload to get S3 uploadId
    2. UploadPart for each chunk (returns ETag)
    3. CompleteMultipartUpload - S3 assembles parts server-side
    
    Changes:
    - Added migration to restore driver_upload_id and e_tag columns
    - Updated StorageDriver interface to support native multipart APIs
    - Rewrote S3 driver to use CreateMultipartUpload/UploadPart/CompleteMultipartUpload
    - Updated storage adapter to track driver uploadIds and ETags
    - Updated filesystem and GCS drivers to match new interface (return null)
    - Added tsx dependency for benchmark script
    
    Thread safety: S3 multipart API is designed for concurrent part uploads.
    Database PRIMARY KEY (upload_id, part_number) prevents duplicate parts.
    CompleteMultipartUpload is atomic and idempotent.
    
    Expected impact: ~2x S3 throughput by eliminating redundant network I/O
    
    🤖 Generated with [Claude Code](https://claude.com/claude-code)
    
    Co-Authored-By: Claude <noreply@anthropic.com>
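    One detail of step 3 worth illustrating: CompleteMultipartUpload requires
    the parts listed in ascending PartNumber order, each with the ETag that
    UploadPart returned. A sketch of assembling that list from stored rows
    (the `PartRow` shape and function name are assumptions, not the adapter's
    actual types):

    ```typescript
    // Row shape as tracked in the database (driver_upload_id / e_tag columns).
    interface PartRow { partNumber: number, eTag: string }

    // Build the Parts array for CompleteMultipartUpload's MultipartUpload
    // input: sorted ascending by part number, with S3's capitalized keys.
    function toCompletedParts(rows: PartRow[]) {
      return [...rows]
        .sort((a, b) => a.partNumber - b.partNumber)
        .map(row => ({ PartNumber: row.partNumber, ETag: row.eTag }))
    }
    ```

    The PRIMARY KEY (upload_id, part_number) mentioned above guarantees each
    part number appears at most once in this list.
    
    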

  10. fix(s3): pass stream directly to S3 UploadPart + add error logging

    The previous implementation was trying to buffer entire chunks in memory
    before uploading to S3, which could fail silently for large chunks.
    
    Changes:
    - Pass ReadableStream directly to S3 UploadPartCommand (SDK supports it)
    - Remove memory buffering (lines 110-121 deleted)
    - Add detailed error logging to diagnose upload failures
    - Add error logging to upload route
    
    This should fix silent upload failures and show what's actually going wrong.
    
    🤖 Generated with [Claude Code](https://claude.com/claude-code)
    
    Co-Authored-By: Claude <noreply@anthropic.com>

  11. feat: add comprehensive logging throughout upload flow

    Added extensive logging to help debug upload issues:
    
    - CreateCacheEntry: Log cache reservation and signed_upload_url
    - Upload endpoint: Log all block uploads with chunk details
    - FinalizeCacheEntryUpload: Log TWIRP finalization requests
    - Storage adapter: Enhanced logging in uploadChunk and commitCache
    - All log messages include relevant context (cacheId, key, version, etc.)
    
    This provides better visibility into the upload process for debugging
    and monitoring cache operations.
    
    🤖 Generated with [Claude Code](https://claude.com/claude-code)
    
    Co-Authored-By: Claude <noreply@anthropic.com>

  12. fix: convert Web ReadableStream to Node.js Readable for S3 uploads

    Fixes "Unable to calculate hash for flowing readable stream" errors
    when uploading to S3/MinIO. The AWS SDK's flexible checksums middleware
    was trying to calculate hashes on Web ReadableStreams that were already
    flowing from the HTTP request.
    
    Changes:
    - Convert Web ReadableStream to Node.js Readable using Readable.fromWeb()
    - Add requestChecksumCalculation: 'WHEN_REQUIRED' to S3Client config
      (ETags already provide integrity checking for multipart uploads)
    - Import node:stream Readable for the conversion
    
    This allows the AWS SDK to properly handle the stream without trying
    to rewind it for checksum calculation.
    
    🤖 Generated with [Claude Code](https://claude.com/claude-code)
    
    Co-Authored-By: Claude <noreply@anthropic.com>
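    The conversion described above is a one-liner in Node's standard library;
    a minimal sketch (the wrapper name `toNodeReadable` is illustrative):

    ```typescript
    import { Readable } from 'node:stream'

    // Wrap a Web ReadableStream (as exposed by the HTTP layer) into a
    // Node.js Readable so downstream consumers get a stream type they can
    // handle without attempting to rewind it.
    function toNodeReadable(webStream: ReadableStream): Readable {
      // Cast needed: the DOM ReadableStream type differs nominally from
      // the node:stream/web one that fromWeb() is declared against.
      return Readable.fromWeb(webStream as any)
    }
    ```
    
    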

  13. fix: pass Content-Length to S3 multipart uploads

    S3 multipart uploads require Content-Length header to be set.
    Previously, when converting Web ReadableStream to Node.js Readable,
    we lost the Content-Length information from the HTTP request.
    
    Changes:
    - Extract Content-Length from incoming HTTP request headers
    - Thread contentLength through the upload pipeline:
      * routes/upload/[cacheId].put.ts -> adapter.uploadChunk()
      * lib/storage/index.ts -> driver.uploadPart()
      * lib/storage/drivers/s3.ts -> UploadPartCommand
    - Add contentLength to StorageDriver interface (optional)
    - Pass ContentLength to S3 UploadPartCommand
    
    This fixes "MissingContentLength: You must provide the Content-Length
    HTTP header" errors when uploading to S3/MinIO.
    
    🤖 Generated with [Claude Code](https://claude.com/claude-code)
    
    Co-Authored-By: Claude <noreply@anthropic.com>
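    The header extraction at the top of the pipeline can be sketched as a
    small parser that returns `undefined` for an absent or malformed header,
    letting callers decide how to handle it. The function name
    `parseContentLength` is illustrative, not the route's actual code.

    ```typescript
    // Extract a non-negative integer Content-Length from request headers;
    // header names are lowercase as normalized by Node's HTTP layer.
    function parseContentLength(
      headers: Record<string, string | undefined>,
    ): number | undefined {
      const raw = headers['content-length']
      if (raw === undefined)
        return undefined
      const value = Number.parseInt(raw, 10)
      return Number.isInteger(value) && value >= 0 ? value : undefined
    }
    ```

    Threading this value through to `UploadPartCommand`'s `ContentLength`
    input is what satisfies S3's requirement for streamed part bodies.
    
    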