thxCode - Overview


Frank Mai (thxCode)

  • Seal

  • Shenzhen, Guangdong

Organizations

@seal-io


Pinned

  1. Performance-optimized AI inference on your GPUs. Unlock superior throughput by selecting and tuning engines like vLLM or SGLang.

    Python · 4.7k stars · 477 forks

  2. LM inference server implementation based on *.cpp.

    C++ · 296 stars · 28 forks

  3. Review/check GGUF files and estimate memory usage and maximum tokens per second.

    Go · 250 stars · 24 forks

  4. Deliver LLMs of GGUF format via Dockerfile.

    Go · 15 stars · 5 forks

  5. Available Terraform Provider network mirroring service.

    Go · 48 stars · 7 forks

  6. Patch Terraform resources as you like.

    Go · 15 stars