thxCode - Overview
Seal
Performance-optimized AI inference on your GPUs. Unlock superior throughput by selecting and tuning engines like vLLM or SGLang.
Python · 4.7k stars · 477 forks
An LM inference server implementation based on the *.cpp projects.
C++ · 296 stars · 28 forks
Review and check GGUF files, and estimate their memory usage and maximum tokens per second.
Go · 250 stars · 24 forks
Deliver LLMs in GGUF format via Dockerfile.
Go · 15 stars · 5 forks
A network mirroring service for Terraform providers.
Go · 48 stars · 7 forks
Patch Terraform resources as you wish.
Go · 15 stars