WarlockZhang - Overview

local-llm (Public)

Forked from onyokoli/local-llm

A C++ framework for fine-tuning and serving LLMs on consumer hardware. Run 7B+ models on standard CPUs/GPUs using LoRA and QLoRA optimization. Includes model management, a REST API, and efficient inference.

C++