
LLMFineTuningQuantizedUniversal

LLM fine-tuning with reduced memory usage, without relying on NVIDIA CUDA, x86-only Intel extensions, AMD ROCm, Unsloth, or bitsandbytes, and conversion of the result back to GGUF using plain PyTorch.

I'm designing this for Project Zephy, with the goal of being tunable on portable devices, especially ARM or RISC-V.
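One common way to fine-tune while saving memory in plain PyTorch, with no CUDA-only or x86-only dependencies, is a low-rank adapter (LoRA-style): the pretrained weights stay frozen and only two small rank-r matrices receive gradients and optimizer state. The sketch below is illustrative only; the `LoRALinear` class, `r`, and `alpha` names are assumptions, not this repository's actual API.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a small trainable low-rank adapter.

    Memory is saved because gradients and optimizer state exist only for
    the rank-r matrices A and B, not the full weight matrix. Pure PyTorch,
    so it runs on any backend (CPU, MPS, ARM/RISC-V builds).
    Hypothetical sketch -- not the repository's actual implementation.
    """
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A initialized small, B initialized to zero so training starts
        # from the pretrained behavior exactly.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the low-rank update (x A^T) B^T, scaled.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

# Example: wrap a layer and count trainable vs. total parameters.
layer = LoRALinear(nn.Linear(64, 64), r=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
```

For a 64x64 layer at rank 4, only 512 of 4672 parameters are trainable, which is where the memory saving comes from at larger model sizes.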

About


License

GPL-2.0 license
