GitHub - akx/ggify: Tool to download models from Huggingface Hub and convert them to GGML/GGUF for llama.cpp


License

MIT license


ggify

A small tool that downloads models from the Huggingface Hub and converts them to GGML/GGUF format for use with llama.cpp.

Usage

  • Download and compile llama.cpp.
  • Set up a virtualenv with the requirements from llama.cpp.
  • Install this package into that virtualenv (e.g. `pip install -e .`).
  • Run e.g. `python ggify.py databricks/dolly-v2-12b` (n.b. that particular repo is untested).
  • You'll end up with GGML/GGUF models under `models/...`.
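The steps above can be sketched as the following shell session. The clone URL, build command, and venv layout are illustrative assumptions; the build step in particular varies between llama.cpp versions, so consult its own README.

```shell
# Download and compile llama.cpp (build command may differ by version).
git clone https://github.com/ggerganov/llama.cpp
cmake -S llama.cpp -B llama.cpp/build
cmake --build llama.cpp/build

# Set up a virtualenv with llama.cpp's Python requirements.
python -m venv .venv
. .venv/bin/activate
pip install -r llama.cpp/requirements.txt

# Install ggify (from a checkout of this repository) and run it.
pip install -e .
python ggify.py databricks/dolly-v2-12b
```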

You can set --llama-cpp-dir (or the LLAMA_CPP_DIR environment variable) to point to the directory where you've compiled llama.cpp.
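For example, assuming a checkout at `$HOME/src/llama.cpp` (the path is hypothetical), the environment variable and the flag are interchangeable:

```shell
# Point ggify at the directory where llama.cpp was compiled.
export LLAMA_CPP_DIR="$HOME/src/llama.cpp"
echo "$LLAMA_CPP_DIR"

# Equivalently, pass the directory on the command line instead:
#   python ggify.py --llama-cpp-dir "$HOME/src/llama.cpp" databricks/dolly-v2-12b
```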
