llama.cpp/tools/cli: command-line options
-h, --help, --usage
print usage and exit
--version
show version and build info
--license
show source code license and dependencies
-cl, --cache-list
show list of models in cache
--completion-bash
print source-able bash completion script for llama.cpp
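For example, a minimal sketch loading the completions into the current bash session (assumes the llama-cli binary is on PATH):

  # source the generated completion script (binary name assumed)
  source <(llama-cli --completion-bash)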
--verbose-prompt
print a verbose prompt before generation (default: false)
-t, --threads N
number of CPU threads to use during generation (default: -1) (env: LLAMA_ARG_THREADS)
-tb, --threads-batch N
number of threads to use during batch and prompt processing (default: same as --threads)
-C, --cpu-mask M
CPU affinity mask: arbitrarily long hex. Complements --cpu-range (default: "")
-Cr, --cpu-range lo-hi
range of CPUs for affinity. Complements --cpu-mask
--cpu-strict <0|1>
use strict CPU placement (default: 0)
--prio N
set process/thread priority: low(-1), normal(0), medium(1), high(2), realtime(3) (default: 0)
--poll <0...100>
use polling level to wait for work (0 - no polling, default: 50)
-Cb, --cpu-mask-batch M
CPU affinity mask: arbitrarily long hex. Complements --cpu-range-batch (default: same as --cpu-mask)
-Crb, --cpu-range-batch lo-hi
range of CPUs for affinity. Complements --cpu-mask-batch
--cpu-strict-batch <0|1>
use strict CPU placement (default: same as --cpu-strict)
--prio-batch N
set process/thread priority: 0-normal, 1-medium, 2-high, 3-realtime (default: 0)
--poll-batch <0|1>
use polling to wait for work (default: same as --poll)
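A sketch combining the threading and affinity options above, pinning 8 generation threads to the first 8 CPUs with strict placement (the model path is hypothetical):

  # run on CPUs 0-7 only, with elevated priority
  llama-cli -m ./model.gguf -t 8 -Cr 0-7 --cpu-strict 1 --prio 2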
-c, --ctx-size N
size of the prompt context (default: 0, 0 = loaded from model) (env: LLAMA_ARG_CTX_SIZE)
-n, --predict, --n-predict N
number of tokens to predict (default: -1, -1 = infinity) (env: LLAMA_ARG_N_PREDICT)
-b, --batch-size N
logical maximum batch size (default: 2048) (env: LLAMA_ARG_BATCH)
-ub, --ubatch-size N
physical maximum batch size (default: 512) (env: LLAMA_ARG_UBATCH)
--keep N
number of tokens to keep from the initial prompt (default: 0, -1 = all)
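A sketch of the context and batching options together: an 8192-token context, at most 512 new tokens, and reduced batch sizes for constrained memory (model path hypothetical, values illustrative):

  llama-cli -m ./model.gguf -c 8192 -n 512 -b 1024 -ub 256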
--swa-full
use full-size SWA cache (default: false)
(env: LLAMA_ARG_SWA_FULL)
-fa, --flash-attn [on|off|auto]
set Flash Attention use ('on', 'off', or 'auto', default: 'auto') (env: LLAMA_ARG_FLASH_ATTN)
-p, --prompt PROMPT
prompt to start generation with; for system message, use -sys
--perf, --no-perf
whether to enable internal libllama performance timings (default: false) (env: LLAMA_ARG_PERF)
-f, --file FNAME
a file containing the prompt (default: none)
-bf, --binary-file FNAME
binary file containing the prompt (default: none)
-e, --escape, --no-escape
whether to process escape sequences (\n, \r, \t, ', ", \) (default: true)
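For instance, a sketch passing a prompt inline with escape processing, or reading it from a file (paths hypothetical):

  # \n is expanded to a newline because --escape defaults to true
  llama-cli -m ./model.gguf -p "First line.\nSecond line."
  llama-cli -m ./model.gguf -f ./prompt.txt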
--rope-scaling {none,linear,yarn}
RoPE frequency scaling method, defaults to linear unless specified by the model (env: LLAMA_ARG_ROPE_SCALING_TYPE)
--rope-scale N
RoPE context scaling factor, expands context by a factor of N (env: LLAMA_ARG_ROPE_SCALE)
--rope-freq-base N
RoPE base frequency, used by NTK-aware scaling (default: loaded from model) (env: LLAMA_ARG_ROPE_FREQ_BASE)
--rope-freq-scale N
RoPE frequency scaling factor, expands context by a factor of 1/N (env: LLAMA_ARG_ROPE_FREQ_SCALE)
--yarn-orig-ctx N
YaRN: original context size of model (default: 0 = model training context size) (env: LLAMA_ARG_YARN_ORIG_CTX)
--yarn-ext-factor N
YaRN: extrapolation mix factor (default: -1.00, 0.0 = full interpolation) (env: LLAMA_ARG_YARN_EXT_FACTOR)
--yarn-attn-factor N
YaRN: scale sqrt(t) or attention magnitude (default: -1.00) (env: LLAMA_ARG_YARN_ATTN_FACTOR)
--yarn-beta-slow N
YaRN: high correction dim or alpha (default: -1.00) (env: LLAMA_ARG_YARN_BETA_SLOW)
--yarn-beta-fast N
YaRN: low correction dim or beta (default: -1.00) (env: LLAMA_ARG_YARN_BETA_FAST)
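A sketch using the options above to run a model trained on a 4096-token context at 16384 tokens via YaRN (model path hypothetical, values illustrative):

  # 4x context extension: 16384 = 4 * 4096
  llama-cli -m ./model.gguf -c 16384 --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 4096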
-kvo, --kv-offload, -nkvo, --no-kv-offload
whether to enable KV cache offloading (default: enabled) (env: LLAMA_ARG_KV_OFFLOAD)
--repack, -nr, --no-repack
whether to enable weight repacking (default: enabled) (env: LLAMA_ARG_REPACK)
--no-host
bypass the host buffer, allowing extra buffers to be used (env: LLAMA_ARG_NO_HOST)
-ctk, --cache-type-k TYPE
KV cache data type for K; allowed values: f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1
(default: f16)
(env: LLAMA_ARG_CACHE_TYPE_K)
-ctv, --cache-type-v TYPE
KV cache data type for V; allowed values: f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1
(default: f16)
(env: LLAMA_ARG_CACHE_TYPE_V)
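A sketch quantizing both halves of the KV cache to q8_0 to reduce memory use (model path hypothetical; note that a quantized V cache generally requires Flash Attention, hence -fa on):

  llama-cli -m ./model.gguf -ctk q8_0 -ctv q8_0 -fa on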
-dt, --defrag-thold N
KV cache defragmentation threshold (DEPRECATED) (env: LLAMA_ARG_DEFRAG_THOLD)
-np, --parallel N
number of parallel sequences to decode (default: 1) (env: LLAMA_ARG_N_PARALLEL)
--mlock
force system to keep model in RAM rather than swapping or compressing (env: LLAMA_ARG_MLOCK)
--mmap, --no-mmap
whether to memory-map the model (if mmap is disabled, loading is slower but may reduce pageouts if not using mlock) (default: enabled) (env: LLAMA_ARG_MMAP)
-dio, --direct-io, -ndio, --no-direct-io
use DirectIO if available (default: disabled) (env: LLAMA_ARG_DIO)
--numa TYPE
attempt optimizations that help on some NUMA systems:
- distribute: spread execution evenly over all nodes
- isolate: only spawn threads on CPUs on the node that execution started on
- numactl: use the CPU map provided by numactl
if the program was run previously without this option, it is recommended to drop the system page cache before using it
see ggml-org#1437
(env: LLAMA_ARG_NUMA)
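For example, a sketch of the distribute policy on a multi-socket machine (model path hypothetical; drop the page cache first, as advised above):

  llama-cli -m ./model.gguf --numa distribute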
-dev, --device <dev1,dev2,..>
comma-separated list of devices to use for offloading (none = don't offload); use --list-devices to see a list of available devices
(env: LLAMA_ARG_DEVICE)
--list-devices
print list of available devices and exit
-ot, --override-tensor <tensor name pattern>=<buffer type>,...
override tensor buffer type (env: LLAMA_ARG_OVERRIDE_TENSOR)
-cmoe, --cpu-moe
keep all Mixture of Experts (MoE) weights in the CPU (env: LLAMA_ARG_CPU_MOE)
-ncmoe, --n-cpu-moe N
keep the Mixture of Experts (MoE) weights of the first N layers in the CPU (env: LLAMA_ARG_N_CPU_MOE)
-ngl, --gpu-layers, --n-gpu-layers N
max. number of layers to store in VRAM, either an exact number, 'auto', or 'all' (default: auto) (env: LLAMA_ARG_N_GPU_LAYERS)
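A sketch offloading all layers while keeping the MoE expert weights on the CPU (model path hypothetical; the device name CUDA0 is illustrative, check --list-devices on your system):

  llama-cli --list-devices
  llama-cli -m ./model.gguf -ngl all -dev CUDA0 -cmoe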
-sm, --split-mode {none,layer,row}
how to split the model across multiple GPUs, one of:
- none: use one GPU only
- layer (default): split layers and KV across GPUs
- row: split rows across GPUs
(env: LLAMA_ARG_SPLIT_MODE)
-ts, --tensor-split N0,N1,N2,...
fraction of the model to offload to each GPU, comma-separated list of proportions, e.g. 3,1 (env: LLAMA_ARG_TENSOR_SPLIT)
-mg, --main-gpu INDEX
the GPU to use for the model (with split-mode = none), or for intermediate results and KV (with split-mode = row) (default: 0) (env: LLAMA_ARG_MAIN_GPU)
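A sketch splitting rows across two GPUs in a 3:1 proportion, with GPU 0 handling intermediate results and KV (model path hypothetical, values illustrative):

  llama-cli -m ./model.gguf -sm row -ts 3,1 -mg 0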
-fit, --fit [on|off]
whether to adjust unset arguments to fit in device memory ('on' or 'off', default: 'on') (env: LLAMA_ARG_FIT)
-fitt, --fit-target MiB0,MiB1,MiB2,...
target margin per device for --fit, comma-separated list of values; a single value is broadcast across all devices (default: 1024) (env: LLAMA_ARG_FIT_TARGET)
-fitc, --fit-ctx N
minimum context size that can be set by the --fit option (default: 4096) (env: LLAMA_ARG_FIT_CTX)
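A sketch letting --fit shrink unset parameters while reserving a 2048 MiB margin per device and never dropping the context below 8192 tokens (model path hypothetical, values illustrative):

  llama-cli -m ./model.gguf -fit on -fitt 2048 -fitc 8192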
--check-tensors
check model tensor data for invalid values (default: false)
--override-kv KEY=TYPE:VALUE,...
advanced option to override model metadata by key; to specify multiple overrides, use comma-separated values. types: int, float, bool, str. example: --override-kv tokenizer.ggml.add_bos_token=bool:false,tokenizer.ggml.add_eos_token=bool:false
--op-offload, --no-op-offload
whether to offload host tensor operations to device (default: true)
--lora FNAME
path to LoRA adapter (use comma-separated values to load multiple adapters)
--lora-scaled FNAME:SCALE,...
path to LoRA adapter with user-defined scaling (format: FNAME:SCALE,...); note: use comma-separated values to load multiple adapters
--control-vector FNAME
add a control vector; note: use comma-separated values to add multiple control vectors
--control-vector-scaled FNAME:SCALE,...
add a control vector with user-defined scaling SCALE; note: use comma-separated values (format: FNAME:SCALE,...)
--control-vector-layer-range START END
layer range to apply the control vector(s) to, start and end inclusive
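A sketch loading two scaled LoRA adapters and one control vector applied to layers 10 through 20 (all file names hypothetical):

  llama-cli -m ./model.gguf --lora-scaled ./a.gguf:0.5,./b.gguf:0.8 --control-vector ./calm.gguf --control-vector-layer-range 10 20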
-m, --model FNAME
model path to load (env: LLAMA_ARG_MODEL)
-mu, --model-url MODEL_URL
model download url (default: unused) (env: LLAMA_ARG_MODEL_URL)
-dr, --docker-repo [<repo>/]<model>[:quant]
Docker Hub model repository. repo is optional and defaults to ai/; quant is optional and defaults to :latest. example: gemma3
(default: unused)
(env: LLAMA_ARG_DOCKER_REPO)
-hf, -hfr, --hf-repo <user>/<model>[:quant]
Hugging Face model repository; quant is optional and case-insensitive, defaults to Q4_K_M, and falls back to the first file in the repo if Q4_K_M doesn't exist. mmproj is also downloaded automatically if available; to disable, add --no-mmproj
example: unsloth/phi-4-GGUF:q4_k_m
(default: unused)
(env: LLAMA_ARG_HF_REPO)
-hfd, -hfrd, --hf-repo-draft <user>/<model>[:quant]
Same as --hf-repo, but for the draft model (default: unused) (env: LLAMA_ARG_HFD_REPO)
-hff, --hf-file FILE
Hugging Face model file. If specified, it will override the quant in --hf-repo (default: unused) (env: LLAMA_ARG_HF_FILE)
-hfv, -hfrv, --hf-repo-v <user>/<model>[:quant]
Hugging Face model repository for the vocoder model (default: unused) (env: LLAMA_ARG_HF_REPO_V)
-hffv, --hf-file-v FILE
Hugging Face model file for the vocoder model (default: unused) (env: LLAMA_ARG_HF_FILE_V)
-hft, --hf-token TOKEN
Hugging Face access token (default: value from the HF_TOKEN environment variable) (env: HF_TOKEN)
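For example, fetching the quant shown in the --hf-repo example above straight from Hugging Face (a token is only needed for gated repos):

  # repo and quant taken from the example above
  llama-cli -hf unsloth/phi-4-GGUF:q4_k_m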
--log-disable
Disable logging
--log-file FNAME
Log to a file (env: LLAMA_LOG_FILE)
--log-colors [on|off|auto]
Set colored logging ('on', 'off', or 'auto', default: 'auto'); 'auto' enables colors when output is to a terminal
(env: LLAMA_LOG_COLORS)
-v, --verbose, --log-verbose
Set verbosity level to infinity (i.e. log all messages, useful for debugging)
--offline
Offline mode: forces use of cache, prevents network access (env: LLAMA_OFFLINE)
-lv, --verbosity, --log-verbosity N
Set the verbosity threshold. Messages with a higher verbosity will be ignored. Values:
- 0: generic output
- 1: error
- 2: warning
- 3: info
- 4: debug
(default: 3)
(env: LLAMA_LOG_VERBOSITY)
--log-prefix
Enable prefix in log messages (env: LLAMA_LOG_PREFIX)
--log-timestamps
Enable timestamps in log messages (env: LLAMA_LOG_TIMESTAMPS)
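A sketch enabling full debug output to a timestamped log file (file names hypothetical):

  llama-cli -m ./model.gguf --log-file ./run.log --log-timestamps -lv 4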
-ctkd, --cache-type-k-draft TYPE
KV cache data type for K for the draft model; allowed values: f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1
(default: f16)
(env: LLAMA_ARG_CACHE_TYPE_K_DRAFT)
-ctvd, --cache-type-v-draft TYPE
KV cache data type for V for the draft model; allowed values: f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1
(default: f16)
(env: LLAMA_ARG_CACHE_TYPE_V_DRAFT)
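A sketch pairing the draft-model cache types with speculative decoding (this assumes the companion -md/--model-draft flag from the speculative-decoding options, which is not listed in this section; paths hypothetical):

  # -md loads the draft model; its KV cache is quantized separately
  llama-cli -m ./model.gguf -md ./draft.gguf -ctkd q8_0 -ctvd q8_0 -fa on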