Sampling Parameters#
This doc describes the sampling parameters of the SGLang Runtime. This is the runtime's low-level generation endpoint. If you want a high-level endpoint that automatically handles chat templates, consider using the OpenAI Compatible API.
/generate Endpoint#
The /generate endpoint accepts the following parameters in JSON format. For detailed usage, see the native API doc. The object is defined at io_struct.py::GenerateReqInput. You can also read the source code to find more arguments and docs.
Sampling parameters#
The object is defined at sampling_params.py::SamplingParams. You can also read the source code to find more arguments and docs.
Note on defaults#
By default, SGLang initializes several sampling parameters from the model’s generation_config.json (when the server is launched with --sampling-defaults model, which is the default). To use SGLang/OpenAI constant defaults instead, start the server with --sampling-defaults openai. You can always override any parameter per request via sampling_params.
# Use model-provided defaults from generation_config.json (default behavior)
python -m sglang.launch_server --model-path <MODEL> --sampling-defaults model

# Use SGLang/OpenAI constant defaults instead
python -m sglang.launch_server --model-path <MODEL> --sampling-defaults openai
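Regardless of which mode the server was launched with, any parameter set in a request's sampling_params takes precedence over the server-side default. A minimal sketch (the endpoint URL and parameter values are illustrative):

```python
# Per-request override: fields set in sampling_params always win over
# the server defaults, whether those came from generation_config.json
# or from the SGLang/OpenAI constants.
payload = {
    "text": "The capital of France is",
    "sampling_params": {
        "temperature": 0.7,  # overrides the server-side default
        "top_p": 0.9,
    },
}
# requests.post("http://localhost:30000/generate", json=payload)
```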
Core parameters#
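The full parameter table lives in the source; a minimal sketch of commonly used core parameters (names taken from sampling_params.py::SamplingParams; check the source for exact defaults and ranges):

```python
# Commonly used core sampling parameters, passed per request
# under the "sampling_params" key.
sampling_params = {
    "max_new_tokens": 128,  # maximum number of tokens to generate
    "temperature": 0.7,     # 0 means greedy decoding
    "top_p": 0.95,          # nucleus sampling cutoff
    "top_k": 50,            # restrict sampling to the k most likely tokens
    "min_p": 0.0,           # minimum probability relative to the top token
    "stop": ["\n\n"],       # stop strings that end generation
}
```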
Penalizers#
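A minimal sketch of the penalizer parameters (names taken from sampling_params.py::SamplingParams; the penalty semantics follow the familiar OpenAI-style definitions, but check the source for exact ranges and defaults):

```python
# Penalizer parameters, passed per request under "sampling_params".
sampling_params = {
    "frequency_penalty": 0.5,   # penalize tokens proportionally to how often they appeared
    "presence_penalty": 0.5,    # penalize tokens that have appeared at least once
    "repetition_penalty": 1.1,  # multiplicative penalty on repeated tokens
    "min_new_tokens": 4,        # suppress EOS until this many tokens are generated
}
```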
Constrained decoding#
Please refer to our dedicated guide on constrained decoding for the following parameters.
Other options#
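A minimal sketch of a few remaining options (names taken from sampling_params.py::SamplingParams; consult the source for the complete list):

```python
# Miscellaneous sampling options, passed per request under "sampling_params".
sampling_params = {
    "n": 2,                       # number of completions to generate per prompt
    "ignore_eos": False,          # if True, keep generating past EOS (useful for benchmarking)
    "skip_special_tokens": True,  # strip special tokens from the detokenized output
}
```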
Examples#
Normal#
Launch a server:
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000
Send a request:
import requests

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "The capital of France is",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 32,
        },
    },
)
print(response.json())
A detailed example can be found in the send request doc.
Streaming#
Send a request and stream the output:
import requests, json

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "The capital of France is",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 32,
        },
        "stream": True,
    },
    stream=True,
)

prev = 0
for chunk in response.iter_lines(decode_unicode=False):
    chunk = chunk.decode("utf-8")
    if chunk and chunk.startswith("data:"):
        if chunk == "data: [DONE]":
            break
        data = json.loads(chunk[5:].strip("\n"))
        output = data["text"].strip()
        print(output[prev:], end="", flush=True)
        prev = len(output)
print("")
A detailed example can be found in the OpenAI Compatible API doc.
Multimodal#
Launch a server:
python3 -m sglang.launch_server --model-path lmms-lab/llava-onevision-qwen2-7b-ov
Download an image:
curl -o example_image.png -L https://github.com/sgl-project/sglang/blob/main/examples/assets/example_image.png?raw=true
Send a request:
import requests

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
        "<|im_start|>user\n<image>\nDescribe this image in a very short sentence.<|im_end|>\n"
        "<|im_start|>assistant\n",
        "image_data": "example_image.png",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 32,
        },
    },
)
print(response.json())
The image_data can be a file name, a URL, or a base64 encoded string. See also python/sglang/srt/utils.py:load_image.
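A minimal sketch of building the base64-encoded form of image_data (the byte string here is a stand-in; in practice, read the bytes from your image file):

```python
import base64

# image_data may be a file path, a URL, or a base64-encoded string.
# Encode raw image bytes into the base64 string form:
image_bytes = b"\x89PNG\r\n\x1a\n"  # stand-in for real PNG file contents
image_data = base64.b64encode(image_bytes).decode("utf-8")
# image_data can now be sent as the "image_data" field of the request.
```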
Streaming is supported in a similar manner as above.
A detailed example can be found in the OpenAI API Vision doc.
Structured Outputs (JSON, Regex, EBNF)#
You can specify a JSON schema, regular expression or EBNF to constrain the model output. The model output will be guaranteed to follow the given constraints. Only one constraint parameter (json_schema, regex, or ebnf) can be specified for a request.
SGLang supports two grammar backends:
XGrammar (default): Supports JSON schema, regular expression, and EBNF constraints.
XGrammar currently uses the GGML BNF format.
Outlines: Supports JSON schema and regular expression constraints.
If you want to use the Outlines backend instead, launch the server with the --grammar-backend outlines flag:
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
    --port 30000 --host 0.0.0.0 \
    --grammar-backend [xgrammar|outlines]  # xgrammar or outlines (default: xgrammar)
import json

import requests

json_schema = json.dumps(
    {
        "type": "object",
        "properties": {
            "name": {"type": "string", "pattern": "^[\\w]+$"},
            "population": {"type": "integer"},
        },
        "required": ["name", "population"],
    }
)

# JSON (works with both Outlines and XGrammar)
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Here is the information of the capital of France in the JSON format.\n",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 64,
            "json_schema": json_schema,
        },
    },
)
print(response.json())

# Regular expression (Outlines backend only)
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Paris is the capital of",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 64,
            "regex": "(France|England)",
        },
    },
)
print(response.json())

# EBNF (XGrammar backend only)
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Write a greeting.",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 64,
            "ebnf": 'root ::= "Hello" | "Hi" | "Hey"',
        },
    },
)
print(response.json())
A detailed example can be found in the structured outputs doc.
Custom logit processor#
Launch a server with the --enable-custom-logit-processor flag enabled.
python -m sglang.launch_server \
    --model-path meta-llama/Meta-Llama-3-8B-Instruct \
    --port 30000 \
    --enable-custom-logit-processor
Define a custom logit processor that will always sample a specific token id.
from sglang.srt.sampling.custom_logit_processor import CustomLogitProcessor


class DeterministicLogitProcessor(CustomLogitProcessor):
    """A dummy logit processor that changes the logits to always
    sample the given token id.
    """

    def __call__(self, logits, custom_param_list):
        # Check that the number of logits matches the number of custom parameters
        assert logits.shape[0] == len(custom_param_list)
        key = "token_id"

        for i, param_dict in enumerate(custom_param_list):
            # Mask all other tokens
            logits[i, :] = -float("inf")
            # Assign highest probability to the specified token
            logits[i, param_dict[key]] = 0.0
        return logits
Send a request:
import requests

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "The capital of France is",
        "custom_logit_processor": DeterministicLogitProcessor().to_str(),
        "sampling_params": {
            "temperature": 0.0,
            "max_new_tokens": 32,
            "custom_params": {"token_id": 5},
        },
    },
)
print(response.json())
Send an OpenAI chat completion request:
import openai

from sglang.utils import print_highlight

client = openai.Client(base_url="http://127.0.0.1:30000/v1", api_key="None")

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "List 3 countries and their capitals."},
    ],
    temperature=0.0,
    max_tokens=32,
    extra_body={
        "custom_logit_processor": DeterministicLogitProcessor().to_str(),
        "custom_params": {"token_id": 5},
    },
)
print_highlight(f"Response: {response}")