
Commit cdb7de6

[docs] Fix several consistency issues in sampling_params.md
Signed-off-by: windsonsea <[email protected]>
1 parent 072df75 commit cdb7de6


docs/backend/sampling_params.md

Lines changed: 25 additions & 19 deletions
@@ -6,27 +6,27 @@ If you want a high-level endpoint that can automatically handle chat templates,

 ## `/generate` Endpoint

-The `/generate` endpoint accepts the following parameters in JSON format. For in detail usage see the [native api doc](./native_api.ipynb).
+The `/generate` endpoint accepts the following parameters in JSON format. For detailed usage, see the [native API doc](./native_api.ipynb).

 * `text: Optional[Union[List[str], str]] = None` The input prompt. Can be a single prompt or a batch of prompts.
 * `input_ids: Optional[Union[List[List[int]], List[int]]] = None` Alternative to `text`. Specify the input as token IDs instead of text.
 * `sampling_params: Optional[Union[List[Dict], Dict]] = None` The sampling parameters as described in the sections below.
 * `return_logprob: Optional[Union[List[bool], bool]] = None` Whether to return log probabilities for tokens.
-* `logprob_start_len: Optional[Union[List[int], int]] = None` If returning log probabilities, specifies the start position in the prompt. Default is "-1" which returns logprobs only for output tokens.
+* `logprob_start_len: Optional[Union[List[int], int]] = None` If returning log probabilities, specifies the start position in the prompt. Default is "-1", which returns logprobs only for output tokens.
 * `top_logprobs_num: Optional[Union[List[int], int]] = None` If returning log probabilities, specifies the number of top logprobs to return at each position.
 * `stream: bool = False` Whether to stream the output.
 * `lora_path: Optional[Union[List[Optional[str]], Optional[str]]] = None` Path to LoRA weights.
 * `custom_logit_processor: Optional[Union[List[Optional[str]], str]] = None` Custom logit processor for advanced sampling control. For usage see below.
-* `return_hidden_states: bool = False` Whether to return hidden states of the model. Note that each time it changes, the cuda graph will be recaptured, which might lead to a performance hit. See the [examples](https://github.com/sgl-project/sglang/blob/main/examples/runtime/hidden_states) for more information.
+* `return_hidden_states: bool = False` Whether to return hidden states of the model. Note that each time it changes, the CUDA graph will be recaptured, which might lead to a performance hit. See the [examples](https://github.com/sgl-project/sglang/blob/main/examples/runtime/hidden_states) for more information.
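
To see how these request-level fields fit together, here is a minimal sketch (it assumes a locally launched server on port 30000, as in the examples below; the exact response schema may differ between versions):

```python
import requests

# Batched request: two prompts, with logprobs returned only for output tokens
# (logprob_start_len = -1) and the top-2 alternatives at each position.
response = requests.post(
    "http://localhost:30000/generate",  # assumed local server address
    json={
        "text": ["The capital of France is", "The capital of Japan is"],
        "sampling_params": {"max_new_tokens": 32, "temperature": 0},
        "return_logprob": True,
        "logprob_start_len": -1,
        "top_logprobs_num": 2,
    },
)
print(response.json())
```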

-## Sampling params
+## Sampling parameters

-### Core Parameters
+### Core parameters

 * `max_new_tokens: int = 128` The maximum output length measured in tokens.
 * `stop: Optional[Union[str, List[str]]] = None` One or multiple [stop words](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stop). Generation will stop if one of these words is sampled.
-* `stop_token_ids: Optional[List[int]] = None` Provide stop words in form of token ids. Generation will stop if one of these token ids is sampled.
-* `temperature: float = 1.0` [Temperature](https://platform.openai.com/docs/api-reference/chat/create#chat-create-temperature) when sampling the next token. `temperature = 0` corresponds to greedy sampling, higher temperature leads to more diversity.
+* `stop_token_ids: Optional[List[int]] = None` Provide stop words in the form of token IDs. Generation will stop if one of these token IDs is sampled.
+* `temperature: float = 1.0` [Temperature](https://platform.openai.com/docs/api-reference/chat/create#chat-create-temperature) when sampling the next token. `temperature = 0` corresponds to greedy sampling, a higher temperature leads to more diversity.
 * `top_p: float = 1.0` [Top-p](https://platform.openai.com/docs/api-reference/chat/create#chat-create-top_p) selects tokens from the smallest sorted set whose cumulative probability exceeds `top_p`. When `top_p = 1`, this reduces to unrestricted sampling from all tokens.
 * `top_k: int = -1` [Top-k](https://developer.nvidia.com/blog/how-to-get-better-outputs-from-your-large-language-model/#predictability_vs_creativity) randomly selects from the `k` highest-probability tokens.
 * `min_p: float = 0.0` [Min-p](https://github.com/huggingface/transformers/issues/27670) samples from tokens with probability larger than `min_p * highest_token_probability`.
@@ -36,7 +36,7 @@ The `/generate` endpoint accepts the following parameters in JSON format. For in
 * `frequency_penalty: float = 0.0`: Penalizes tokens based on their frequency in generation so far. Must be between `-2` and `2`, where negative numbers encourage repetition of tokens and positive numbers encourage sampling of new tokens. The scaling of the penalization grows linearly with each appearance of a token.
 * `presence_penalty: float = 0.0`: Penalizes tokens if they appeared in the generation so far. Must be between `-2` and `2`, where negative numbers encourage repetition of tokens and positive numbers encourage sampling of new tokens. The scaling of the penalization is constant if a token occurred.
 * `repetition_penalty: float = 0.0`: Penalizes tokens if they appeared in the prompt or generation so far. Must be between `0` and `2`, where numbers smaller than `1` encourage repetition of tokens and numbers larger than `1` encourage sampling of new tokens. The penalization scales multiplicatively.
-* `min_new_tokens: int = 0`: Forces the model to generate at least `min_new_tokens` until a stop word or EOS token is sampled. Note that this might lead to unintended behavior for example if the distribution is highly skewed towards these tokens.
+* `min_new_tokens: int = 0`: Forces the model to generate at least `min_new_tokens` until a stop word or EOS token is sampled. Note that this might lead to unintended behavior, for example, if the distribution is highly skewed towards these tokens.
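
Taken together, a typical `sampling_params` dictionary built from the core parameters might look like the following sketch (the values are purely illustrative):

```python
# Illustrative core sampling configuration; tune the values for your use case.
sampling_params = {
    "max_new_tokens": 256,     # cap on the output length
    "stop": ["\n\n", "###"],   # stop words
    "temperature": 0.7,        # 0 = greedy, higher = more diverse
    "top_p": 0.9,              # nucleus sampling threshold
    "top_k": 50,               # restrict sampling to the 50 most likely tokens
    "min_p": 0.05,             # drop tokens below 5% of the top token's probability
    "frequency_penalty": 0.3,  # discourage frequent repeats
    "min_new_tokens": 8,       # force at least 8 output tokens
}
# Pass this dictionary as the "sampling_params" field of a /generate request.
```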

 ### Constrained decoding

@@ -48,20 +48,20 @@ Please refer to our dedicated guide on [constrained decoding](./structured_outpu

 ### Other options

-* `n: int = 1`: Specifies the number of output sequences to generate per request. (Generating multiple outputs in one request (n > 1) is discouraged; repeat the same prompts for several times offer better control and efficiency.)
+* `n: int = 1`: Specifies the number of output sequences to generate per request. (Generating multiple outputs in one request (n > 1) is discouraged; repeating the same prompts several times offers better control and efficiency.)
 * `spaces_between_special_tokens: bool = True`: Whether or not to add spaces between special tokens during detokenization.
 * `no_stop_trim: bool = False`: Don't trim stop words or EOS token from the generated text.
 * `ignore_eos: bool = False`: Don't stop generation when EOS token is sampled.
 * `skip_special_tokens: bool = True`: Remove special tokens during decoding.
-* `custom_params: Optional[List[Optional[Dict[str, Any]]]] = None`: Used when employing `CustomLogitProcessor`. For usage see below.
+* `custom_params: Optional[List[Optional[Dict[str, Any]]]] = None`: Used when employing `CustomLogitProcessor`. For usage, see below.
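
As a small illustration of these options, the sketch below forces a fixed-length completion by ignoring EOS and keeps special tokens visible (it assumes a local server on port 30000):

```python
import requests

# Generate exactly 64 tokens regardless of EOS, without trimming stop words.
response = requests.post(
    "http://localhost:30000/generate",  # assumed local server address
    json={
        "text": "Write a haiku about autumn.",
        "sampling_params": {
            "max_new_tokens": 64,
            "ignore_eos": True,            # don't stop at the EOS token
            "skip_special_tokens": False,  # keep special tokens in the output
            "no_stop_trim": True,          # don't trim stop words from the text
        },
    },
)
print(response.json())
```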

 ## Examples

 ### Normal

 Launch a server:

-```
+```bash
 python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000
 ```
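
Once the server is up, a streamed request can be sketched as follows; it assumes the stream yields `data: ...` JSON chunks carrying the cumulative `text`, so adjust the parsing if your version behaves differently:

```python
import json
import requests

# Stream tokens as they are generated and print only the newly produced part.
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Explain retrieval-augmented generation in one paragraph.",
        "sampling_params": {"max_new_tokens": 128, "temperature": 0.7},
        "stream": True,
    },
    stream=True,
)

prev_len = 0
for line in response.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    text = chunk.get("text", "")
    print(text[prev_len:], end="", flush=True)  # assumes cumulative text per chunk
    prev_len = len(text)
print()
```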

@@ -120,17 +120,17 @@ print("")

 Detailed example in [openai compatible api](https://docs.sglang.ai/backend/openai_api_completions.html#id2).

-### Multi modal
+### Multimodal

 Launch a server:

-```
+```bash
 python3 -m sglang.launch_server --model-path lmms-lab/llava-onevision-qwen2-7b-ov --chat-template chatml-llava
 ```

 Download an image:

-```
+```bash
 curl -o example_image.png -L https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true
 ```
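
With the image saved locally, a request can then reference it. In the sketch below, the `image_data` field and the `<image>` placeholder in the prompt are assumptions based on the native API docs, so adapt them to your chat template:

```python
import requests

# Vision request: a text prompt plus the path of the image downloaded above.
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "<image>\nDescribe this image in one sentence.",  # placeholder assumed
        "image_data": "example_image.png",                        # field name assumed
        "sampling_params": {"max_new_tokens": 64, "temperature": 0},
    },
)
print(response.json())
```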

@@ -169,9 +169,10 @@ SGLang supports two grammar backends:

 - [Outlines](https://github.com/dottxt-ai/outlines) (default): Supports JSON schema and regular expression constraints.
 - [XGrammar](https://github.com/mlc-ai/xgrammar): Supports JSON schema, regular expression, and EBNF constraints.
-  - XGrammar currently uses the [GGML BNF format](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md)
+  - XGrammar currently uses the [GGML BNF format](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md).
+
+Initialize the XGrammar backend using `--grammar-backend xgrammar` flag:

-Initialize the XGrammar backend using `--grammar-backend xgrammar` flag
 ```bash
 python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
 --port 30000 --host 0.0.0.0 --grammar-backend [xgrammar|outlines] # xgrammar or outlines (default: outlines)
@@ -234,13 +235,17 @@ print(response.json())
 ```

 Detailed example in [structured outputs](./structured_outputs.ipynb).
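
In short, a constrained request adds a grammar key to `sampling_params`. The sketch below assumes the `json_schema` key described in the structured outputs guide (passed as a JSON string); `regex` and `ebnf` constraints follow the same pattern:

```python
import json
import requests

# Constrain the output to a small JSON object.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["name", "population"],
}

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Give information about the capital of France in JSON.",
        "sampling_params": {
            "max_new_tokens": 128,
            "temperature": 0,
            "json_schema": json.dumps(schema),  # key name assumed from the guide
        },
    },
)
print(response.json())
```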
-### Custom Logit Processor
+
+### Custom logit processor
+
 Launch a server with `--enable-custom-logit-processor` flag on.
-```
+
+```bash
 python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000 --enable-custom-logit-processor
 ```

 Define a custom logit processor that will always sample a specific token id.
+
 ```python
 from sglang.srt.sampling.custom_logit_processor import CustomLogitProcessor

@@ -262,7 +267,8 @@ class DeterministicLogitProcessor(CustomLogitProcessor):
         return logits
 ```

-Send a request
+Send a request:
+
 ```python
 import requests
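
# ----------------------------------------------------------------------
# Illustrative sketch only: the rest of the original example lies outside
# the changed hunk above. A request exercising the processor might look
# roughly like this; `to_str()` and the `custom_params` shape are
# assumptions, so adapt them to the SGLang version you are running.
response = requests.post(
    "http://localhost:30000/generate",  # assumed local server address
    json={
        "text": "The capital of France is",
        # Serialized processor defined above (serialization helper name assumed).
        "custom_logit_processor": DeterministicLogitProcessor().to_str(),
        "sampling_params": {
            "temperature": 0.0,
            "max_new_tokens": 8,
            # Extra arguments consumed by the processor (shape assumed).
            "custom_params": {"token_id": 5},
        },
    },
)
print(response.json())
```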
