@@ -119,14 +119,15 @@ def create(
               As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token
               from being generated.
 
-          logprobs: Include the log probabilities on the `logprobs` most likely tokens, as well the
-              chosen tokens. For example, if `logprobs` is 5, the API will return a list of
-              the 5 most likely tokens. The API will always return the `logprob` of the
-              sampled token, so there may be up to `logprobs+1` elements in the response.
+          logprobs: Include the log probabilities on the `logprobs` most likely output tokens, as
+              well the chosen tokens. For example, if `logprobs` is 5, the API will return a
+              list of the 5 most likely tokens. The API will always return the `logprob` of
+              the sampled token, so there may be up to `logprobs+1` elements in the response.
 
               The maximum value for `logprobs` is 5.
 
-          max_tokens: The maximum number of [tokens](/tokenizer) to generate in the completion.
+          max_tokens: The maximum number of [tokens](/tokenizer) that can be generated in the
+              completion.
 
               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
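The `logprobs+1` behavior described in the updated docstring can be sketched locally (the helper name and data shapes below are hypothetical illustrations, not part of the SDK):

```python
def merge_logprobs(
    sampled_token: str,
    sampled_logprob: float,
    top_logprobs: dict[str, float],
) -> dict[str, float]:
    """Combine the top-`logprobs` tokens with the sampled token's logprob.

    Mirrors the documented behavior: the sampled token's logprob is always
    included, so the result holds up to len(top_logprobs) + 1 entries --
    one extra only when the sampled token is not already among the top tokens.
    """
    merged = dict(top_logprobs)
    merged.setdefault(sampled_token, sampled_logprob)
    return merged


# With logprobs=5, a sampled token outside the top 5 yields 6 entries;
# a sampled token already in the top 5 yields 5.
top5 = {"a": -0.1, "b": -1.2, "c": -2.3, "d": -3.1, "e": -4.0}
print(len(merge_logprobs("z", -5.0, top5)))  # 6
print(len(merge_logprobs("a", -0.1, top5)))  # 5
```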
@@ -288,14 +289,15 @@ def create(
               As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token
               from being generated.
 
-          logprobs: Include the log probabilities on the `logprobs` most likely tokens, as well the
-              chosen tokens. For example, if `logprobs` is 5, the API will return a list of
-              the 5 most likely tokens. The API will always return the `logprob` of the
-              sampled token, so there may be up to `logprobs+1` elements in the response.
+          logprobs: Include the log probabilities on the `logprobs` most likely output tokens, as
+              well the chosen tokens. For example, if `logprobs` is 5, the API will return a
+              list of the 5 most likely tokens. The API will always return the `logprob` of
+              the sampled token, so there may be up to `logprobs+1` elements in the response.
 
               The maximum value for `logprobs` is 5.
 
-          max_tokens: The maximum number of [tokens](/tokenizer) to generate in the completion.
+          max_tokens: The maximum number of [tokens](/tokenizer) that can be generated in the
+              completion.
 
               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
@@ -450,14 +452,15 @@ def create(
               As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token
               from being generated.
 
-          logprobs: Include the log probabilities on the `logprobs` most likely tokens, as well the
-              chosen tokens. For example, if `logprobs` is 5, the API will return a list of
-              the 5 most likely tokens. The API will always return the `logprob` of the
-              sampled token, so there may be up to `logprobs+1` elements in the response.
+          logprobs: Include the log probabilities on the `logprobs` most likely output tokens, as
+              well the chosen tokens. For example, if `logprobs` is 5, the API will return a
+              list of the 5 most likely tokens. The API will always return the `logprob` of
+              the sampled token, so there may be up to `logprobs+1` elements in the response.
 
               The maximum value for `logprobs` is 5.
 
-          max_tokens: The maximum number of [tokens](/tokenizer) to generate in the completion.
+          max_tokens: The maximum number of [tokens](/tokenizer) that can be generated in the
+              completion.
 
               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
@@ -687,14 +690,15 @@ async def create(
               As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token
               from being generated.
 
-          logprobs: Include the log probabilities on the `logprobs` most likely tokens, as well the
-              chosen tokens. For example, if `logprobs` is 5, the API will return a list of
-              the 5 most likely tokens. The API will always return the `logprob` of the
-              sampled token, so there may be up to `logprobs+1` elements in the response.
+          logprobs: Include the log probabilities on the `logprobs` most likely output tokens, as
+              well the chosen tokens. For example, if `logprobs` is 5, the API will return a
+              list of the 5 most likely tokens. The API will always return the `logprob` of
+              the sampled token, so there may be up to `logprobs+1` elements in the response.
 
               The maximum value for `logprobs` is 5.
 
-          max_tokens: The maximum number of [tokens](/tokenizer) to generate in the completion.
+          max_tokens: The maximum number of [tokens](/tokenizer) that can be generated in the
+              completion.
 
               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
@@ -856,14 +860,15 @@ async def create(
               As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token
               from being generated.
 
-          logprobs: Include the log probabilities on the `logprobs` most likely tokens, as well the
-              chosen tokens. For example, if `logprobs` is 5, the API will return a list of
-              the 5 most likely tokens. The API will always return the `logprob` of the
-              sampled token, so there may be up to `logprobs+1` elements in the response.
+          logprobs: Include the log probabilities on the `logprobs` most likely output tokens, as
+              well the chosen tokens. For example, if `logprobs` is 5, the API will return a
+              list of the 5 most likely tokens. The API will always return the `logprob` of
+              the sampled token, so there may be up to `logprobs+1` elements in the response.
 
               The maximum value for `logprobs` is 5.
 
-          max_tokens: The maximum number of [tokens](/tokenizer) to generate in the completion.
+          max_tokens: The maximum number of [tokens](/tokenizer) that can be generated in the
+              completion.
 
               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
@@ -1018,14 +1023,15 @@ async def create(
               As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token
               from being generated.
 
-          logprobs: Include the log probabilities on the `logprobs` most likely tokens, as well the
-              chosen tokens. For example, if `logprobs` is 5, the API will return a list of
-              the 5 most likely tokens. The API will always return the `logprob` of the
-              sampled token, so there may be up to `logprobs+1` elements in the response.
+          logprobs: Include the log probabilities on the `logprobs` most likely output tokens, as
+              well the chosen tokens. For example, if `logprobs` is 5, the API will return a
+              list of the 5 most likely tokens. The API will always return the `logprob` of
+              the sampled token, so there may be up to `logprobs+1` elements in the response.
 
               The maximum value for `logprobs` is 5.
 
-          max_tokens: The maximum number of [tokens](/tokenizer) to generate in the completion.
+          max_tokens: The maximum number of [tokens](/tokenizer) that can be generated in the
+              completion.
 
               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
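The `max_tokens` constraint repeated in each hunk (prompt tokens plus `max_tokens` must fit within the model's context length) can be sketched as a simple budget check; the function name and numbers are hypothetical, not part of the SDK:

```python
def fits_context(prompt_tokens: int, max_tokens: int, context_length: int) -> bool:
    # The token count of the prompt plus `max_tokens` cannot exceed
    # the model's context length.
    return prompt_tokens + max_tokens <= context_length


# e.g. with a 4096-token context, a 4000-token prompt leaves room
# for at most 96 completion tokens.
print(fits_context(4000, 96, 4096))   # True
print(fits_context(4000, 97, 4096))   # False
```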