2 files changed: +2 −2 lines changed

@@ -1200,7 +1200,7 @@ def generate(
         input_ids_seq_length = input_ids.shape[-1]
         if max_length is None and max_new_tokens is None:
             warnings.warn(
-                "Neither `max_length` nor `max_new_tokens` have been set, `max_length` will default to "
+                "Neither `max_length` nor `max_new_tokens` has been set, `max_length` will default to "
                 f"{self.config.max_length} (`self.config.max_length`). Controlling `max_length` via the config is "
                 "deprecated and `max_length` will be removed from the config in v5 of Transformers -- we recommend "
                 "using `max_new_tokens` to control the maximum length of the generation.",
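The fallback the warning describes can be sketched in plain Python. This is a simplified illustration of the resolution logic, not the actual Transformers implementation; the helper name `resolve_max_length` and its parameters are invented for the example. The key distinction is that `max_length` caps the total sequence (prompt included), while `max_new_tokens` counts only freshly generated tokens:

```python
import warnings


def resolve_max_length(config_max_length, input_len, max_length=None, max_new_tokens=None):
    """Toy sketch of how a total generation budget could be resolved."""
    if max_length is None and max_new_tokens is None:
        # Deprecated path from the diff above: silently inherit the
        # config value, but warn the user about it.
        warnings.warn(
            "Neither `max_length` nor `max_new_tokens` has been set; "
            "falling back to the config value."
        )
        return config_max_length
    if max_new_tokens is not None:
        # `max_new_tokens` is relative to the prompt, so the total
        # budget is prompt length plus the new-token budget.
        return input_len + max_new_tokens
    return max_length
```

For a 5-token prompt, `resolve_max_length(20, 5, max_new_tokens=10)` yields a total budget of 15 regardless of the config value, which is why the warning recommends `max_new_tokens` over the config-driven default.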
@@ -220,7 +220,7 @@
         input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
             Indices of input sequence tokens in the vocabulary.

-            IIndices can be obtained using [`FSTMTokenizer`]. See [`PreTrainedTokenizer.encode`] and
+            Indices can be obtained using [`FSTMTokenizer`]. See [`PreTrainedTokenizer.encode`] and
             [`PreTrainedTokenizer.__call__`] for details.

             [What are input IDs?](../glossary#input-ids)