Commit 63cf323

fix: max_tokens typo in Mistral Chat (#740)

1 parent 31e61b7

File tree

1 file changed: +1 −1 lines changed
  • integrations/amazon_bedrock/src/haystack_integrations/components/generators/amazon_bedrock/chat


integrations/amazon_bedrock/src/haystack_integrations/components/generators/amazon_bedrock/chat/adapters.py (+1 −1)

@@ -336,7 +336,7 @@ def __init__(self, generation_kwargs: Dict[str, Any]):
         self.prompt_handler = DefaultPromptHandler(
             tokenizer=tokenizer,
             model_max_length=model_max_length,
-            max_length=self.generation_kwargs.get("max_gen_len") or 512,
+            max_length=self.generation_kwargs.get("max_tokens") or 512,
         )

     def prepare_body(self, messages: List[ChatMessage], **inference_kwargs) -> Dict[str, Any]:
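A minimal sketch of why the one-line change matters, assuming a caller passes the Mistral Chat parameter name `max_tokens` in `generation_kwargs` (the dict and variable names here are illustrative, not the actual Haystack adapter code): before the fix, the lookup used the wrong key `max_gen_len`, so the handler silently fell back to the 512 default even when the user had set a limit.

```python
# Hypothetical generation_kwargs as a Mistral Chat user would supply them.
generation_kwargs = {"max_tokens": 256}

# Before the fix: wrong key, .get() returns None, so the 512 fallback wins.
max_length_before = generation_kwargs.get("max_gen_len") or 512

# After the fix: the key Mistral Chat actually uses is honored.
max_length_after = generation_kwargs.get("max_tokens") or 512

print(max_length_before)  # 512 (user's setting ignored)
print(max_length_after)   # 256 (user's setting applied)
```

The `or 512` pattern also means an explicit `max_tokens=0` would fall back to 512, but for positive token limits the fixed lookup behaves as intended.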

0 commit comments
