Custom request headers | trust_remote_code param fix #3069
This PR includes:
Custom Headers
An option was added to pass custom headers to the `local-chat-completions` and `local-completions` endpoints. It is done through the `model_args` parameter:

```
lm_eval \
  ..... \
  --model_args '{"base_url": "https://some.endpoint.com", ...., "headers": {"authorization": "some_custom_string", "other_header": "some_other_data"}}'
```

This is useful in cases where the endpoint requires custom authentication or other headers. It was also requested in #2782.
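As a rough illustration of what the new option does, here is a minimal sketch of merging the `"headers"` entry from the `--model_args` JSON over a client's default headers. The function and default names are illustrative, not the harness's actual internals:

```python
import json


def build_request_headers(model_args: str) -> dict:
    """Merge custom headers from a --model_args JSON string over defaults.

    The "headers" key (the option added in this PR) overrides the
    client defaults. Illustrative sketch only; names are not the
    exact harness internals.
    """
    defaults = {"Content-Type": "application/json"}
    parsed = json.loads(model_args)
    # Custom headers win over defaults, e.g. for bearer-token auth.
    defaults.update(parsed.get("headers", {}))
    return defaults


args = '{"base_url": "https://example.com/v1", "headers": {"authorization": "Bearer token123"}}'
print(build_request_headers(args))
```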
Trust Remote Code Bug
Finally, while making these changes I found that the `--trust_remote_code` parameter was not working correctly. The behavior of `--model_args` was changed in #2629, but the flag handling was left unchanged. This means that when `--model_args` is valid JSON and is converted into a `dict()`, the current implementation fails (see `lm-evaluation-harness/lm_eval/__main__.py`, line 438 at 9fbe48c). This PR adds conditional handling of the `args.model_args` variable to account for its `str` or `dict` nature.

EDIT: Removed the `custom_model_name` header description; it was dropped from the PR because it was redundant with the use of `tokenizer`.
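The conditional handling described above can be sketched as follows. This is an illustrative version under the assumption that a non-JSON `--model_args` arrives as a `"key=value,key=value"` string, not the harness's exact code:

```python
def parse_model_args(model_args):
    """Accept --model_args as either a dict or a key=value string.

    After #2629, a JSON-formatted --model_args reaches this point
    already parsed into a dict, so unconditionally applying string
    parsing (the old behavior) fails. Illustrative sketch only.
    """
    if isinstance(model_args, dict):
        return model_args  # already parsed from JSON
    # Otherwise treat it as a comma-separated "key=value" string.
    out = {}
    for pair in model_args.split(","):
        key, _, value = pair.partition("=")
        out[key.strip()] = value.strip()
    return out


print(parse_model_args({"trust_remote_code": True}))
print(parse_model_args("pretrained=gpt2,trust_remote_code=True"))
```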