Custom request headers | trust_remote_code param fix #3069


Open · wants to merge 3 commits into main

Conversation

@RawthiL (Contributor) commented Jun 18, 2025

This PR includes:

Custom Headers

An option was added to pass custom headers to the local-chat-completions and local-completions endpoints. The headers are supplied through the model_args parameter:

lm_eval \
..... \
--model_args '{"base_url":"https://some.endpoint.com", .... ,"headers":{"authorization": "some_custom_string", "other_header":"some_other_data"}}'

This is useful in cases where the endpoint needs custom authentication or other headers. It was also requested in #2782.
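
As a rough sketch of what the option enables (the function and the way headers are forwarded below are illustrative assumptions, not the harness's actual code), the headers dict supplied via model_args is simply sent with each HTTP request:

    # Minimal sketch, not the actual lm-evaluation-harness code: shows how a
    # headers dict passed via --model_args could be forwarded on each request.
    from typing import Optional

    import requests


    def post_completion(base_url: str, payload: dict, headers: Optional[dict] = None) -> dict:
        # Caller-supplied headers (e.g. {"authorization": "...", "other_header": "..."})
        # are sent verbatim; if none are given, no extra headers are attached.
        resp = requests.post(base_url, json=payload, headers=headers or {})
        resp.raise_for_status()
        return resp.json()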

Trust Remote Code Bug

Finally, while making these changes I found that the --trust_remote_code parameter was not working correctly. The behavior of --model_args was changed in #2629, but the flag handling was left unchanged. This means that when --model_args is valid JSON and is converted into a dict(), the current implementation will fail (see

args.model_args = args.model_args + ",trust_remote_code=True"
).
This PR adds conditional handling of the args.model_args variable to account for its str or dict nature.
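
A minimal sketch of that conditional handling (not the exact patch):

    # Minimal sketch of the idea (not the exact patch): append trust_remote_code
    # whether model_args was given as a comma-separated string or parsed into a dict.
    def add_trust_remote_code(model_args):
        if isinstance(model_args, dict):
            model_args["trust_remote_code"] = True
        elif isinstance(model_args, str) and model_args:
            model_args = model_args + ",trust_remote_code=True"
        else:  # empty string or None
            model_args = "trust_remote_code=True"
        return model_args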

EDIT: Removed the custom_model_name header description; that feature was dropped from the PR because it was redundant with using tokenizer.

@baberabb (Contributor)

Hi! Seems appropriate, but wouldn't it make more sense to return the custom_headers (if not None; you can just call it headers) in the base method?

@cached_property
def header(self) -> dict:
"""Override this property to return the headers for the API request."""
return {"Authorization": f"Bearer {self.api_key}"}

Will also be more maintainable.

wrt the model_name, you can pass a different tokenizer string to instantiate the tokenizer. Is that not sufficient?

@RawthiL (Contributor, Author) commented Jun 19, 2025

Thanks for the review!

I tried not to touch TemplateAPI because it seemed very OpenAI-centric. However, according to the API guide, it is intended to be a versatile superclass, and the guide says that any API should override the header method. In that spirit I modified the base class method to remove the OpenAI header:

return {"Authorization": f"Bearer {self.api_key}"}

and made it return None if not overridden by the subclass.
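
Roughly, the described base-class change would look like the following (an assumed shape, not the exact diff):

    # Rough sketch of the described base-class change (assumed shape, not the exact diff):
    # TemplateAPI.header no longer hard-codes the OpenAI-style Authorization header and
    # returns None unless a subclass overrides it.
    from functools import cached_property
    from typing import Optional


    class TemplateAPI:
        @cached_property
        def header(self) -> Optional[dict]:
            """Override this property to return the headers for the API request."""
            return None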

Then, I modified the LocalCompletionsAPI class, which is the base class for LocalChatCompletion, OpenAICompletionsAPI and OpenAIChatCompletion. I included the override there:

    @cached_property
    def header(self) -> dict:
        """Set API headers."""
        return self.headers

Now all classes that inherit from LocalCompletionsAPI can set the headers to any passed value (via the headers parameter), and the OpenAI classes just set their own header after initialization of the superclass. Also, if no header is passed, the request now includes no headers (as expected); before, it always included "Authorization: Bearer ".
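
A hypothetical usage sketch of the resulting behavior, with names taken from the description above rather than the final diff:

    # Hypothetical usage sketch (names follow the description above, not verified
    # against the final diff): a LocalCompletionsAPI-style model stores the headers
    # passed at construction and exposes them through the overridden header property.
    from functools import cached_property
    from typing import Optional


    class LocalCompletionsAPI:
        def __init__(self, base_url: str, headers: Optional[dict] = None, **kwargs):
            self.base_url = base_url
            self.headers = headers  # e.g. {"authorization": "some_custom_string"}

        @cached_property
        def header(self) -> Optional[dict]:
            """Set API headers."""
            return self.headers


    model = LocalCompletionsAPI(
        base_url="https://some.endpoint.com",
        headers={"authorization": "some_custom_string", "other_header": "some_other_data"},
    )
    print(model.header)  # -> {'authorization': 'some_custom_string', 'other_header': 'some_other_data'}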

wrt custom_model_name, I removed the feature from this PR.
You are right: I can pass a path to tokenizer and set the model to any string name; somehow I missed this workaround.

@RawthiL changed the title from "Custom request headers | Custom model name | trust_remote_code param fix" to "Custom request headers | trust_remote_code param fix" on Jun 19, 2025
@RawthiL (Contributor, Author) commented Jul 7, 2025

Any updates on this?
