fix(llm): ensure base_url has protocol prefix for model info fetch when using LiteLLM #7782
Conversation
…ch when using LiteLLM
…MConfig when using LiteLLM
Thank you! You may want to run …
Thanks! But I found the CI failure is because I forgot that base_url can sometimes be None, so strip can't be called on it.
frontend/src/components/features/payment/setup-payment-modal.tsx
…-in-base-url' into fix/trim-whitespace-in-base-url
LGTM, though it would also be good practice to trim the string from the frontend before it is sent to the server. See
OpenHands/frontend/src/routes/account-settings.tsx
Lines 140 to 142 in 35d49f6
const finalLlmBaseUrl = shouldHandleSpecialSaasCase
  ? undefined
  : llmBaseUrl;
Co-authored-by: sp.wack <[email protected]>
Thanks! At first I thought the problem was caused by initialization from the config, so I neglected the frontend when reproducing the bug. But maybe it's better to handle it here in the LLM layer, since that covers both values sent from the frontend and values set at initialization.
Nice, thanks a lot for resolving this issue!
End-user friendly description of the problem this fixes or functionality that this introduces.
Fixes a bug where users could accidentally provide a base URL without the required protocol prefix (e.g., http:// or https://) when using a model from LiteLLM. This caused runtime failures when fetching LLM model info.
Give a summary of what the PR does, explaining any non-trivial design decisions.
In LLM.init_model_info(), I prepend http:// to base_url if the user has forgotten to include a protocol when using a LiteLLM model. This avoids confusing runtime UnsupportedProtocol or ConnectError exceptions.
Added a unit test to confirm that missing protocols are corrected before the HTTP request is sent.
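The normalization described above can be sketched roughly as follows. This is a minimal illustration, not the exact PR code: the helper name `normalize_base_url` is hypothetical, and the None guard reflects the CI failure discussed earlier in the thread (base_url may be unset).

```python
def normalize_base_url(base_url):
    """Trim whitespace and prepend 'http://' when a base URL lacks a protocol.

    base_url may be None (no custom endpoint configured), so guard
    before calling string methods on it.
    """
    if base_url is None:
        return None
    base_url = base_url.strip()
    if not base_url.startswith(("http://", "https://")):
        base_url = "http://" + base_url
    return base_url
```

With this guard in place, a value like " 1.2.3.4:4000 " becomes "http://1.2.3.4:4000", URLs that already carry a protocol pass through unchanged, and None is returned as-is instead of raising an AttributeError.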
Fix i18n issue in SetupPaymentModal
Link of any specific issues this addresses.
Resolves #7760