feature(community): Enable using chat and llms without providing project/space/development id (lightweight engine IBM) #7781
Conversation
* Rename auth method in docs
* Add deployment chat to chat class
* Update watsonx SDK
* Rework interfaces in llms as well
* Bump watsonx-ai sdk version
* Remove unused code
* Add fake auth
if (!("projectId" in fields || "spaceId" in fields || "idOrName" in fields))
  throw new Error(
    "No id specified! At least id of 1 type has to be specified"
  );
Can we keep this check and only bypass it when the lightweight engine is in use?
That would force us to use a flag, e.g. `lightweight: true`. Unless the user is actually using a lightweight engine, the API is going to throw an error ("No spaceId nor projectId is present"), so this case is handled there.
Also, we want to match our Python LangChain SDK implementation: whenever no id is passed, we assume the user has chosen the lightweight engine. If an id of any type is mistakenly omitted, the API will throw an error letting the user know.
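The "no id provided -> lightweight engine" convention described above could be sketched roughly as follows. This is an illustrative simplification, not the real SDK code: the `WatsonxFields` interface and `resolveScope` helper are hypothetical stand-ins for how the fields are inspected.

```typescript
// Hypothetical, simplified sketch of the "no id => lightweight engine"
// convention. Field names mirror the PR discussion, but this interface
// and function are illustrative, not the actual watsonx SDK types.
interface WatsonxFields {
  projectId?: string;
  spaceId?: string;
  idOrName?: string;
  serviceUrl: string;
}

function resolveScope(fields: WatsonxFields): string {
  if (fields.projectId) return `project:${fields.projectId}`;
  if (fields.spaceId) return `space:${fields.spaceId}`;
  if (fields.idOrName) return `deployment:${fields.idOrName}`;
  // No id of any type: assume the lightweight engine. A mistake here
  // surfaces later as an API error rather than a client-side throw.
  return "lightweight";
}

console.log(resolveScope({ serviceUrl: "https://example.com" })); // "lightweight"
```

The design trade-off discussed in this thread is exactly the last branch: the client no longer throws early, and error reporting for a genuinely missing id is delegated to the API.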
@FilipZmijewski just to clarify, are you saying that the API does check this and therefore the check in our code is unnecessary? The benefit of having the check in our code is that it causes us to throw a nice/readable error. Is there no other way that we can infer that the user is using the lightweight engine?
By the way, feel free to reply in your native language if you're more comfortable that way. LLMs are very good at translating these days 😄
@benjamincburns We would like to match the solution we have in the SDK (both Python and Node.js) and in the langchain IBM Python package, meaning "no id provided -> lightweight engine in use". If the lightweight engine is unavailable, the API throws an error that is quite clear to understand: "Missing either space_id or project_id or wml_instance_crn".
One thing that could be added is disabling the error check depending on the `serviceUrl` provided, but this is not ideal, since not all IBM watsonx.ai software installations allow the user to use the lightweight engine.
Haha, thanks. I enjoy speaking/writing in English so that works!
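The `serviceUrl`-based gating mentioned above could look something like the sketch below. The hostname check is an assumption made up for illustration (treating only IBM Cloud SaaS endpoints as lightweight-capable); as the comment notes, this heuristic is imperfect because some software installations also support the lightweight engine.

```typescript
// Illustrative only: decide whether to enforce the client-side id check
// based on the service URL. The "*.ml.cloud.ibm.com" rule is a
// hypothetical heuristic, not documented watsonx.ai behavior.
function shouldEnforceIdCheck(serviceUrl: string): boolean {
  const host = new URL(serviceUrl).hostname;
  // Assumption: only IBM Cloud SaaS hosts support the lightweight engine,
  // so any other host must provide a projectId/spaceId/idOrName.
  return !host.endsWith("ml.cloud.ibm.com");
}

console.log(shouldEnforceIdCheck("https://us-south.ml.cloud.ibm.com")); // false
console.log(shouldEnforceIdCheck("https://onprem.example.com")); // true
```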
if (!("projectId" in fields || "spaceId" in fields || "idOrName" in fields))
  throw new Error(
    "No id specified! At least id of 1 type has to be specified"
  );
Same question here
// @ts-expect-error Intentionally passing not enough parameters
const testPropsServiceUrl: WatsonxInputLLM = {
  serviceUrl: process.env.WATSONX_AI_SERVICE_URL as string,
};
expect(
  () =>
    new WatsonxLLM({
      ...testPropsServiceUrl,
      ...fakeAuthProp,
    })
).toThrowError();
Assuming it's possible to check for the lightweight engine, you could keep this logic here but pass whatever is necessary to detect that use case.
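One way the suggestion above could look is sketched below. Everything here is hypothetical: `FakeWatsonxLLM` and its `allowLightweight` flag are illustrative stand-ins, not the real `WatsonxLLM` constructor API, showing how the id check could be kept while still permitting a lightweight-engine path.

```typescript
// Purely illustrative: a hypothetical `allowLightweight` constructor flag
// that lets the client-side id check survive while opting into the
// lightweight engine explicitly. Not the real WatsonxLLM API.
class FakeWatsonxLLM {
  constructor(fields: {
    serviceUrl: string;
    projectId?: string;
    allowLightweight?: boolean;
  }) {
    const hasId = Boolean(fields.projectId);
    if (!hasId && !fields.allowLightweight) {
      throw new Error(
        "No id specified! At least id of 1 type has to be specified"
      );
    }
  }
}

// Without an id or the flag, construction still throws:
let threw = false;
try {
  new FakeWatsonxLLM({ serviceUrl: "https://example.com" });
} catch {
  threw = true;
}
console.log(threw); // true

// With the flag, construction succeeds (lightweight engine assumed):
new FakeWatsonxLLM({ serviceUrl: "https://example.com", allowLightweight: true });
```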
Thanks for the clarifications!
@benjamincburns The previous tests failed for some reason; I merged with main and hopefully this is fine now.
Thank you!
Providing small changes to allow users to use inference methods in both the LLM and chat classes.