Is your feature request related to a problem? Please describe.
I am setting up an Agent2Agent (A2A) demo environment. I have an existing vLLM deployment with a model accessible within this environment.
The current A2A samples in this repository appear to be designed primarily for use with Gemini models. While I understand other providers may not be the central focus of the samples, it would help users like me who want to quickly demo these services against their own enterprise models, or other large language models (LLMs) besides Gemini.
Describe the solution you'd like
It should be relatively easy to add a small amount of logic to the A2A samples to allow more flexible model endpoint configuration. Specifically, it would be beneficial to support OPENAI_URL and OPENAI_TOKEN environment variables: if these variables are present and a Gemini API key is not found, the samples would default to the specified OpenAI-compatible endpoint. This would significantly ease deploying and testing A2A scenarios with custom or self-hosted LLMs.
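To illustrate, the fallback could look something like the sketch below. This is not existing sample code: the helper name resolve_model_endpoint and the GOOGLE_API_KEY variable used for the Gemini check are assumptions for illustration; only OPENAI_URL and OPENAI_TOKEN are the names proposed in this issue.

```python
import os


def resolve_model_endpoint() -> dict:
    """Pick a model backend from environment variables.

    Prefers Gemini when an API key is configured; otherwise falls back
    to an OpenAI-compatible endpoint (e.g. a vLLM server) when both
    OPENAI_URL and OPENAI_TOKEN are set.

    Note: GOOGLE_API_KEY and this helper are illustrative assumptions,
    not part of the current samples.
    """
    gemini_key = os.environ.get("GOOGLE_API_KEY")
    openai_url = os.environ.get("OPENAI_URL")
    openai_token = os.environ.get("OPENAI_TOKEN")

    if gemini_key:
        # Existing behavior: use Gemini when a key is available.
        return {"provider": "gemini", "api_key": gemini_key}
    if openai_url and openai_token:
        # Proposed fallback: any OpenAI-compatible server, such as vLLM.
        return {
            "provider": "openai-compatible",
            "base_url": openai_url,
            "api_key": openai_token,
        }
    raise RuntimeError(
        "No model configured: set GOOGLE_API_KEY, "
        "or both OPENAI_URL and OPENAI_TOKEN."
    )
```

A self-hosted vLLM server would then work with something like `OPENAI_URL=http://vllm.internal:8000/v1` and `OPENAI_TOKEN=<token>`, with no changes to the sample code itself.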
Describe alternatives you've considered
No response
Additional context
No response
Code of Conduct
I agree to follow this project's Code of Conduct