#21 - Support Gemini models #22
Conversation
- Refactored the provider selection process to support multiple LLM providers.
- Added utility functions for managing workspace configuration related to LLM providers, API keys, base URLs, and models.
- Implemented comprehensive error handling and user feedback during provider setup.
- Introduced unit tests for the new configuration utilities to ensure reliability and correctness.

… Add functionality
- Integrated dynamic LLM provider selection into the Smart Add feature, allowing users to configure and select their preferred LLM provider at runtime.
- Added error handling for scenarios where no provider is configured, providing user feedback and options to set up a provider.
- Refactored the provider creation logic to support multiple LLM providers, including OpenAI and Gemini, with appropriate configuration management.
- Updated related tests to ensure the new provider selection and configuration functionalities are covered.

- Updated tests for the addFilesSmart function to ensure proper handling of LLM provider configuration, including scenarios where no provider is set up.
- Added checks for user feedback when provider setup is cancelled or fails, ensuring robust error handling.
- Improved test coverage for cases with multiple folder URIs and workspace root usage.
- Refactored existing tests to streamline setup and improve clarity.
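The configuration utilities described in these commits might look roughly like the sketch below. To keep the example self-contained, an in-memory `Map` stands in for the VS Code workspace configuration API, and all names (`getProviderConfig`, `setProviderConfig`, the config keys) are assumptions based on the commit summaries, not the PR's actual code:

```typescript
// Self-contained sketch of per-provider configuration management.
// A Map stands in for vscode.workspace.getConfiguration(); helper
// names and config keys are assumptions, not the PR's actual API.
type LLMConfig = {
  provider?: string
  apiKey?: string
  baseUrl?: string
  model?: string
}

const store = new Map<string, LLMConfig>()

// Reads the stored configuration for a scope, or undefined if none exists.
function getProviderConfig(scope = 'workspace'): LLMConfig | undefined {
  return store.get(scope)
}

// Shallow-merges new values over any existing configuration for the scope.
function setProviderConfig(config: LLMConfig, scope = 'workspace'): void {
  store.set(scope, { ...store.get(scope), ...config })
}
```

The merge-on-write behavior lets a caller update the model without clobbering a previously stored API key, which matches the commit's goal of managing providers, keys, base URLs, and models independently.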
Pull Request Overview
This PR adds support for Gemini models by implementing dynamic LLM provider configuration and enhancing the Smart Add functionality. Key changes include:
- Replacement of legacy OpenAI provider tests with unified provider implementation using OpenAICompatibleProvider.
- Introduction of GeminiProvider that extends OpenAICompatibleProvider with forced endpoint configurations.
- Updates to configuration management and provider selection commands to accommodate multi-provider support.
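The inheritance relationship described above can be sketched as follows. This is an illustrative assumption based on the review summary, not the PR's actual implementation; the class shapes, field names, and the default model are hypothetical, though the base URL shown is Google's documented OpenAI-compatibility endpoint:

```typescript
// Hypothetical sketch: an OpenAI-compatible base provider whose
// endpoint configuration subclasses can force. Names are assumptions
// drawn from the PR summary, not the actual source.
interface ProviderConfig {
  baseUrl: string
  model: string
  apiKey: string
}

class OpenAICompatibleProvider {
  constructor(protected config: ProviderConfig) {}

  // Builds the chat-completions URL from the configured base URL.
  chatEndpoint(): string {
    return `${this.config.baseUrl}/chat/completions`
  }
}

// GeminiProvider reuses the OpenAI-compatible wire format but forces
// Google's OpenAI-compatibility base URL and a default model, so only
// the API key remains user-configurable.
class GeminiProvider extends OpenAICompatibleProvider {
  constructor(apiKey: string, model = 'gemini-1.5-flash') {
    super({
      baseUrl: 'https://generativelanguage.googleapis.com/v1beta/openai',
      model,
      apiKey,
    })
  }
}
```

Forcing the endpoint in the subclass constructor keeps the base class generic while guaranteeing Gemini requests cannot be misdirected by a stale user-configured base URL.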
Reviewed Changes
Copilot reviewed 26 out of 29 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| src/core/llm/providers/openai/tests/types.test.ts | Removed tests for OpenAI provider type definitions |
| src/core/llm/providers/openai/tests/index.test.ts | Removed integration tests for the legacy OpenAI provider |
| src/core/llm/providers/openai-compatible/index.ts | New provider implementation with dynamic configuration |
| src/core/llm/providers/gemini/index.ts | New GeminiProvider implementation inheriting from OpenAICompatibleProvider |
| src/core/llm/index.ts | Updated provider factory to support multiple provider codes |
| src/core/llm/constants.ts | Refactored and expanded constants to support new provider configurations |
| src/commands/providerCommands.ts | Updated provider selection and configuration commands |
| src/commands/addToCody.ts | Updated Smart Add command to retrieve and validate provider configuration |
| .vscode-test.mjs | Minor formatting adjustments |
Files not reviewed (3)
- .eslintrc.json: Language not supported
- package.json: Language not supported
- pnpm-lock.yaml: Language not supported
Comments suppressed due to low confidence (3)
src/core/llm/providers/openai/tests/index.test.ts:1
- The removal of the OpenAIProvider integration tests may reduce test coverage for core provider functionality. Consider adding equivalent tests to verify the expected behavior of the provider.
import * as assert from 'assert'
src/core/llm/providers/gemini/index.ts:1
- There are no dedicated tests for GeminiProvider. Consider adding unit tests to validate that the forced configurations (baseUrl, model, endpoints) behave as intended.
import * as vscode from 'vscode'
src/commands/addToCody.ts:130
- Ensure that the value returned by getProviderConfig is a valid SUPPORTED_PROVIDER_CODES string since createProvider now expects a specific provider code. Adding type validation or a conversion step could prevent potential runtime errors.
const llm = createProvider(currentProvider)
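One hedged way to address this comment, assuming `constants.ts` exports the provider codes as a readonly tuple (the names below are assumptions, not the PR's actual exports), is to narrow the raw config string with a type guard before calling `createProvider`:

```typescript
// Illustrative type guard; assumes constants.ts exports a readonly
// tuple of provider codes named SUPPORTED_PROVIDER_CODES.
const SUPPORTED_PROVIDER_CODES = ['openai', 'gemini'] as const
type ProviderCode = (typeof SUPPORTED_PROVIDER_CODES)[number]

function isProviderCode(value: string): value is ProviderCode {
  return (SUPPORTED_PROVIDER_CODES as readonly string[]).includes(value)
}

// Narrows the raw configuration string before handing it to
// createProvider, so an unexpected value fails fast with a clear
// message instead of surfacing later as a request-time error.
function toProviderCode(value: string): ProviderCode {
  if (!isProviderCode(value)) {
    throw new Error(`Unsupported LLM provider code: ${value}`)
  }
  return value
}
```

With this in place, the call site could become `createProvider(toProviderCode(currentProvider))`, turning a possible runtime misconfiguration into an immediate, descriptive error.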
- Removed deprecated dependencies related to langchain, including `@langchain/google-genai`, `@langchain/core`, and `@langchain/openai` from `package.json` and `pnpm-lock.yaml`.
- Cleaned up the lock files to reflect the removal of these packages, ensuring a more streamlined dependency tree.
- Added unit tests for provider commands to enhance coverage and ensure proper functionality of the LLM provider selection process.
Description
Type of Change
Proof of work