#21 - Support Gemini models #22


Merged (7 commits, Mar 30, 2025)
Conversation

mnismt (Owner) commented Mar 30, 2025

Description

  • This pull request implements dynamic LLM provider configuration and enhances the Smart Add functionality.

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update
  • Test improvements

Proof of work

[Screenshot: CleanShot 2025-03-30 at 16 46 11]

mnismt added 6 commits March 30, 2025 14:16
- Refactored the provider selection process to support multiple LLM providers.
- Added utility functions for managing workspace configuration related to LLM providers, API keys, base URLs, and models.
- Implemented comprehensive error handling and user feedback during provider setup.
- Introduced unit tests for the new configuration utilities to ensure reliability and correctness.
… Add functionality

- Integrated dynamic LLM provider selection into the Smart Add feature, allowing users to configure and select their preferred LLM provider at runtime.
- Added error handling for scenarios where no provider is configured, providing user feedback and options to set up a provider.
- Refactored the provider creation logic to support multiple LLM providers, including OpenAI and Gemini, with appropriate configuration management.
- Updated related tests to ensure the new provider selection and configuration functionalities are covered.
- Updated tests for the addFilesSmart function to ensure proper handling of LLM provider configuration, including scenarios where no provider is set up.
- Added checks for user feedback when provider setup is cancelled or fails, ensuring robust error handling.
- Improved test coverage for cases with multiple folder URIs and workspace root usage.
- Refactored existing tests to streamline setup and improve clarity.
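The "no provider configured" guard described in the commit messages above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: `getProviderConfig` and `promptSetup` stand in for the extension's real workspace-configuration helpers and VS Code UI prompts, which are injected here so the logic is testable without the `vscode` API.

```typescript
// Hypothetical sketch of the Smart Add provider guard: if no LLM
// provider is configured, give the user feedback (via promptSetup)
// and bail out instead of failing later at request time.
interface ProviderConfig {
  provider: string
  apiKey: string
}

function addFilesSmart(
  getProviderConfig: () => ProviderConfig | undefined,
  promptSetup: () => void,
): string {
  const config = getProviderConfig()
  if (!config) {
    // No provider configured yet: surface feedback and offer setup.
    promptSetup()
    return 'no-provider'
  }
  // A configured provider is available; proceed with Smart Add.
  return `using ${config.provider}`
}
```

Injecting the configuration lookup and the setup prompt as parameters mirrors how the PR's tests can simulate both the cancelled-setup and happy-path scenarios.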
@mnismt mnismt added the feature New feature or request label Mar 30, 2025
@mnismt mnismt requested a review from Copilot March 30, 2025 09:46
@mnismt mnismt self-assigned this Mar 30, 2025
@mnismt mnismt added this to Cody++ Mar 30, 2025
@Copilot Copilot AI left a comment

Pull Request Overview

This PR adds support for Gemini models by implementing dynamic LLM provider configuration and enhancing the Smart Add functionality. Key changes include:

  • Replacement of legacy OpenAI provider tests with unified provider implementation using OpenAICompatibleProvider.
  • Introduction of GeminiProvider that extends OpenAICompatibleProvider with forced endpoint configurations.
  • Updates to configuration management and provider selection commands to accommodate multi-provider support.
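The second bullet, a `GeminiProvider` that extends `OpenAICompatibleProvider` with forced endpoint configuration, might look roughly like this. The base URL is Google's published OpenAI-compatibility endpoint; the field names, constructor shape, and default model are assumptions for illustration, not copied from the PR.

```typescript
// Hypothetical sketch: GeminiProvider inherits all request logic from
// OpenAICompatibleProvider and only forces the endpoint configuration.
interface LLMConfig {
  apiKey: string
  baseUrl: string
  model: string
}

class OpenAICompatibleProvider {
  constructor(readonly config: LLMConfig) {}
}

class GeminiProvider extends OpenAICompatibleProvider {
  constructor(apiKey: string, model = 'gemini-2.0-flash') {
    // Force Gemini's OpenAI-compatibility endpoint regardless of any
    // user-supplied base URL, so only the API key and model vary.
    super({
      apiKey,
      baseUrl: 'https://generativelanguage.googleapis.com/v1beta/openai/',
      model,
    })
  }
}
```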

Reviewed Changes

Copilot reviewed 26 out of 29 changed files in this pull request and generated no comments.

Summary per file:
  • src/core/llm/providers/openai/tests/types.test.ts: Removed tests for OpenAI provider type definitions
  • src/core/llm/providers/openai/tests/index.test.ts: Removed integration tests for the legacy OpenAI provider
  • src/core/llm/providers/openai-compatible/index.ts: New provider implementation with dynamic configuration
  • src/core/llm/providers/gemini/index.ts: New GeminiProvider implementation inheriting from OpenAICompatibleProvider
  • src/core/llm/index.ts: Updated provider factory to support multiple provider codes
  • src/core/llm/constants.ts: Refactored and expanded constants to support new provider configurations
  • src/commands/providerCommands.ts: Updated provider selection and configuration commands
  • src/commands/addToCody.ts: Updated Smart Add command to retrieve and validate provider configuration
  • .vscode-test.mjs: Minor formatting adjustments
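The "provider factory to support multiple provider codes" entry for src/core/llm/index.ts suggests a shape like the following. The class hierarchy and provider codes here are assumptions about the PR's design, not its exact implementation.

```typescript
// Hypothetical sketch of a multi-provider factory: a single
// createProvider entry point keyed by a union of provider codes.
type ProviderCode = 'openai' | 'gemini'

abstract class LLMProvider {
  abstract readonly code: ProviderCode
}

class OpenAICompatible extends LLMProvider {
  readonly code = 'openai' as const
}

class Gemini extends LLMProvider {
  readonly code = 'gemini' as const
}

function createProvider(code: ProviderCode): LLMProvider {
  // Exhaustive switch over the union: adding a new ProviderCode
  // without a matching case becomes a compile-time error.
  switch (code) {
    case 'openai':
      return new OpenAICompatible()
    case 'gemini':
      return new Gemini()
  }
}
```

Typing the parameter as the `ProviderCode` union (rather than `string`) is what makes the compiler enforce that every supported provider has a branch.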
Files not reviewed (3)
  • .eslintrc.json: Language not supported
  • package.json: Language not supported
  • pnpm-lock.yaml: Language not supported
Comments suppressed due to low confidence (3)

src/core/llm/providers/openai/tests/index.test.ts:1

  • The removal of the OpenAIProvider integration tests may reduce test coverage for core provider functionality. Consider adding equivalent tests to verify the expected behavior of the provider.
import * as assert from 'assert'

src/core/llm/providers/gemini/index.ts:1

  • There are no dedicated tests for GeminiProvider. Consider adding unit tests to validate that the forced configurations (baseUrl, model, endpoints) behave as intended.
import * as vscode from 'vscode'

src/commands/addToCody.ts:130

  • Ensure that the value returned by getProviderConfig is a valid SUPPORTED_PROVIDER_CODES string since createProvider now expects a specific provider code. Adding type validation or a conversion step could prevent potential runtime errors.
const llm = createProvider(currentProvider)
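The validation this comment asks for can be sketched as a type guard over the supported codes. `SUPPORTED_PROVIDER_CODES` mirrors the constant named in the PR, but its values and the helper functions here are illustrative assumptions.

```typescript
// Hypothetical sketch: narrow a plain string from configuration to a
// supported provider code before handing it to createProvider.
const SUPPORTED_PROVIDER_CODES = ['openai', 'gemini'] as const
type SupportedProviderCode = (typeof SUPPORTED_PROVIDER_CODES)[number]

function isSupportedProviderCode(value: string): value is SupportedProviderCode {
  return (SUPPORTED_PROVIDER_CODES as readonly string[]).includes(value)
}

function toProviderCode(value: string): SupportedProviderCode {
  if (!isSupportedProviderCode(value)) {
    // Fail fast with a clear message instead of a runtime error deep
    // inside the provider implementation.
    throw new Error(`Unsupported LLM provider: ${value}`)
  }
  return value
}
```

With this in place, the call site becomes `createProvider(toProviderCode(currentProvider))`, converting the potential runtime error the review flags into an early, descriptive failure.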

- Removed deprecated dependencies related to langchain, including `@langchain/google-genai`, `@langchain/core`, and `@langchain/openai` from `package.json` and `pnpm-lock.yaml`.
- Cleaned up the lock files to reflect the removal of these packages, ensuring a more streamlined dependency tree.
- Added unit tests for provider commands to enhance coverage and ensure proper functionality of the LLM provider selection process.
@mnismt mnismt merged commit 120ae73 into main Mar 30, 2025
2 checks passed
@github-project-automation github-project-automation bot moved this to Done in Cody++ Mar 30, 2025