Provided model name is ignored when using TGI #4968

Open
3 tasks done
HenFo opened this issue Apr 3, 2025 · 0 comments
Labels
area:autocomplete (relates to the auto complete feature), ide:vscode (relates specifically to the VS Code extension), kind:bug (indicates an unexpected problem or unintended behavior), needs-triage, priority:medium

Comments


HenFo commented Apr 3, 2025

Before submitting your bug report

Relevant environment info

- OS: Windows_NT x64 10.0.26100
- Continue version: 1.1.19
- IDE version: vscode 1.98.2
- Model: qwen2.5 coder 3b
- config:
  
name: Local Assistant
version: 1.0.0
schema: v1
models: 
  - name: Qwen2.5-coder 3b
    provider: huggingface-tgi
    model: qwen2.5-coder:3b
    apiBase: <url>
    roles:
      - autocomplete
context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: terminal
  - provider: problems
  - provider: folder
  - provider: codebase

Description

I host the model using Hugging Face TGI on a server under a custom name that does not include "coder" (as in qwen2.5-coder).
I noticed that the wrong FIM template was used, even though I specified the model name in my config. While debugging the extension, I found that the model name was overwritten with the name under which I host the model.
Furthermore, I noticed that this does not happen in the sandbox environment, but only when I open a new folder in the extension development host VS Code window. My user config is the same as the debug config.

I suppose this could be called a feature, but from my perspective it is unexpected behavior.
Maybe a flag like "inferModelName" could be added, but by default the provided model name should be used.

To reproduce

  1. Host a local model using huggingface-tgi under a custom name, i.e. a custom modelId.
  2. Set up the config as provided, using the URL of your TGI instance.
  3. Launch Continue in debug mode and log the lowerCaseModel in getTemplateForModel in AutocompleteTemplate.ts.
  4. Trigger a suggestion in the sandbox example; the model name from the config will be logged.
  5. Change the folder to a different project and trigger a new suggestion; the modelId as provided by TGI will be logged instead.

Log output

@sestinj sestinj self-assigned this Apr 3, 2025
@dosubot dosubot bot added area:autocomplete Relates to the auto complete feature ide:vscode Relates specifically to VS Code extension kind:bug Indicates an unexpected problem or unintended behavior priority:medium Indicates medium priority labels Apr 3, 2025