Regarding Custom Model Integration in Copilot #154717
Great questions...
I have only used custom models (Sonnet 3.7) with the Ask feature. In your experience, does the Agent feature behave differently with custom models than with the standard Copilot API (free or subscription)? In theory, the Copilot extension sends the same prompts, just to your custom model API, so it shouldn't behave any differently.
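For illustration, here's a minimal sketch of where the two modes can diverge, assuming an OpenAI-compatible endpoint. The field names follow the OpenAI chat completions API, and the `edit_file` tool is a hypothetical stand-in for whatever tools Copilot actually sends:

```typescript
// Sketch only: request shapes an OpenAI-compatible custom model endpoint
// might receive. The exact payloads Copilot sends are not documented here.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Ask mode: a plain chat completion — prompt in, text out.
const askRequest = {
  model: "sonnet-3.7", // assumed custom model name
  messages: [
    { role: "system", content: "You are a coding assistant." },
    { role: "user", content: "Explain this function." },
  ] as ChatMessage[],
};

// Agent mode: same endpoint, but the request also carries tool definitions
// the model may invoke (editing files, running commands, and so on).
const agentRequest = {
  ...askRequest,
  tools: [
    {
      type: "function",
      function: {
        name: "edit_file", // hypothetical tool name
        description: "Apply an edit to a workspace file",
        parameters: {
          type: "object",
          properties: {
            path: { type: "string" },
            contents: { type: "string" },
          },
          required: ["path", "contents"],
        },
      },
    },
  ],
};
```

So if Agent mode does behave differently with a custom model, the likely cause is tool-calling support on the endpoint (it has to accept a `tools` field and return tool calls), rather than the prompts themselves.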
Thanks for your feedback, and for using Copilot in VS Code! For feature requests like this, it'd be a huge help if you could file them at https://github.com/microsoft/vscode-copilot-release/issues. Please also feel free to tag me in any issues you open.
Original post (Topic Area: Question)
Hi Copilot team,
I'm exploring the custom model feature in Copilot for VS Code Insiders, which is fantastic! I have a few questions about its current capabilities and potential future enhancements:
Caching: How does Copilot handle caching when using custom models? Is there a way for users to manage their own caching, either through the API or through Copilot settings, to optimize performance and reduce API costs?
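To make the caching question concrete, here's a minimal sketch of what user-managed caching could look like today: a local proxy in front of the custom model endpoint, rather than any Copilot setting. The upstream URL and port are placeholders, and identical request bodies are assumed safe to serve from cache:

```typescript
// Minimal caching proxy sketch (Node 18+, where fetch is global).
// Identical POST bodies are served from an in-memory cache instead of
// hitting the paid upstream API again. Streaming (SSE) responses are
// deliberately not handled here.
import { createHash } from "node:crypto";
import { createServer } from "node:http";

const UPSTREAM = "https://example.com/v1/chat/completions"; // placeholder custom model API

const cache = new Map<string, string>();

createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", async () => {
    const key = createHash("sha256").update(body).digest("hex");
    let cached = cache.get(key);
    if (cached === undefined) {
      const upstream = await fetch(UPSTREAM, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body,
      });
      cached = await upstream.text();
      cache.set(key, cached);
    }
    res.setHeader("content-type", "application/json");
    res.end(cached);
  });
}).listen(8787); // point the custom model URL at http://localhost:8787
```

This only covers exact-match request caching on the user side; whether Copilot applies any caching of its own for custom models is the open question above.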
Ollama Integration: The screenshot in my post shows various model paths. I'm curious whether this feature supports LLMs running locally via Ollama. Could I point Copilot to a model served by Ollama on my local network? If so, are there any specific configuration steps or considerations?
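For context, Ollama serves a documented REST API on port 11434 and also exposes an OpenAI-compatible API under /v1, so the question is whether Copilot's custom model configuration will accept such an endpoint. A quick reachability check (the LAN address below is a placeholder):

```typescript
// Check that a local/LAN Ollama server is reachable and list its pulled
// models. GET /api/tags is Ollama's documented model-listing endpoint.
const OLLAMA_HOST = "http://192.168.1.50:11434"; // placeholder LAN address

async function listOllamaModels(): Promise<void> {
  const res = await fetch(`${OLLAMA_HOST}/api/tags`);
  if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);
  const data = (await res.json()) as { models: { name: string }[] };
  for (const m of data.models) console.log(m.name);
}

listOllamaModels().catch(console.error);
```

One consideration: Ollama binds to 127.0.0.1 by default, so serving other machines on the network requires starting it with the host bound to all interfaces (e.g. `OLLAMA_HOST=0.0.0.0 ollama serve`).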
Also, please let us know whom we can tag to discuss this.
Looking forward to your response. Thanks!
Context
microsoft/vscode-copilot-release#2627 (comment)