
This page contains sample prompts that incorporate VectorCode with different LLMs. You may use them directly or modify them to fit your needs. Even if your model is not listed here, it may still be worth trying some of these prompts, because many LLMs can make sense of prompt formats they were not specifically trained on.

The prompts on this page are tailored for code completion, but the VectorCode context section should be reusable in a chat-based use case. I tested with completion workloads because code completion is sensitive to input: if the model doesn't understand the prompt properly, it may fail to generate code that is usable as a completion result (emitting Markdown code fences, for example). This gives me a (somewhat) measurable way to decide whether a model properly understood the extra context in the prompt.

The main difference in prompt construction is that some models use a special token to denote the start of a project context (qwen2.5-coder, Gemini, etc.), and there is no standard for such a token. For models that don't implement such tokens (Codestral, etc.), we may need to write extra prompt text to tell the model how to use the extra context.

If you successfully get VectorCode working with a model that is not listed here, feel free to contribute!

Note

In a plugin that supports the fill-in-middle (FIM) API, there are often separate options/constructors for the prefix and the suffix. We need to disable the suffix option because the suffix template may otherwise interfere with the file context built by the prompt. For example, in minuet-ai's openai_fim_compatible backend, you can do this by setting provider_options.openai_fim_compatible.template.suffix to false.
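
All of the templates below assume a vectorcode_cacher handle exposed by the VectorCode Neovim plugin and are meant to be plugged into the template.prompt option. The sketch below shows roughly where they go in a minuet-ai setup. It is a minimal illustration, not a complete configuration: the require path for the cacher and the surrounding option names are assumptions based on the two plugins' documentation, so check your own install before copying it.

-- Minimal minuet-ai setup sketch (option names and the cacher require path are
-- assumptions; consult the VectorCode and minuet-ai docs for your versions).
local vectorcode_cacher = require("vectorcode.config").get_cacher_backend()

require("minuet").setup({
  provider = "openai_fim_compatible",
  provider_options = {
    openai_fim_compatible = {
      template = {
        prompt = function(pref, suff)
          -- Placeholder: replace with one of the prompt templates from this page.
          return pref
        end,
        -- Disable the suffix template so it doesn't interfere with the file context.
        suffix = false,
      },
    },
  },
})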

-- Qwen2.5-Coder: retrieved files are prepended as repository-level context with
-- the <|file_sep|> token, followed by the standard Qwen FIM tokens.
prompt = function(pref, suff)
  local prompt_message = ""
  local cache_result = vectorcode_cacher.query_from_cache(0)
  for _, file in ipairs(cache_result) do
    -- Prepend each retrieved file as repo-level context.
    prompt_message = prompt_message .. "<|file_sep|>" .. file.path .. "\n" .. file.document
  end
  return prompt_message
    .. "<|fim_prefix|>"
    .. pref
    .. "<|fim_suffix|>"
    .. suff
    .. "<|fim_middle|>"
end

Reference: QwenLM/Qwen2.5-Coder
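
To make the structure concrete, here is roughly what the assembled prompt looks like when the cache returns a single file. The path and file content below are made-up placeholders, and the exact line breaks depend on the retrieved file's content and the prefix/suffix around the cursor:

<|file_sep|>src/utils.lua
local M = {}
function M.add(a, b) return a + b end
return M
<|fim_prefix|>local sum = M.<|fim_suffix|>
print(sum)<|fim_middle|>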

-- DeepSeek-Coder-V2: a natural-language instruction is prepended in addition to
-- the retrieved files and the fim_begin / fim_hole / fim_end markers.
prompt = function(pref, suff)
  local prompt_message = ([[Perform fill-in-middle from the following snippet of %s code. Respond with only the filled-in code.]]):format(vim.bo.filetype)
  local cache_result = vectorcode_cacher.query_from_cache(0)
  for _, file in ipairs(cache_result) do
    -- Prepend each retrieved file as extra context.
    prompt_message = prompt_message .. "<|file_sep|>" .. file.path .. "\n" .. file.document
  end
  return prompt_message
    .. "<|fim_begin|>"
    .. pref
    .. "<|fim_hole|>"
    .. suff
    .. "<|fim_end|>"
end

Reference: DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence

-- Gemma: retrieved files are separated with <|file_separator|> before the
-- FIM tokens; a natural-language instruction is prepended as well.
prompt = function(pref, suff)
  local prompt_message = ([[Perform fill-in-middle from the following snippet of %s code. Respond with only the filled-in code.]]):format(vim.bo.filetype)
  local cache_result = vectorcode_cacher.query_from_cache(0)
  for _, file in ipairs(cache_result) do
    -- Prepend each retrieved file as extra context.
    prompt_message = prompt_message .. "<|file_separator|>" .. file.path .. "\n" .. file.document
  end
  return prompt_message
    .. "<|fim_prefix|>"
    .. pref
    .. "<|fim_suffix|>"
    .. suff
    .. "<|fim_middle|>"
end

Reference: Gemma formatting and system instructions

-- StarCoder 2: repository-level context uses <file_sep> (no pipes), followed by
-- the StarCoder FIM tokens.
prompt = function(pref, suff)
  local prompt_message = ""
  local cache_result = vectorcode_cacher.query_from_cache(0)
  for _, file in ipairs(cache_result) do
    -- Prepend each retrieved file as repo-level context.
    prompt_message = prompt_message .. "<file_sep>" .. file.path .. "\n" .. file.document
  end
  return prompt_message
    .. "<fim_prefix>"
    .. pref
    .. "<fim_suffix>"
    .. suff
    .. "<fim_middle>"
end

Reference: StarCoder 2 and The Stack v2: The Next Generation

-- Codellama: <PRE>, <SUF> and <MID> are the infilling markers; <CONTEXT> here
-- simply labels the retrieved files for the model.
prompt = function(pref, suff)
  local prompt_message = ""
  local cache_result = vectorcode_cacher.query_from_cache(0)
  for _, file in ipairs(cache_result) do
    -- Prepend each retrieved file, labelled with <CONTEXT>.
    prompt_message = prompt_message .. "<CONTEXT>" .. file.path .. "\n" .. file.document
  end
  return prompt_message
    .. " <PRE> "
    .. pref
    .. " <SUF> "
    .. suff
    .. " <MID>"
end

Reference: Ollama Codellama model page

-- Codestral: no dedicated repo-context token, so a natural-language instruction
-- explains the [CONTEXT] / [PREFIX] / [SUFFIX] / [MIDDLE] markers to the model.
prompt = function(pref, suff)
  local prompt_message =
    "Perform fill in the middle completion based on the following content. Content that follows `[CONTEXT]` is a file in the repository that may be useful. Make use of them where necessary."
  local cache_result = vectorcode_cacher.query_from_cache(0)
  for _, file in ipairs(cache_result) do
    -- Prepend each retrieved file, labelled with [CONTEXT].
    prompt_message = prompt_message .. "[CONTEXT]" .. file.path .. "\n" .. file.document
  end
  return prompt_message
    .. " [PREFIX] "
    .. pref
    .. " [SUFFIX] "
    .. suff
    .. " [MIDDLE]"
end

Reference: Mistral AI documentation.
