
vim-ai provider: OpenAI Responses API

vim-ai provider plugin for OpenAI's Responses API.

Installation

  1. Ensure madox2/vim-ai is installed.
  2. Add kevincojean/vim-ai-provider-openai-responses to your Vim plugin manager:
     Plug 'madox2/vim-ai'
     Plug 'kevincojean/vim-ai-provider-openai-responses'
  3. After installation, the plugin automatically installs the openai Python package.

    You may disable this behaviour by setting g:vim_ai_openai_responses_enable_autoinstall = 0 in your vimrc, and instead call the VimAiOpenAiResponsesInstallDependencies command to install the dependency manually.
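
    A minimal vimrc sketch of that manual flow, using the variable and command names above:

    " Disable the slow auto-install at startup.
    let g:vim_ai_openai_responses_enable_autoinstall = 0
    " Then install the openai Python dependency on demand with:
    " :VimAiOpenAiResponsesInstallDependencies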

API key

Set your OpenAI API key either by exporting it:

export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"

or by setting token_file_path as an option in your configuration:

options.token_file_path = ~/.config/openai.token
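
Assuming the token file contains nothing but the key itself, it can be created from the shell like so (the path matches the example above; adjust to taste):

```shell
# Write the API key to the token file and restrict its permissions,
# since it holds a secret.
mkdir -p ~/.config
printf '%s\n' "YOUR_OPENAI_API_KEY" > ~/.config/openai.token
chmod 600 ~/.config/openai.token
```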

Configuration

You can configure a custom role, or set defaults for the OpenAI Responses API through the plugin's configuration variable g:vim_ai_openai_responses_config.

The options you may pass are numerous and are documented by OpenAI:
https://platform.openai.com/docs/api-reference/responses

The following flags may be set in your vimrc to influence specific behaviours.

  • g:vim_ai_openai_responses_enable_autoinstall: set to 1 to enable auto-install at startup (this is slow).
  • g:vim_ai_openai_responses_config: default configuration for the OpenAI Responses API, which you may override.
  • g:vim_ai_openai_responses_logging: set to 1 to enable logging from this plugin; logging slows down streaming responses.
  • g:vim_ai_openai_responses_logging_file: absolute path of the plugin's log file.
  • g:vim_ai_openai_responses_ai_logging: set to 1 to enable logging of OpenAI responses; logging slows down streaming responses.
  • g:vim_ai_openai_responses_ai_logging_file: absolute path of the OpenAI response log file.

Example configuration

let g:vim_ai_openai_responses_enable_autoinstall = 0
let g:vim_ai_openai_responses_logging = 1
let g:vim_ai_openai_responses_ai_logging = 1
let g:vim_ai_openai_responses_logging_file = "/tmp/vim-ai-openai-responses.log"
let g:vim_ai_openai_responses_ai_logging_file = "/tmp/vim-ai-openai-responses-ai.log"

VimAI configuration example

let g:vim_ai_openai_responses_config = {
\  "model": "gpt-4o-mini",
\  "endpoint_url": "https://api.openai.com/v1/responses",
\  "stream": "True",
\  "token_file_path": "~/.config/openai.token",
\ }

Role configuration example

This example makes an assistant for generating text UML diagrams.
It is connected to a vector store containing a 600-page PDF of the PlantUML syntax documentation. The options.tools.type = file_search option forces the assistant to use the vector store.

[umldiagram.assistant]
provider = openai_responses
options.model = gpt-4.1
options.stream = true
options.tools.type = file_search
options.tools.vector_store_ids = [your_vector_store_uid]
options.store = false
prompt =
    You help me write UML diagrams in Vim.
    Your answers should be enclosed in the following block of markdown syntax:
    ```plantuml
    @startuml
    [your answers go here]
    @enduml
    ```
    Before giving me a diagram, you follow these steps:
    1. You proactively identify parts of the diagram that need clarification and ask me those questions sequentially.
    2. You suggest the best kind of PlantUML-compatible diagram for the use case.
    3. You generate a diagram using the markdown ```plantuml``` syntax.
    4. You iterate until I am completely satisfied.

License

MIT License
