paperless-gpt-prompts

These are my prompt drafts for paperless-gpt.

I'm using a self-hosted Ollama instance with a single RTX 3090, currently running qwen2.5:32b-instruct (the 7b and 14b versions work quite well too, just a little less reliably) — that's where I'm coming from. Right now I'm not using any external API; if I do, I will separate those into different prompts, because my Ollama prompts are quite large.

DO NOT USE THE OLLAMA PROMPTS WITH APIS THAT ARE LIMITED IN TOKEN INPUT SIZE - THEY COULD GET PRETTY EXPENSIVE!

These env vars are set:

  • ollama: OLLAMA_CONTEXT_LENGTH=8092
  • paperless-gpt: TOKEN_LIMIT=3000

Setting a manual default max token size is quite important: the context length cannot be inferred by the Go version of LangChain and its tokenizer dependency when used with Ollama, so paperless-gpt cannot calculate token sizes itself. If you don't set these, you will likely get very subpar results or stuck work.
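As a minimal sketch of how these two settings might be wired up in a Docker-based setup — container names, ports, and the additional paperless-gpt settings shown here are illustrative assumptions, not taken from this repo:

```shell
# Hypothetical docker run commands illustrating the two env vars above.

# Ollama: raise the context window so the large prompts fit.
docker run -d --name ollama --gpus all \
  -e OLLAMA_CONTEXT_LENGTH=8092 \
  -p 11434:11434 \
  ollama/ollama

# paperless-gpt: set a manual token limit, since it cannot infer
# the model's context length through the Go LangChain + Ollama stack.
# (Paperless URL, API token, etc. are omitted here for brevity.)
docker run -d --name paperless-gpt \
  -e TOKEN_LIMIT=3000 \
  -e LLM_PROVIDER=ollama \
  -e OLLAMA_HOST=http://ollama:11434 \
  icereed/paperless-gpt
```

The key point is that both values must be set explicitly and consistently: TOKEN_LIMIT should stay comfortably below OLLAMA_CONTEXT_LENGTH so the prompt plus the model's response fit inside the context window.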

About

Template repository for paperless-gpt prompts
