feat: Add litellm and configurable service loading #74
This commit introduces LiteLLM as a new optional service and provides a mechanism to selectively load Docker services.
Key changes:
- Added a `litellm` service to `docker-compose.yml` (sketched below):
  - Uses the `ghcr.io/berriai/litellm:main-stable` image.
  - Mounts `./litellm/config.yaml`.
  - Depends on `ollama` and `redis`.
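A minimal sketch of what the new service entry could look like. Only the image, the config mount, and the `ollama`/`redis` dependencies are stated in this PR; the mount target, `command`, and restart policy are assumptions:

```yaml
# Hypothetical docker-compose.yml entry for the litellm service.
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-stable
    restart: unless-stopped
    volumes:
      - ./litellm/config.yaml:/app/config.yaml   # mount target is an assumption
    command: ["--config", "/app/config.yaml"]    # assumes config is passed via --config
    depends_on:
      - ollama
      - redis
```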
- Created `litellm/config.yaml` with a default configuration to connect to a local Ollama model (sketched below).
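A hedged sketch of such a default config pointing LiteLLM at the local Ollama container. The model name and the `ollama:11434` base URL are assumptions; the PR only states that it connects to a local Ollama model:

```yaml
# Hypothetical litellm/config.yaml
model_list:
  - model_name: local-ollama          # alias exposed by LiteLLM (assumed)
    litellm_params:
      model: ollama/llama3            # assumes this model is pulled in Ollama
      api_base: http://ollama:11434   # assumes the compose service name "ollama"
```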
- Updated the `Caddyfile` to include a reverse proxy for `litellm` (defaulting to port 8009); see the sketch below.
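A rough sketch of that reverse-proxy block, assuming 8009 is the port Caddy listens on and that it forwards to LiteLLM's default container port 4000. The actual Caddyfile may use hostname environment variables instead:

```
# Hypothetical Caddyfile block
:8009 {
    reverse_proxy litellm:4000
}
```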
- Modified `start_services.py` (a sketch of the selection logic follows below):
  - Added a `--services` command-line argument to specify a comma-separated list of services to start (e.g., `litellm,ollama,open-webui`).
  - If `--services` is not provided, all services are started by default.
  - Selecting `open-webui` will also start `ollama`; selecting `litellm` will start `ollama` and `redis`.
  - The `supabase` services are always started.
  - The `caddy` service is automatically started if any other application services are selected.
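A minimal Python sketch of the selection logic described above. The function names, the service list, and the Supabase compose path are hypothetical; only the dependency rules come from this PR:

```python
"""Sketch of the --services handling; names and paths are hypothetical."""
import argparse
import subprocess

ALL_SERVICES = ["ollama", "open-webui", "litellm", "redis", "caddy"]  # assumed list

# Extra services pulled in when a service is selected (rules stated in this PR).
IMPLIED = {
    "open-webui": {"ollama"},
    "litellm": {"ollama", "redis"},
}


def resolve_services(requested: str | None) -> set[str]:
    """Expand the --services value into the final set of services to start."""
    if not requested:
        return set(ALL_SERVICES)  # no flag: start everything

    selected = {name.strip() for name in requested.split(",") if name.strip()}
    for service in list(selected):
        selected |= IMPLIED.get(service, set())

    if selected:
        selected.add("caddy")  # caddy fronts any selected application service
    return selected


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--services",
        help="comma-separated list of services to start (default: all)",
    )
    args = parser.parse_args()

    services = sorted(resolve_services(args.services))

    # Supabase is always started, regardless of --services (path is an assumption).
    subprocess.run(
        ["docker", "compose", "-f", "supabase/docker/docker-compose.yml", "up", "-d"],
        check=True,
    )
    subprocess.run(["docker", "compose", "up", "-d", *services], check=True)


if __name__ == "__main__":
    main()
```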
- Updated `README.md` to:
  - Document the new `--services` flag and its usage.
  - Explain how to configure `litellm` using `litellm/config.yaml`.
  - Mention how to access `litellm` via Caddy.
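For example, based on the rules above, `python start_services.py --services litellm,open-webui` should bring up `litellm`, `open-webui`, their implied `ollama` and `redis` dependencies, `caddy`, and the always-on `supabase` services, while running `python start_services.py` with no flag starts everything.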