SaaS • Private Deployment • Docs • Discord
Deploy agents in your own data center or VPC and retain complete data security & control.
Includes support for RAG, API calling, and vision. Build and deploy LLM apps (agents) by writing a helix.yaml.
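As a sketch of the idea, a minimal helix.yaml might look like the following. The field names and values here are illustrative assumptions, not the definitive schema — see the Helix docs for the real one:

```yaml
# Hypothetical helix.yaml sketch -- field names are illustrative,
# check the Helix docs for the actual app schema.
apiVersion: app.aispec.org/v1alpha1
kind: AIApp
metadata:
  name: my-agent          # placeholder name
spec:
  assistants:
    - name: default
      model: llama3:instruct            # assumed model identifier
      system_prompt: |
        You are a helpful assistant for my-company's internal docs.
```

Committing a file like this to version control lets you treat agents as declarative, reviewable configuration rather than ad-hoc UI state.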
Our GPU scheduler packs models efficiently into available GPU memory and dynamically loads and unloads models depending on demand.
Use our quickstart installer:
```shell
curl -sL -O https://get.helixml.tech/install.sh
chmod +x install.sh
sudo ./install.sh
```
The installer will prompt you before making changes to your system. By default, the dashboard will be available at http://localhost:8080.
To set up a deployment with a DNS name, see `./install.sh --help` or read the detailed docs, which cover easy TLS termination.
Attach your own GPU runners per the runners docs, or use any external OpenAI-compatible LLM.
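"OpenAI-compatible" means any backend that implements the standard OpenAI chat-completions endpoint. As an illustration, such an endpoint accepts requests shaped like this (the URL, model name, and API key are placeholders, not Helix-specific values):

```shell
# Placeholder endpoint and credentials -- substitute your provider's values.
curl https://llm.example.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "llama3:instruct",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```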
Use our helm charts:
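A typical Helm workflow looks like the sketch below. The chart repository URL, chart name, and value keys are assumptions for illustration — consult the Helix Helm chart docs for the real ones:

```shell
# Placeholder repo URL, chart name, and value keys -- see the Helix
# Helm chart docs for the actual ones.
helm repo add helix https://charts.example.com
helm repo update
helm install helix helix/helix-controlplane \
  --set global.serverUrl=https://helix.example.com
```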
For local development, refer to the Helix local development guide.
Helix is licensed under a similar license to Docker Desktop. You can run the source code (in this repo) for free for:
- Personal Use: individuals or people personally experimenting
- Educational Use: schools/universities
- Small Business Use: companies with under $10M annual revenue and fewer than 250 employees
If you fall outside these terms, please use the Launchpad to purchase a license for large commercial use. Trial licenses are available for experimentation.
You are not allowed to use our code to build a product that competes with us.
Contributions to the source code are welcome, and by contributing you confirm that your changes will fall under the same license and be owned by HelixML, Inc.
- We generate revenue to support the development of Helix. We are an independent software company.
- We don't want cloud providers to take our open source code and build a rebranded service on top of it.
If you would like to use some part of this code under a more permissive license, please get in touch.