# Model to use - llama3, llama3.1, or llama3.2 all work well for local usage. In the UI you will have a list of popular models to choose from, so the model set here is just a starting point.
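As a starting point, that comment might sit next to a line like the following in the `.env` (a sketch; the exact key name is an assumption based on typical Ollama setups — check this project's `.env.sample` for the real one):

```
# Assumed key name - starting point only; the UI lets you switch models later
OLLAMA_MODEL=llama3.1
```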
README.md (+36 −4)
@@ -91,7 +91,7 @@ is in system PATH or whatever version you downloaded
### Optional - Download Checkpoints - ONLY IF YOU ARE USING THE LOCAL TTS
-If you are only using speech with OpenAI or ElevenLabs, you don't need this. To use the local TTS, download the checkpoints for the models used in this project (the Docker image already has the local XTTS in it). You can download them from the GitHub releases page, extract the zip, and put it into the project folder.
+If you are only using speech with OpenAI or ElevenLabs, you don't need this. To use the local TTS, download the checkpoints for the models used in this project (the Docker image already has the local XTTS and checkpoints in it). You can download them from the GitHub releases page, extract the zip, and put it into the project folder.
This is for running with an NVIDIA GPU; it assumes the NVIDIA Container Toolkit and cuDNN are installed.
-This image is huge when built because of all the checkpoints, the CUDA base image, build tools, and audio tools, so there is no need to download the checkpoints and XTTS separately; they are already in the image. It is all set up to use XTTS; if you're not using XTTS for speech it should still work, but it is a large Docker image and will take a while to build. If you don't want to deal with that, run the app natively and don't use Docker.
+This image is huge when built because of all the checkpoints, the CUDA base image, build tools, and audio tools, so there is no need to download the checkpoints and XTTS separately; they are already in the image. It is all set up to use XTTS; if you're not using XTTS for speech it should still work, but it is a large Docker image and will take a while to build. If you don't want to deal with that, run the app natively, or build your own image without the xtts and checkpoints folders if you are not using the local TTS.
This guide will help you quickly set up and run the **Voice Chat AI** Docker container. Ensure you have Docker installed and that your `.env` file is placed in the same directory you run the commands from. If you get CUDA errors, make sure the NVIDIA Container Toolkit for Docker is installed and cuDNN is on your path.
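The prerequisites above can be sanity-checked before starting the container (a sketch, assuming a POSIX shell; it only reports status rather than failing, so it is safe to run anywhere):

```shell
# Pre-flight check: report whether the tools this guide assumes are on PATH.
for tool in docker nvidia-smi; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: not found"
  fi
done
# The guide also assumes .env sits in the directory you run the commands from.
[ -f .env ] && echo ".env: present" || echo ".env: missing (copy .env.sample first)"
```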
@@ -146,7 +146,7 @@ This guide will help you quickly set up and run the **Voice Chat AI** Docker con
---
-## 🖥️ Run on Windows using Docker Desktop
+## 🖥️ Run on Windows using Docker Desktop - prebuilt image
On Windows using Docker Desktop, run this in a Windows terminal:
Make sure `.env` is in the same folder you are running this from.
```bash
@@ -201,7 +201,7 @@ docker stop voice-chat-ai
docker rm voice-chat-ai
```
-## Build it yourself:
+## Build it yourself with CUDA:
```bash
docker build -t voice-chat-ai .
@@ -218,6 +218,36 @@ Running from wsl
docker run -d --gpus all -e "PULSE_SERVER=/mnt/wslg/PulseServer" -v \\wsl$\Ubuntu\mnt\wslg:/mnt/wslg/ --env-file .env --name voice-chat-ai -p 8000:8000 voice-chat-ai:latest
1. Rename `.env.sample` to `.env` in the root directory of the project and configure it with the necessary environment variables. The app is controlled by the variables you add.
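A minimal `.env` might look like the fragment below (a sketch; the variable names are illustrative assumptions, not confirmed from this project's `.env.sample` - copy the real keys from that file):

```
# Illustrative keys only - check .env.sample for the names this app actually reads
MODEL_PROVIDER=ollama        # or openai
TTS_PROVIDER=xtts            # or openai / elevenlabs
OPENAI_API_KEY=your-key-here # only needed for the OpenAI provider
```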