
Commit dde0224

mtuliojaluma and Javier Martinez authored
fix(docs): Fix concepts.mdx referencing to installation page (#1779)
* Fix/update concepts.mdx referencing to installation page

  The link for `/installation` is broken in the "Main Concepts" page. The correct path would be `./installation` or maybe `/installation/getting-started/installation`

* fix: docs

---------

Co-authored-by: Javier Martinez <[email protected]>
1 parent 067a5f1 commit dde0224

1 file changed: +3, -3 lines


fern/docs/pages/installation/concepts.mdx

Lines changed: 3 additions & 3 deletions
@@ -15,13 +15,13 @@ You get to decide the setup for these 3 main components:
 There is an extra component that can be enabled or disabled: the UI. It is a Gradio UI that allows to interact with the API in a more user-friendly way.
 
 ### Setups and Dependencies
-Your setup will be the combination of the different options available. You'll find recommended setups in the [installation](/installation) section.
+Your setup will be the combination of the different options available. You'll find recommended setups in the [installation](./installation) section.
 PrivateGPT uses poetry to manage its dependencies. You can install the dependencies for the different setups by running `poetry install --extras "<extra1> <extra2>..."`.
 Extras are the different options available for each component. For example, to install the dependencies for a a local setup with UI and qdrant as vector database, Ollama as LLM and HuggingFace as local embeddings, you would run
 
 `poetry install --extras "ui vector-stores-qdrant llms-ollama embeddings-huggingface"`.
 
-Refer to the [installation](/installation) section for more details.
+Refer to the [installation](./installation) section for more details.
 
 ### Setups and Configuration
 PrivateGPT uses yaml to define its configuration in files named `settings-<profile>.yaml`.
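
The `settings-<profile>.yaml` line above describes PrivateGPT's profile-based configuration. A minimal sketch of how such a profile might be activated at run time, assuming a `PGPT_PROFILES` environment variable and the `private_gpt` entry point; the profile name `local` is an illustrative assumption and none of this is part of the commit:

```bash
# Sketch: run PrivateGPT with an extra settings profile layered over settings.yaml.
# Assumes a settings-local.yaml file exists next to settings.yaml; the profile
# name "local" and the PGPT_PROFILES variable are assumptions for illustration.
PGPT_PROFILES=local poetry run python -m private_gpt
```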
@@ -57,4 +57,4 @@ For local LLM there are two options:
 In order for LlamaCPP powered LLM to work (the second option), you need to download the LLM model to the `models` folder. You can do so by running the `setup` script:
 ```bash
 poetry run python scripts/setup
-```
+```
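
For the LlamaCPP option touched by the second hunk, a minimal sketch of the combined dependency-and-model flow, assuming the extras naming convention shown in the docs above; the `llms-llama-cpp` extra is an assumption, not taken from this commit:

```bash
# Sketch: install a llama-cpp based setup, then download the default model
# into the models/ folder via the setup script described above.
# The extras names here follow the docs' naming convention and are assumptions.
poetry install --extras "ui llms-llama-cpp embeddings-huggingface vector-stores-qdrant"
poetry run python scripts/setup
```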
