various docs fixes #5120

Merged: 9 commits, Apr 11, 2025
11 changes: 6 additions & 5 deletions docs/docs/customize/deep-dives/docs.mdx
@@ -70,7 +70,7 @@ To add a single documentation site, we recommend using the **Add Documentation**

In the **Add Documentation** Form, enter a `Title` and `Start URL` for the site.

-- `Title`: The name of the documentation site, used for identification in the UI
+- `Title`: The name of the documentation site, used for identification in the UI.
- `Start URL`: The URL where the indexing process should begin.

Indexing will begin upon submission. Progress can be viewed in the form or later in the `@docs indexes` section of the `More` page.
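
If you prefer to configure docs directly in your config file, the same information goes in a `docs` block. Below is a minimal sketch; the `name` and `startUrl` field names are assumptions to verify against the current `config.yaml` schema:

```yaml title="config.yaml"
docs:
  - name: Continue # display title for the site (assumed field name)
    startUrl: https://docs.continue.dev # where crawling begins (assumed field name)
```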
@@ -154,9 +154,9 @@ As with [@Codebase context provider configuration](./codebase#configuration), yo
</TabItem>
</Tabs>

-### Github
+### GitHub

-The Github API rate limits public requests to 60 per hour. If you want to reliably index Github repos, you can add a github token to your config file:
+The GitHub API rate limits public requests to 60 per hour. If you want to reliably index GitHub repos, you can add a GitHub token to your config file:
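
(The collapsed config examples below show the exact form. As a rough, hypothetical sketch, the token is attached to the `@docs` provider roughly like this; `params.githubToken` is an assumed field name and location, so verify it against the example that follows:)

```yaml title="config.yaml"
context:
  - provider: docs
    params:
      githubToken: <YOUR_GITHUB_TOKEN> # assumed field name/location; check the example below
```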

<Tabs groupId="config-example">
<TabItem value="yaml" label="YAML">
@@ -223,7 +223,7 @@ Chromium crawling has been deprecated
Further notes:

- If the site is only locally accessible, the default crawler will fail anyway and fall back to the local crawler. `useLocalCrawling` is especially useful if the URL itself is confidential.
-- For Github Repos this has no effect because only the Github Crawler will be used, and if the repo is private it can only be accessed with a priveleged Github token anyways.
+- For GitHub repos this has no effect because only the GitHub crawler will be used, and if the repo is private it can only be accessed with a privileged GitHub token anyway.

## Managing your docs indexes

@@ -237,6 +237,7 @@ You can view indexing statuses and manage your documentation sites from the `@do
![More Page @docs indexes section](/img/docs-indexes.png)

You can also view the overall status of currently indexing docs from a hideable progress bar at the bottom of the chat page.

![Documentation indexing peek](/img/docs-indexing-peek.png)

You can also use the following IDE command to force a re-index of all docs: `Continue: Docs Force Re-Index`.
@@ -327,7 +328,7 @@ The following configuration example includes:
- Examples of both public and private documentation sources
- A custom embeddings provider
- A reranker model, with reranking parameters customized
-- A Github token to enable Github crawling
+- A GitHub token to enable GitHub crawling

<Tabs groupId="config-example">
<TabItem value="yaml" label="YAML">
4 changes: 2 additions & 2 deletions docs/docs/customize/model-providers/more/asksage.mdx
@@ -4,7 +4,7 @@ import TabItem from "@theme/TabItem";
import Tabs from "@theme/Tabs";

:::info
-To get an Ask Sage API key login to the Ask Sage platform (If you don't have an account, you can create one [here](https://chat.asksage.ai/)) and follow the instructions in the Ask Sage Docs:[Ask Sage API Key](https://docs.asksage.ai/docs/api-documentation/api-documentation.html)
+To get an Ask Sage API key, log in to the Ask Sage platform (if you don't have an account, you can create one [here](https://chat.asksage.ai/)) and follow the instructions in the Ask Sage Docs: [Ask Sage API Key](https://docs.asksage.ai/docs/api-documentation/api-documentation.html)
:::

## Configuration
@@ -55,7 +55,7 @@ Currently, the setup for the models provided by Ask Sage is to support the follo

More models, functionality, and documentation for the Ask Sage integration will be added in the future.

-> We recommend to utilize the`OpenAI` or `Anthropic` models for the best performance and results for the `Chat` and `Edit` functionalities.
+> We recommend using the `OpenAI` or `Anthropic` models for the best performance and results with the `Chat` and `Edit` functionalities.

## Ask Sage Documentation

4 changes: 2 additions & 2 deletions docs/docs/customize/model-providers/more/siliconflow.mdx
@@ -81,8 +81,8 @@ We recommend configuring **Qwen/Qwen2.5-Coder-7B-Instruct** as your autocomplete

## Embeddings model

-SiliconFlow provide some embeddings models, [Click here](https://siliconflow.cn/models) to see a list of embeddings models.
+SiliconFlow provides several embeddings models. [Click here](https://siliconflow.cn/models) to see the list.

## Reranking model

-SiliconFlow provide some reranking models, [Click here](https://siliconflow.cn/models) to see a list of reranking models.
+SiliconFlow provides several reranking models. [Click here](https://siliconflow.cn/models) to see the list.
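
As a rough illustration of how these fit into a config, the sketch below wires in one embeddings model and one reranker. The model IDs (`BAAI/bge-m3`, `BAAI/bge-reranker-v2-m3`) and the `roles` syntax are assumptions to verify against the model list linked above and the current `config.yaml` reference:

```yaml title="config.yaml"
models:
  - name: SiliconFlow Embeddings
    provider: siliconflow
    model: BAAI/bge-m3 # assumed model ID; check the linked model list
    apiKey: <YOUR_SILICONFLOW_API_KEY>
    roles:
      - embed
  - name: SiliconFlow Reranker
    provider: siliconflow
    model: BAAI/bge-reranker-v2-m3 # assumed model ID; check the linked model list
    apiKey: <YOUR_SILICONFLOW_API_KEY>
    roles:
      - rerank
```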
42 changes: 42 additions & 0 deletions docs/docs/customize/model-roles/chat.mdx
@@ -257,6 +257,14 @@ If your local machine can run an 8B parameter model, then we recommend running L
    model: llama3.1:8b
```
</TabItem>
+<TabItem value="Msty">
+```yaml title="config.yaml"
+models:
+  - name: Llama 3.1 8B
+    provider: msty
+    model: llama3.1:8b
+```
+</TabItem>
</Tabs>
</TabItem>
<TabItem value="json" label="JSON">
@@ -287,6 +295,19 @@ If your local machine can run an 8B parameter model, then we recommend running L
}
```
</TabItem>
+<TabItem value="Msty">
+```json title="config.json"
+{
+  "models": [
+    {
+      "title": "Llama 3.1 8B",
+      "provider": "msty",
+      "model": "llama3.1:8b"
+    }
+  ]
+}
+```
+</TabItem>
</Tabs>
</TabItem>
</Tabs>
@@ -325,6 +346,14 @@ If your local machine can run a 16B parameter model, then we recommend running D
    model: deepseek-coder-v2:16b
```
</TabItem>
+<TabItem value="Msty">
+```yaml title="config.yaml"
+models:
+  - name: DeepSeek Coder 2 16B
+    provider: msty
+    model: deepseek-coder-v2:16b
+```
+</TabItem>
</Tabs>
</TabItem>
<TabItem value="json" label="JSON">
@@ -356,6 +385,19 @@ If your local machine can run a 16B parameter model, then we recommend running D
}
```
</TabItem>
+<TabItem value="Msty">
+```json title="config.json"
+{
+  "models": [
+    {
+      "title": "DeepSeek Coder 2 16B",
+      "provider": "msty",
+      "model": "deepseek-coder-v2:16b"
+    }
+  ]
+}
+```
+</TabItem>
</Tabs>
</TabItem>
</Tabs>
4 changes: 2 additions & 2 deletions docs/docs/customize/model-roles/edit.mdx
@@ -29,8 +29,8 @@ In Continue, you can add `edit` to a model's roles to specify that it can be use
Set the `experimental.modelRoles.inlineEdit` property in `config.json`.
```json title="config.json"
{
-"models": {
-"name": "Claude 3.5 Sonnet",
+  "models": {
+    "name": "Claude 3.5 Sonnet",
    "provider": "anthropic",
    "model": "claude-3-5-sonnet-latest",
    "apiKey": "<YOUR_ANTHROPIC_API_KEY>"
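Assembled end to end, the snippet above presumably expands to something like the following sketch, assuming `experimental.modelRoles.inlineEdit` references the model by its `title`:

```json title="config.json"
{
  "models": [
    {
      "title": "Claude 3.5 Sonnet",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-latest",
      "apiKey": "<YOUR_ANTHROPIC_API_KEY>"
    }
  ],
  "experimental": {
    "modelRoles": {
      "inlineEdit": "Claude 3.5 Sonnet"
    }
  }
}
```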
4 changes: 2 additions & 2 deletions docs/docs/customize/tutorials/custom-code-rag.mdx
@@ -22,7 +22,7 @@ Most embeddings models can only handle a limited amount of text at once. To get
If you use `voyage-code-3`, it has a maximum context length of 16,000 tokens, which is enough to fit most files. This means that in the beginning you can get away with a more naive strategy of truncating files that exceed the limit. In order from easiest to most comprehensive, three chunking strategies you can use are:

1. Truncate the file when it goes over the context length: in this case you will always have 1 chunk per file.
-2. Split the file into chunks of a fixed length: starting at the top of the file, add lines you your current chunk until it reaches the limit, then start a new chunk.
+2. Split the file into chunks of a fixed length: starting at the top of the file, add lines to your current chunk until it reaches the limit, then start a new chunk.
3. Use a recursive, abstract syntax tree (AST)-based strategy: this is the most exact, but most complex. In most cases you can achieve high quality results by using (1) or (2), but if you'd like to try this you can find a reference example in [our code chunker](https://github.com/continuedev/continue/blob/main/core/indexing/chunk/code.ts) or in [LlamaIndex](https://docs.llamaindex.ai/en/stable/api_reference/node_parsers/code/).

As usual in this guide, we recommend starting with the strategy that gives 80% of the benefit with 20% of the effort.
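
A minimal TypeScript sketch of strategy (2), fixed-length line-based chunking, is below. The roughly 4-characters-per-token estimate and the 512-token budget are illustrative assumptions; a real pipeline would count tokens with the tokenizer that matches its embeddings model:

```typescript
interface Chunk {
  filepath: string;
  startLine: number;
  endLine: number;
  content: string;
}

const MAX_CHUNK_TOKENS = 512; // illustrative budget, not a recommended value

// Rough token estimate (~4 characters per token); swap in a real tokenizer.
function approxTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function chunkByLines(filepath: string, contents: string): Chunk[] {
  const lines = contents.split("\n");
  const chunks: Chunk[] = [];
  let start = 0;
  let current: string[] = [];

  for (let i = 0; i < lines.length; i++) {
    const withNextLine = [...current, lines[i]].join("\n");
    if (current.length > 0 && approxTokens(withNextLine) > MAX_CHUNK_TOKENS) {
      // The current chunk is full: emit it and start a new one at this line.
      chunks.push({ filepath, startLine: start, endLine: i - 1, content: current.join("\n") });
      start = i;
      current = [];
    }
    current.push(lines[i]);
  }
  if (current.length > 0) {
    chunks.push({ filepath, startLine: start, endLine: lines.length - 1, content: current.join("\n") });
  }
  return chunks;
}
```

Note that a single line longer than the budget still becomes its own oversized chunk here; truncating such lines (strategy 1) is the simple escape hatch.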
@@ -79,7 +79,7 @@ Regardless of which database or model you have chosen, your script should iterat

In a perfect production version, you would want to build "automatic, incremental indexing", so that whenever a file changes, that file and nothing else is automatically re-indexed. This has the benefits of perfectly up-to-date embeddings and lower cost.

-That said, we highly recommend first building and testing the pipeline before attempting this. Unless your codebase is being entirely rewritten frequently, a full refresh of the index is likely to be sufficient and reasonably cheap.
+That said, we highly recommend first building and testing the pipeline before attempting this. Unless your codebase is being entirely rewritten frequently, an incremental refresh of the index is likely to be sufficient and reasonably cheap.
:::
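
A sketch of the bookkeeping that makes incremental indexing work: persist a map of file path to content hash between runs, and re-embed only files whose hash changed. The file format and the helper names below (`filesToReindex`, `hashes.json`) are illustrative assumptions:

```typescript
import { createHash } from "node:crypto";
import { existsSync, readFileSync, writeFileSync } from "node:fs";

// Persisted map of file path -> content hash from the previous indexing run.
type HashIndex = Record<string, string>;

function sha256(text: string): string {
  return createHash("sha256").update(text).digest("hex");
}

// Returns only the files whose contents changed since the last run and
// writes the refreshed hash index back to disk (e.g. "hashes.json").
function filesToReindex(files: string[], indexPath: string): string[] {
  const previous: HashIndex = existsSync(indexPath)
    ? JSON.parse(readFileSync(indexPath, "utf8"))
    : {};
  const next: HashIndex = {};
  const changed: string[] = [];

  for (const file of files) {
    const hash = sha256(readFileSync(file, "utf8"));
    next[file] = hash;
    if (previous[file] !== hash) {
      changed.push(file); // new or modified since the previous run
    }
  }
  writeFileSync(indexPath, JSON.stringify(next, null, 2));
  return changed;
}
```

Deletions would also need handling: any key in the previous index that is absent from `next` should be removed from the vector database.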

At this point, you've written your indexing script and tested that you can make queries from your vector database. Now, you'll want a plan for when to run the indexing script.
4 changes: 2 additions & 2 deletions docs/docs/customize/tutorials/llama3.1.mdx
@@ -198,7 +198,7 @@ Cerebras Inference uses specialized silicon to provide fast inference for the L
  - name: Cerebras Llama 3.1 70B
    provider: cerebras
    model: llama3.1-70b
-    apiKey: <YOUR_ANTHROPIC_API_KEY>
+    apiKey: <YOUR_CEREBRAS_API_KEY>
```
</TabItem>
<TabItem value="json" label="JSON">
@@ -209,7 +209,7 @@ Cerebras Inference uses specialized silicon to provide fast inference for the L
"title": "Cerebras Llama 3.1 70B",
"provider": "cerebras",
"model": "llama3.1-70b",
"apiKey": "<YOUR_ANTHROPIC_API_KEY>"
"apiKey": "<YOUR_CEREBRAS_API_KEY>"
}
]
}
3 changes: 0 additions & 3 deletions docs/docs/customize/tutorials/set-up-codestral.mdx
@@ -61,6 +61,3 @@ import TabItem from "@theme/TabItem";

5. If you run into any issues or have any questions, please join our Discord and post in the `#help` channel [here](https://discord.gg/EfJEfdFnDQ)

-### Ask for help on Discord
-
-Please join our Discord and post in the `#help` channel [here](https://discord.gg/EfJEfdFnDQ) if you are having problems using Codestral
2 changes: 1 addition & 1 deletion docs/docs/hub/blocks/block-types.md
@@ -15,7 +15,7 @@ Continue supports [many model providers](../../customize/model-providers), inclu

## Context

-Context blocks define a context provider which can be referenced in Chat with `@` to pull in data from external sources such as files and folders, a URL, Jira or Confluence, and Github issues, among others. [Explore context provider blocks](https://hub.continue.dev/explore/context) on the hub.
+Context blocks define a context provider which can be referenced in Chat with `@` to pull in data from external sources such as files and folders, a URL, Jira or Confluence, and GitHub issues, among others. [Explore context provider blocks](https://hub.continue.dev/explore/context) on the hub.

Learn more about context providers [here](../../reference.md#context), and check out [this guide](../../customize/tutorials/build-your-own-context-provider.mdx) to creating your own custom context provider. The `config.yaml` spec for context can be found [`here`](../../reference.md#context).
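
As a quick illustration, declaring a few context blocks in `config.yaml` might look like the sketch below; the provider names are common built-ins, but verify them against the reference linked above:

```yaml title="config.yaml"
context:
  - provider: docs # @docs: indexed documentation sites
  - provider: folder # @folder: files and folders
  - provider: url # @url: pull in a web page
```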
