Docs: upload images to r2 #548


Status: Open · wants to merge 2 commits into base `main`
2 changes: 1 addition & 1 deletion en/community/docs-contribution.md
@@ -13,7 +13,7 @@ We categorize documentation issues into two main types:

If you encounter errors while reading a document or wish to suggest modifications, please use the **"Edit on GitHub"** button located in the table of contents on the right side of the document page. Utilize GitHub's built-in online editor to make your changes, then submit a pull request with a concise description of your edits. Please format your pull request title as `Fix: Update xxx`. We'll review your submission and merge the changes if everything looks good.

![](../.gitbook/assets/docs-contribution.png)
![](https://assets-docs.dify.ai/img/en/community/e9fca78c743762d464eb146fbc3d879d.webp)

Alternatively, you can post the document link on our [Issues page](https://github.com/langgenius/dify-docs/issues) with a brief description of the necessary modifications. We'll address these promptly upon receipt.
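The change in this hunk illustrates the pattern the whole PR applies: local `.gitbook/assets` paths are swapped for content-hashed URLs on `assets-docs.dify.ai`. A hypothetical sketch of that rewrite (the helper and mapping below are illustrative, not part of the PR; the CDN filenames are content hashes, so an explicit mapping is assumed rather than derived):

```python
import re

# Hypothetical mapping from each local GitBook asset to its uploaded CDN
# URL. The CDN filenames are content hashes, so they must be supplied
# explicitly; they cannot be computed from the old path alone.
CDN_MAP = {
    "../.gitbook/assets/docs-contribution.png":
        "https://assets-docs.dify.ai/img/en/community/e9fca78c743762d464eb146fbc3d879d.webp",
}

def rewrite_image_links(markdown: str, cdn_map: dict) -> str:
    """Swap local .gitbook/assets paths (in Markdown links or <img> tags)
    for their CDN URLs; unknown paths are left untouched."""
    pattern = re.compile(r'(?:\.\./)+\.gitbook/assets/[^\s)"]+')
    return pattern.sub(lambda m: cdn_map.get(m.group(0), m.group(0)), markdown)

doc = '![](../.gitbook/assets/docs-contribution.png)'
print(rewrite_image_links(doc, CDN_MAP))
```

Running this over a docs page would leave any asset not in the mapping unchanged, which makes the migration safe to apply file by file.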

4 changes: 2 additions & 2 deletions en/development/models-integration/gpustack.md
@@ -36,7 +36,7 @@ Using an LLM hosted on GPUStack as an example:

3. Click `Save` to deploy the model.

![gpustack-deploy-llm](../../.gitbook/assets/gpustack-deploy-llm.png)
![gpustack-deploy-llm](https://assets-docs.dify.ai/img/en/models-integration/9ba129b9bae6e6698217b9207c4ec911.webp)

## Create an API Key

@@ -60,6 +60,6 @@

Click "Save" to use the model in the application.

![add-gpustack-llm](../../.gitbook/assets/add-gpustack-llm.png)
![add-gpustack-llm](https://assets-docs.dify.ai/img/en/models-integration/ef6f8cfd721943783d1a3b6122256624.webp)

For more information about GPUStack, please refer to its [GitHub repo](https://github.com/gpustack/gpustack).
18 changes: 9 additions & 9 deletions en/development/models-integration/hugging-face.md
@@ -11,7 +11,7 @@ The specific steps are as follows:
2. Set the API key of Hugging Face ([obtain address](https://huggingface.co/settings/tokens)).
3. Select a model from the [Hugging Face model list page](https://huggingface.co/models?pipeline\_tag=text-generation\&sort=trending).

<figure><img src="../../.gitbook/assets/image (14) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/models-integration/dafa87c38d57e81d4b9e71e221b8a42d.webp" alt=""><figcaption></figcaption></figure>

Dify supports accessing models on Hugging Face in two ways:

@@ -24,17 +24,17 @@ Dify supports accessing models on Hugging Face in two ways:

The Hosted Inference API is supported only when the model details page shows a Hosted Inference API section on its right side, as in the figure below:

<figure><img src="../../.gitbook/assets/check-hosted-api.png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/models-integration/2dab3b4e18ba2142888bb3164d891787.webp" alt=""><figcaption></figcaption></figure>

On the model details page, you can get the name of the model.

<figure><img src="../../.gitbook/assets/get-model-name.png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/models-integration/79678881bbf8773154bc72288e9921dd.webp" alt=""><figcaption></figcaption></figure>

#### 2 Using the Model in Dify

Select Hosted Inference API for Endpoint Type in `Settings > Model Provider > Hugging Face > Model Type`. As shown below:

<figure><img src="../../.gitbook/assets/create-model.png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/models-integration/90075a56ed15d952a3ac04d8cd678882.webp" alt=""><figcaption></figcaption></figure>

API Token is the API Key set at the beginning of the article. The model name is the model name obtained in the previous step.
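To sanity-check the token and model name outside Dify, a Hosted Inference API call boils down to a POST with the model name in the URL path and the token in a Bearer header. A minimal sketch (the `gpt2` model and the token value are placeholders; the request is only assembled, not sent, so the example stays offline):

```python
import json

API_URL_TEMPLATE = "https://api-inference.huggingface.co/models/{model}"

def build_inference_request(model_name: str, api_token: str, prompt: str):
    """Assemble URL, auth header, and JSON body for a Hosted Inference API
    call; actually sending it (e.g. with requests.post) is left out."""
    url = API_URL_TEMPLATE.format(model=model_name)
    headers = {"Authorization": f"Bearer {api_token}"}
    body = json.dumps({"inputs": prompt})
    return url, headers, body

url, headers, body = build_inference_request("gpt2", "hf_placeholder_token", "Hello")
print(url)  # https://api-inference.huggingface.co/models/gpt2
```

If this request succeeds with your own token and model name, the same pair should work in Dify's provider settings.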

@@ -44,26 +44,26 @@ API Token is the API Key set at the beginning of the article. The model name is

Inference Endpoint is only supported for models with the Inference Endpoints option under the Deploy button on the right side of the model details page. As shown below:

<figure><img src="../../.gitbook/assets/select-model-deploy.png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/models-integration/ddd118e18fc0b57323b757d6605bcf65.webp" alt=""><figcaption></figcaption></figure>

#### 2 Deploying the Model

Click the Deploy button for the model and select the Inference Endpoint option. If you have not yet added a payment card, you will be prompted to add one; simply follow the process. Once the card is added, the following interface appears: adjust the configuration as required, then click Create Endpoint in the lower-left corner to create the Inference Endpoint.

<figure><img src="../../.gitbook/assets/deploy-model.png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/models-integration/9dd475f2a873a4f14bcd6b5b178314da.webp" alt=""><figcaption></figcaption></figure>

After the model is deployed, you can see the Endpoint URL.

<figure><img src="../../.gitbook/assets/endpoint-url.png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/models-integration/98dccb0e2519e1f0c6183c03dc5306b3.webp" alt=""><figcaption></figcaption></figure>

#### 3 Using the Model in Dify

Select Inference Endpoints for Endpoint Type in `Settings > Model Provider > Hugging Face > Model Type`. As shown below:

<figure><img src="../../.gitbook/assets/use-model-in-dify.png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/models-integration/afe36ac5b91357687cae775c7d79bfe7.webp" alt=""><figcaption></figcaption></figure>

The API Token is the API Key set at the beginning of the article. The name of a Text-Generation model can be arbitrary, but the name of an Embeddings model must match the one on Hugging Face. The Endpoint URL is the one obtained after successfully deploying the model in the previous step.

<figure><img src="../../.gitbook/assets/endpoint-url-2.png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/models-integration/d50aa4a34851bd159140034d42a6c5b8.webp" alt=""><figcaption></figcaption></figure>

> Note: The "User name / Organization Name" for Embeddings needs to be filled in according to your deployment method on Hugging Face's [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/guides/access), using either the "[User name](https://huggingface.co/settings/account)" or the "[Organization Name](https://ui.endpoints.huggingface.co/)".
2 changes: 1 addition & 1 deletion en/development/models-integration/litellm.md
@@ -51,7 +51,7 @@ On success, the proxy will start running on `http://localhost:4000`

In `Settings > Model Providers > OpenAI-API-compatible`, fill in:

<figure><img src="../../.gitbook/assets/image (115).png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/models-integration/f69083ad920029a8141ca3f5db031379.webp" alt=""><figcaption></figcaption></figure>

* Model Name: `gpt-4`
* Base URL: `http://localhost:4000`
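With the values above plugged into Dify, the same OpenAI-compatible route can also be exercised directly against the proxy. A small sketch that only builds the request (it assumes the LiteLLM proxy is running locally on port 4000 and accepts the bare `/chat/completions` route; uncomment the last line to actually send it):

```python
import json
import urllib.request

def build_proxy_request(base_url: str, model: str, content: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request aimed at the local proxy."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_proxy_request("http://localhost:4000", "gpt-4", "ping")
print(req.full_url)  # http://localhost:4000/chat/completions
# urllib.request.urlopen(req)  # uncomment with the proxy running locally
```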
4 changes: 2 additions & 2 deletions en/development/models-integration/replicate.md
@@ -9,8 +9,8 @@ Specific steps are as follows:
3. Pick a model. Select the model under [Language models](https://replicate.com/collections/language-models) and [Embedding models](https://replicate.com/collections/embedding-models).
4. Add models in Dify's `Settings > Model Provider > Replicate`.

<figure><img src="../../.gitbook/assets/set-up-replicate.png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/models-integration/6cd6a9e4ae99fdbb87ff96a98a8b36b3.webp" alt=""><figcaption></figcaption></figure>

The API key is the API Key set in step 2. Model Name and Model Version can be found on the model details page:

<figure><img src="../../.gitbook/assets/replicate-version.png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/models-integration/301090201162d1eba3554ae36b39a355.webp" alt=""><figcaption></figcaption></figure>
2 changes: 1 addition & 1 deletion en/development/models-integration/xinference.md
@@ -33,7 +33,7 @@ There are two ways to deploy Xinference, namely [local deployment](https://githu

Visit `http://127.0.0.1:9997`, select the model and specification you need to deploy, as shown below:

<figure><img src="../../.gitbook/assets/image (16) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/models-integration/5db924ca8cb21a4916c698818202421c.webp" alt=""><figcaption></figcaption></figure>

As different models have different compatibility on different hardware platforms, please refer to [Xinference built-in models](https://inference.readthedocs.io/en/latest/models/builtin/index.html) to ensure the created model supports the current hardware platform.
4. Obtain the model UID
22 changes: 11 additions & 11 deletions en/guides/annotation/annotation-reply.md
@@ -17,13 +17,13 @@ The annotated replies feature essentially provides another set of retrieval-enha
4. If no match is found, the question will continue through the regular process (passing to LLM or RAG).
5. Once the annotated replies feature is disabled, the system will no longer match responses from annotations.

<figure><img src="../../.gitbook/assets/image (130).png" alt="" width="563"><figcaption><p>Annotated Replies Workflow</p></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/annotation/392c0d2847ce07c31d054f32c1103e4d.webp" alt="" width="563"><figcaption><p>Annotated Replies Workflow</p></figcaption></figure>
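The matching step in the workflow above can be sketched as an embedding-similarity lookup with a score threshold: if no stored annotation clears the threshold, the question falls through to the regular LLM/RAG path. A toy sketch with hand-made 2-D vectors standing in for real embeddings:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def match_annotation(question_vec, annotations, score_threshold=0.9):
    """Return the stored reply whose embedding best matches the question,
    but only if the similarity clears the threshold; None means the
    question falls through to the regular LLM / RAG pipeline."""
    best = max(annotations,
               key=lambda a: cosine_similarity(question_vec, a["vec"]),
               default=None)
    if best and cosine_similarity(question_vec, best["vec"]) >= score_threshold:
        return best["reply"]
    return None

annotations = [
    {"vec": [1.0, 0.0], "reply": "Canned answer A"},
    {"vec": [0.6, 0.8], "reply": "Canned answer B"},
]
print(match_annotation([0.99, 0.05], annotations))  # Canned answer A
print(match_annotation([0.0, -1.0], annotations))   # None
```

The Score Threshold parameter described later trades precision for recall: a higher threshold returns annotations only for near-identical questions, while a lower one matches more loosely.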

### Enabling Annotated Replies in Prompt Orchestration

Enable the annotated replies switch by navigating to **“Orchestrate -> Add Features”**:

<figure><img src="../../.gitbook/assets/annotated-replies.png" alt=""><figcaption><p>Enabling Annotated Replies in Prompt Orchestration</p></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/annotation/11d3c1b21e275834befd34df0d74bfd0.webp" alt=""><figcaption><p>Enabling Annotated Replies in Prompt Orchestration</p></figcaption></figure>

When enabling, you need to set the parameters for annotated replies, which include: Score Threshold and Embedding Model.

@@ -33,27 +33,27 @@ When enabling, you need to set the parameters for annotated replies, which inclu

Click save and enable, and the settings will take effect immediately. The system will generate embeddings for all saved annotations using the embedding model.

<figure><img src="../../.gitbook/assets/setting-parameters-for-annotated-replies.png" alt=""><figcaption><p>Setting Parameters for Annotated Replies</p></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/annotation/483f9e6e1b8a222868ac32e9b0b12350.webp" alt=""><figcaption><p>Setting Parameters for Annotated Replies</p></figcaption></figure>

### Adding Annotations in the Conversation Debug Page

You can directly add or edit annotations on the model response information in the debug and preview pages.

<figure><img src="../../.gitbook/assets/add-annotation-reply.png" alt=""><figcaption><p>Adding Annotated Replies</p></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/annotation/c753c1e2babd3cd4e40f349c53d03390.webp" alt=""><figcaption><p>Adding Annotated Replies</p></figcaption></figure>

Edit the response to the high-quality reply you need and save it.

<figure><img src="../../.gitbook/assets/editing-annotated-replies.png" alt=""><figcaption><p>Editing Annotated Replies</p></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/annotation/1cb0f1a4819287ca89c8e6ce3b56bbff.webp" alt=""><figcaption><p>Editing Annotated Replies</p></figcaption></figure>

Re-enter the same user question, and the system will use the saved annotation to reply to the user's question directly.

<figure><img src="../../.gitbook/assets/annotaiton-reply.png" alt=""><figcaption><p>Replying to User Questions with Saved Annotations</p></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/annotation/6350513833017c827660c273cd3dcdba.webp" alt=""><figcaption><p>Replying to User Questions with Saved Annotations</p></figcaption></figure>

### Enabling Annotated Replies in Logs and Annotations

Enable the annotated replies switch by navigating to “Logs & Ann. -> Annotations”:

<figure><img src="../../.gitbook/assets/logs-annotation-switch.png" alt=""><figcaption><p>Enabling Annotated Replies in Logs and Annotations</p></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/annotation/07c57ea858385985fa83ac30289cc138.webp" alt=""><figcaption><p>Enabling Annotated Replies in Logs and Annotations</p></figcaption></figure>

### Setting Parameters for Annotated Replies in the Annotation Backend

@@ -63,22 +63,22 @@ The parameters that can be set for annotated replies include: Score Threshold an

**Embedding Model:** This is used to vectorize the annotated text. Changing the model will regenerate the embeddings.

<figure><img src="../../.gitbook/assets/annotated-replies-initial.png" alt=""><figcaption><p>Setting Parameters for Annotated Replies</p></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/annotation/2eef1ac7dfeae549201c9e5e6ebbcdba.webp" alt=""><figcaption><p>Setting Parameters for Annotated Replies</p></figcaption></figure>

### Bulk Import of Annotated Q\&A Pairs

In the bulk import feature, you can download the annotation import template, edit the annotated Q\&A pairs according to the template format, and then import them in bulk.

<figure><img src="../../.gitbook/assets/bulk-import-annotated.png" alt=""><figcaption><p>Bulk Import of Annotated Q&A Pairs</p></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/annotation/a362886fc1f3f1e05fc0386950bb5a0f.webp" alt=""><figcaption><p>Bulk Import of Annotated Q&A Pairs</p></figcaption></figure>
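The downloadable template defines the authoritative format; purely as an illustration, a bulk-import file is essentially a two-column question/answer table, which could be produced like this (the column names here are hypothetical, not the real template's):

```python
import csv
import io

# Hypothetical Q&A pairs to import in bulk; the real column headers come
# from the template downloaded in the annotation backend.
pairs = [
    ("What is Dify?", "Dify is an LLM application development platform."),
    ("How do I reset my password?", "Use the 'Forgot password' link on the sign-in page."),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["question", "answer"])  # assumed header names
writer.writerows(pairs)
print(buf.getvalue().splitlines()[0])  # question,answer
```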

### Bulk Export of Annotated Q\&A Pairs

Through the bulk export feature, you can export all saved annotated Q\&A pairs in the system at once.

<figure><img src="../../.gitbook/assets/bulk-export-annotations.png" alt=""><figcaption><p>Bulk Export of Annotated Q&A Pairs</p></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/annotation/2bd8b91e75d8754d944095d76e295508.webp" alt=""><figcaption><p>Bulk Export of Annotated Q&A Pairs</p></figcaption></figure>

### Viewing Annotation Hit History

In the annotation hit history feature, you can view the edit history of all hits on the annotation, the user's hit questions, the response answers, the source of the hits, the matching similarity scores, the hit time, and other information. You can use this information to continuously improve your annotated content.

<figure><img src="../../.gitbook/assets/view-annotation-hit-history.png" alt=""><figcaption><p>Viewing Annotation Hit History</p></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/annotation/5b04cde5481067b07edbda3083fa9c8b.webp" alt=""><figcaption><p>Viewing Annotation Hit History</p></figcaption></figure>
2 changes: 1 addition & 1 deletion en/guides/annotation/logs.md
@@ -24,7 +24,7 @@ The logs currently do not include interaction records from the Prompt debugging
These annotations will be used for model fine-tuning in future versions of Dify to improve model accuracy and response style. The current preview version only supports annotations.
{% endhint %}

<figure><img src="../../.gitbook/assets/app-logs-ann.png" alt=""><figcaption><p>Mark logs to improve your app</p></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/annotation/950c84b05c12f92a8235e3265e42d200.webp" alt=""><figcaption><p>Mark logs to improve your app</p></figcaption></figure>

Clicking on a log entry will open the log details panel on the right side of the interface. In this panel, operators can annotate an interaction:

6 changes: 3 additions & 3 deletions en/guides/application-orchestrate/app-toolkits/README.md
@@ -4,19 +4,19 @@ In **Application Orchestration**, click **Add Feature** to open the application

The application toolbox provides various additional features for Dify's [applications](../#application_type):

<figure><img src="../../../.gitbook/assets/content_moderation (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/app-toolkits/497e3742914867ca48658fa2334f1a6d.webp" alt=""><figcaption></figcaption></figure>

### Conversation Opening

In conversational applications, the AI will proactively say the first sentence or ask a question. You can edit the content of the opening, including the initial question. Using conversation openings can guide users to ask questions, explain the application background, and lower the barrier for initiating a conversation.

<figure><img src="../../../.gitbook/assets/image (240).png" alt=""><figcaption><p>Conversation Opening</p></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/app-toolkits/03ec96d77980fb7de26f478f9a47dbb5.webp" alt=""><figcaption><p>Conversation Opening</p></figcaption></figure>

### Next Step Question Suggestions

Setting next step question suggestions allows the AI to generate 3 follow-up questions based on the previous conversation, guiding the next round of interaction.

<figure><img src="../../../.gitbook/assets/image (241).png" alt=""><figcaption></figcaption></figure>
<figure><img src="https://assets-docs.dify.ai/img/en/app-toolkits/ac8c64dcb98e22a22a80b9eeb2712014.webp" alt=""><figcaption></figcaption></figure>

### Citation and Attribution
