Commit 3a60167

update LLMs docs

1 parent 1b47722 commit 3a60167

File tree

7 files changed: +479 -2 lines changed

docs/doc/assets/ssd_car.jpg

67.3 KB

docs/doc/en/mllm/llm_deepseek.md

Lines changed: 120 additions & 1 deletion
@@ -26,7 +26,18 @@ The 1.5B version is essentially a distilled version based on Qwen2.5. That means
## Running DeepSeek R1 on MaixPy MaixCAM

As mentioned above, since the network structure is the same as Qwen2.5, please refer to the [Qwen Documentation](./llm_qwen.md).

### Models and Download Links

By default, if the `/root/models` directory in the system has no model, you can download one manually.

* **1.5B**:
  * Memory requirement: 1.8 GiB of CMM memory. See the [Memory Usage Documentation](../pro/memory.md) for an explanation.
  * Download link: https://huggingface.co/sipeed/deepseek-r1-distill-qwen-1.5B-maixcam2

For the download method, please refer to the [Qwen Documentation](./llm_qwen.md).

### Running the Model

@@ -74,3 +85,111 @@ err.check_raise(resp.err_code)
# print(resp.msg)
```

Result:
```
>> Hello, please introduce yourself.
<think>
Alright, the user sent "Hello, please introduce yourself." and then added a bot message saying, "You are Qwen, created by Alibaba Cloud."

I should respond warmly in Chinese. I need to let them know I'm here to help with any questions they have and also mention how I can assist them further.

I should phrase it politely, making sure it's clear and friendly. Also, it's good to offer to help them with whatever they're curious about.

I think that covers it. I'll keep it concise and positive.
</think>

Hello! I am an AI assistant developed by Alibaba's AI research company, and I am glad to help you. Is there anything I can help you with?

>> Please calculate 1990 + 35 and provide the calculation steps.
<think>
OK, let me look at the user's query. The user gave a calculation problem: "Please calculate 1990 plus 35 and provide the calculation steps." It looks like the user may be a student who has just learned math and needs addition practice, and wants detailed calculation steps.

First, I need to confirm the user's specific needs. The user explicitly asked for calculation steps, so I should carefully check that the calculation is correct and that the steps are clear. To calculate 1990 plus 35, start adding from the ones place, then the tens place, then the hundreds and thousands places, carrying where necessary.

I must make sure every step is accurate so that the user understands how each one is carried out. I should also check that the digits are aligned correctly to avoid slips. If the user needs it, I can offer other math problems to help them practice further.

In short, my answer should explain the calculation process in detail so the user can follow the addition step by step, while also offering some extra information and possible follow-up help.
</think>

OK, let me work through the steps of 1990 plus 35!

First, write down the two numbers:
```
  1990
+   35
-------
```

**Step 1: Align the digits**
Align the two numbers so that their place values line up: ones with ones, tens with tens, hundreds with hundreds, and thousands with thousands.

```
  1990
+   35
-------
```

**Step 2: Add the ones place**
The ones digits are 0 and 5, which add up to 5. Write down 5 in the ones place.

```
  1990
+   35
-------
     5
```

**Step 3: Handle the tens place**
The tens digits are 9 and 3, plus any carry from the ones place (there is none here). 9 plus 3 equals 12. Write down 2 and carry 1.

```
  1990
+   35
-------
    25
```

**Step 4: Handle the hundreds place**
The hundreds digits are 9 and 0, plus the carry (1). 9 plus 0 equals 9, and adding the carried 1 gives 10. Write down 0 and carry 1 to the thousands place.

```
  1990
+   35
-------
   125
```

**Step 5: Handle the thousands place**
The thousands digits are 1 and 0, plus the carry (1). 1 plus 0 equals 1, and adding the carried 1 gives 2. Write down 2.

```
  1990
+   35
-------
  2025
```

So the final result is:
```
1990 + 35 = 2025
```

>> please calculate 1990 + 35
<think>
Okay, so I need to calculate 1,9990 plus 35. Wait, that doesn't seem right. 1,990 sounds like a number with a comma in it, but I'm not sure. Maybe the comma is a thousands separator? So, 1,990 would be 1,990, right? Hmm, that makes more sense. So, I'm supposed to add 1,990 and 35.

Let me try that again. 1,900 plus 90 is 1,990. Yeah, okay, that's correct. So, 1,990 plus 35 would be adding 35 to 1,990. So, 1,990 plus 30 is 2,020, and then plus 5 makes 2,025. So, the answer should be 2,025. Wait, but I'm not a math expert, so maybe I should double-check that. 1,990 is the same as 1990, right? Yeah, 1,000 plus 990 is 1,990. Adding 35, so 1,990 plus 35 equals 2,025. Yeah, that makes sense.
</think>

The sum of 1,990 and 35 is calculated as follows:

1,990 + 35 = 2,025.

**Answer:** 2,025
```

## Modifying Parameters

Refer to the [Qwen Documentation](./llm_qwen.md).

docs/doc/en/mllm/llm_qwen.md

Lines changed: 91 additions & 0 deletions
@@ -135,6 +135,37 @@ err.check_raise(resp.err_code)
# print(resp.msg)
```

Result:
```
>> Hello, please introduce yourself.
Hello! I am Qwen, a pre-trained language model developed by Alibaba Cloud. I am designed to simulate the complexity and diversity of human language as much as possible. Although I have no personal memory or emotions, I can generate coherent and meaningful text. If you have any questions or need help, just let me know!

>> Please calculate 1990 + 35 and show the calculation process.
The calculation process is as follows:

1. First, add the two numbers: 1990 + 35.

2. Align the digits of 1990 and 35 as follows:

   1 9 9 0
 +     3 5
 ---------
   2 0 2 5

3. Add from right to left:

   0 + 5 = 5
   2 + 9 = 11 (carry: write 2, carry 10)
   1 + 9 = 10 (carry: write 0, carry 10)
   1 + 1 = 2

So the result of 1990 + 35 is 2025.

>> please calculate 1990 + 35
1990 + 35 = 2025

```
### Context

Due to limited resources, the context length is also limited. For example, the default model supports about 512 tokens, and at least 128 free tokens must remain to continue the dialogue. For instance, if the historical tokens reach 500 (which is less than 512 but not enough free tokens), further dialogue is not possible.
@@ -143,6 +174,66 @@ When the context is full, you currently must call `clear_context()` to clear the

Of course, this context length can be modified, but doing so requires re-quantizing the model. Also, longer context length can slow down model performance. If needed, you can convert the model yourself as described below.
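To make the budget rule above concrete, here is a minimal pure-Python sketch — not the MaixPy API; the constants, the `can_continue` helper, and the token counts are all hypothetical — of the logic: with a 512-token context, at least 128 tokens must remain free, otherwise the history has to be cleared (with `clear_context()` on the device) before the dialogue can continue.

```python
# Hypothetical sketch of the context-budget rule described above.
# CTX_LEN and MIN_FREE match the default model's documented limits.

CTX_LEN = 512    # total context length of the default model
MIN_FREE = 128   # minimum free tokens required to continue the dialogue

def can_continue(history_tokens: int) -> bool:
    """Return True if enough free tokens remain for another exchange."""
    return CTX_LEN - history_tokens >= MIN_FREE

history = 0
# (prompt_tokens, reply_tokens) for three made-up dialogue turns
for prompt_tokens, reply_tokens in [(20, 180), (30, 200), (25, 150)]:
    if not can_continue(history + prompt_tokens):
        # On the device you would call llm.clear_context() here;
        # this sketch just resets the counter.
        history = 0
    history += prompt_tokens + reply_tokens

print(history)  # tokens accumulated after the third turn
```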

## Modifying Parameters

The Qwen model allows certain parameters to be modified, which can change the model's behavior. Default values are typically set in the `model.mud` file. You can also set these values programmatically, for example:

```python
qwen.post_config.temperature = 0.9
```

An example configuration:

```ini
[post_config]
enable_temperature = true
temperature = 0.9

enable_repetition_penalty = false
repetition_penalty = 1.2
penalty_window = 20

enable_top_p_sampling = false
top_p = 0.8

enable_top_k_sampling = true
top_k = 10
```

These parameters **control the text generation behavior** of the Qwen model (or other large language models) through sampling strategies. They affect the **diversity, randomness, and repetition** of the output. Below is an explanation of each parameter:

* `enable_temperature = true`
* `temperature = 0.9`
  * **Meaning**: Enables the "temperature sampling" strategy and sets the temperature value to 0.9.
  * **Explanation**:
    * Temperature controls **randomness**. Lower values (e.g., 0.1) result in more deterministic outputs (similar to greedy search), while higher values (e.g., 1.5) increase randomness.
    * A recommended range is typically between `0.7 ~ 1.0`.
    * A value of 0.9 means a moderate increase in diversity without making the output too chaotic.
* `enable_repetition_penalty = false`
* `repetition_penalty = 1.2`
* `penalty_window = 20`
  * **Meaning**:
    * Repetition penalty is disabled, so the value `repetition_penalty = 1.2` has no effect.
    * If enabled, this mechanism reduces the probability of repeating tokens from the most recent `20` tokens.
  * **Explanation**:
    * Helps prevent the model from being verbose or getting stuck in repetitive loops (e.g., "hello hello hello…").
    * A penalty factor > 1 suppresses repetition. A common recommended range is `1.1 ~ 1.3`.
* `enable_top_p_sampling = false`
* `top_p = 0.8`
  * **Meaning**:
    * Top-p (nucleus) sampling is disabled.
    * If enabled, the model samples from **the smallest set of tokens whose cumulative probability exceeds p**, instead of from all tokens.
  * **Explanation**:
    * `top_p = 0.8` means sampling from the tokens whose cumulative probability just reaches 0.8.
    * More flexible than top-k, as it adapts the candidate set dynamically based on the token distribution at each generation step.
* `enable_top_k_sampling = true`
* `top_k = 10`
  * **Meaning**: Enables top-k sampling, where the model selects output tokens from the **top 10 most probable tokens**.
  * **Explanation**:
    * This is a way to constrain the sampling space and control output diversity.
    * `top_k = 1` approximates greedy search (most deterministic), while `top_k = 10` allows a moderate level of diversity.
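The sampling strategies above can be sketched in plain Python. This is a hypothetical illustration of how temperature, top-k, and top-p transform raw model scores (logits) into the distribution the next token is drawn from — it is not the MaixPy implementation, and `sample_next` is an invented helper:

```python
# Illustrative sketch of temperature / top-k / top-p sampling (not MaixPy code).
import math
import random

def sample_next(logits, temperature=0.9, top_k=10, top_p=None, rng=random):
    # 1. Temperature: divide logits before softmax. Values < 1.0 sharpen the
    #    distribution (more deterministic); values > 1.0 flatten it (more random).
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    # Token probabilities sorted from most to least likely.
    probs = sorted(((p / total, i) for i, p in enumerate(exps)), reverse=True)

    # 2. Top-k: keep only the k most probable tokens.
    if top_k is not None:
        probs = probs[:top_k]

    # 3. Top-p (nucleus): keep the smallest prefix of tokens whose cumulative
    #    probability reaches p.
    if top_p is not None:
        kept, cum = [], 0.0
        for p, i in probs:
            kept.append((p, i))
            cum += p
            if cum >= top_p:
                break
        probs = kept

    # Renormalize the surviving candidates and draw one token index.
    norm = sum(p for p, _ in probs)
    r = rng.random() * norm
    for p, i in probs:
        r -= p
        if r <= 0:
            return i
    return probs[-1][1]

# With a low temperature and top_k=1, this degenerates to greedy search and
# always returns the index of the largest logit:
print(sample_next([1.0, 3.0, 0.5], temperature=0.1, top_k=1))
```

Note how `top_k=1` removes all randomness regardless of temperature, while a larger `top_k` or an enabled `top_p` lets lower-ranked tokens compete.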
## Custom Quantized Models

The models provided above are quantized specifically for MaixCAM2. If you need to quantize your own models, refer to:

docs/doc/en/mllm/vlm_internvl.md

Lines changed: 32 additions & 0 deletions
@@ -17,9 +17,28 @@ update:
## Introduction to InternVL

VLM (Vision-Language Model) refers to a vision-language model that allows AI to generate text output based on both text and image input, such as describing the content of an image, meaning the AI has learned to interpret images.
InternVL supports multiple languages, such as Chinese and English.

MaixPy has integrated [InternVL2.5](https://huggingface.co/OpenGVLab/InternVL2_5-1B), which is based on Qwen2.5 with added image support. Therefore, some basic concepts are not detailed here. It is recommended to read the [Qwen](./llm_qwen.md) introduction first.

For example, with the image below, using the system prompt
`你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。`
(roughly: "You are InternVL, a multimodal large model developed jointly by Shanghai AI Laboratory and SenseTime; you are a helpful and harmless AI assistant.")
and the user prompt
`Describe the picture`
on MaixCAM2 with InternVL2.5-1B gives the following result (which is also the result of the code below):
![ssd_car.jpg](../../assets/ssd_car.jpg)
```
>> Describe what is in the picture
The picture shows a red double-decker bus parked on the road, with a black car in front of it. A person in a black jacket stands in front of the bus, smiling. The background shows city buildings with shops and several billboards. A pedestrian symbol is painted on the road.

>> Describe the picture
In the image, we see a vibrant street scene featuring a classic double-decker bus in red with "Things Get New Look!" written on its side. It’s parked on the street, where a woman stands smiling at the camera. Behind the bus, a row of classic buildings with large windows lines the street, contributing to the urban atmosphere. A black van is parked nearby, and there are a few people and street signs indicating traffic regulations. The overall scene captures a typical day in a historic city.
```
This is the result with casually chosen prompts. You can adjust the system and user prompts to fit your actual use case.
## Using InternVL in MaixPy MaixCAM

### Model and Download Link
@@ -110,6 +129,15 @@ err.check_raise(resp.err_code)
# print(resp.msg)
```

Result:
```
>> Describe what is in the picture
The picture shows a red double-decker bus parked on the road, with a black car in front of it. A person in a black jacket stands in front of the bus, smiling. The background shows city buildings with shops and several billboards. A pedestrian symbol is painted on the road.

>> Describe the picture
In the image, we see a vibrant street scene featuring a classic double-decker bus in red with "Things Get New Look!" written on its side. It’s parked on the street, where a woman stands smiling at the camera. Behind the bus, a row of classic buildings with large windows lines the street, contributing to the urban atmosphere. A black van is parked nearby, and there are a few people and street signs indicating traffic regulations. The overall scene captures a typical day in a historic city.
```

This loads an image from the system and asks the model to describe what’s in the image. Note that this model **does not support context**, meaning each call to the `send` function is a brand-new conversation and does not remember the content from previous `send` calls.
Additionally, the default model supports image input resolution of `364 x 364`. So when calling `set_image`, if the resolution doesn't match, it will automatically call `img.resize` to resize the image using the method specified by `fit`, such as `image.Fit.FIT_CONTAIN`, which resizes while maintaining the original aspect ratio and fills the surrounding space with black.
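The `FIT_CONTAIN` geometry described above can be sketched in plain Python — a hypothetical helper, not the MaixPy image API: scale the image to fit inside the 364 x 364 model input while keeping the aspect ratio, then center it, padding the remainder (with black in the real implementation).

```python
# Illustrative sketch of FIT_CONTAIN-style letterboxing (not MaixPy code).

def fit_contain(src_w, src_h, dst_w=364, dst_h=364):
    """Return (new_w, new_h, pad_x, pad_y) for an aspect-preserving resize."""
    # Pick the scale that fits the whole image inside the destination.
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst_w - new_w) // 2   # border added on left/right
    pad_y = (dst_h - new_h) // 2   # border added on top/bottom
    return new_w, new_h, pad_x, pad_y

# A 640x480 camera frame scaled into the 364x364 model input:
print(fit_contain(640, 480))  # -> (364, 273, 0, 45)
```

So a standard 4:3 camera frame fills the input's width and leaves 45-pixel black bands above and below.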
@@ -118,6 +146,10 @@ Additionally, the default model supports image input resolution of `364 x 364`.

Note that the length of the input text after being tokenized is limited. For example, the default 1B model supports 256 tokens, and the total tokens for input and output should not exceed 1023.

## Modifying Parameters

Refer to [Qwen Documentation](./llm_qwen.md)
## Custom Quantized Model

The model provided above is a quantized model for MaixCAM2. If you want to quantize your own model, refer to:

0 commit comments