
Commit 590140b

qiulangqinxuye and qinxuye authored
DOC: update troubleshooting.rst for the launch error caused by numpy (#3342)
Co-authored-by: qinxuye <[email protected]>
1 parent 2b796a9 commit 590140b

2 files changed: +83 −4 lines


doc/source/getting_started/troubleshooting.rst

+19
@@ -107,3 +107,22 @@ Missing ``model_engine`` parameter when launching LLM models
 
 Since version ``v0.11.0``, launching LLM models requires an additional ``model_engine`` parameter.
 For specific information, please refer to :ref:`here <about_model_engine>`.
+
+Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp-a34b3233.so.1 library.
+================================================================================================================
+
+When starting the Xinference server, you may hit the error "ValueError: Model architectures ['Qwen2ForCausalLM'] failed to be inspected. Please check the logs for more details."
+
+The logs show the error ``"Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp-a34b3233.so.1 library. Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it."``
+
+This is usually because your NumPy was installed by conda, and conda's NumPy is built with Intel MKL optimizations, which conflicts with the GNU OpenMP library (libgomp) that is already loaded in the environment. You can work around this by setting the threading layer when launching the server:
+
+.. code-block:: bash
+
+   MKL_THREADING_LAYER=GNU xinference-local
+
+Setting ``MKL_THREADING_LAYER=GNU`` forces Intel's Math Kernel Library (MKL) to use GNU's OpenMP implementation instead of Intel's own.
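A quick way to confirm the diagnosis above, namely that the NumPy being picked up is a conda-provided, MKL-linked build, is to inspect its install location and build configuration. A minimal diagnostic sketch, assuming NumPy is importable in the active environment (the output layout varies across NumPy versions):

.. code-block:: bash

   # Show where NumPy was installed from and how it was built; MKL-linked
   # builds typically mention "mkl" in the BLAS/LAPACK sections of show_config().
   python -c "import numpy; print(numpy.__file__); numpy.show_config()"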
+
+Alternatively, you can uninstall conda's NumPy and reinstall it with pip.
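One possible command sequence for that reinstall, sketched under the assumption that no other conda packages pin NumPy in the environment and that your conda supports the ``--force`` flag for ``conda remove``:

.. code-block:: bash

   # Remove only the conda NumPy package (--force leaves dependent packages in place),
   # then install the PyPI wheel, which is built against OpenBLAS rather than MKL.
   conda remove --force numpy
   pip install numpy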
+
+On a related note, if you use vLLM, do not install PyTorch with conda; see https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html for details.
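For the vLLM case, the linked guide generally recommends installing vLLM with pip in a fresh, isolated environment so that a compatible CUDA-enabled PyTorch wheel is resolved automatically; a minimal sketch along those lines:

.. code-block:: bash

   # Create an isolated environment and let pip pull in vLLM together with
   # a matching torch wheel, instead of mixing in a conda-installed PyTorch.
   python -m venv vllm-env
   source vllm-env/bin/activate
   pip install vllm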

doc/source/locale/zh_CN/LC_MESSAGES/getting_started/troubleshooting.po

+64 −4
@@ -8,7 +8,7 @@ msgid ""
 msgstr ""
 "Project-Id-Version: Xinference \n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2024-05-11 10:26+0800\n"
+"POT-Creation-Date: 2025-04-28 18:35+0800\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language: zh_CN\n"
@@ -17,7 +17,7 @@ msgstr ""
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=utf-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 2.11.0\n"
+"Generated-By: Babel 2.14.0\n"
 
 #: ../../source/getting_started/troubleshooting.rst:5
 msgid "Troubleshooting"
@@ -202,5 +202,65 @@ msgid ""
 "``model_engine`` parameter. For specific information, please refer to "
 ":ref:`here <about_model_engine>`."
 msgstr ""
-"自 ``v0.11.0`` 版本开始,加载 LLM 模型时需要传入额外参数 ``model_engine`` 。"
-"具体信息请参考 :ref:`这里 <about_model_engine>` 。"
+"自 ``v0.11.0`` 版本开始,加载 LLM 模型时需要传入额外参数 ``model_engine``"
+" 。具体信息请参考 :ref:`这里 <about_model_engine>` 。"
+
+#: ../../source/getting_started/troubleshooting.rst:112
+msgid ""
+"Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is "
+"incompatible with libgomp-a34b3233.so.1 library."
+msgstr ""
+"错误:mkl-service + Intel(R) MKL:MKL_THREADING_LAYER=INTEL 与 libgomp-a34b3233.so.1 库不兼容。"
+
+#: ../../source/getting_started/troubleshooting.rst:114
+msgid ""
+"When start Xinference server and you hit the error \"ValueError: Model "
+"architectures ['Qwen2ForCausalLM'] failed to be inspected. Please check "
+"the logs for more details. \""
+msgstr ""
+"在启动 Xinference 服务器时,如果遇到错误:“ValueError: Model architectures "
+"['Qwen2ForCausalLM'] failed to be inspected. Please check the logs for more details.”"
+
+#: ../../source/getting_started/troubleshooting.rst:116
+msgid ""
+"The logs shows the error, ``\"Error: mkl-service + Intel(R) MKL: "
+"MKL_THREADING_LAYER=INTEL is incompatible with libgomp-a34b3233.so.1 "
+"library. Try to import numpy first or set the threading layer "
+"accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.\"``"
+msgstr ""
+"日志中显示错误:Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL "
+"is incompatible with libgomp-a34b3233.so.1 library. Try to import numpy first "
+"or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it."
+
+
+#: ../../source/getting_started/troubleshooting.rst:118
+msgid ""
+"This is mostly because your NumPy is installed by conda and conda's Numpy"
+" is built with Intel MKL optimizations, which is causing a conflict with "
+"the GNU OpenMP library (libgomp) that's already loaded in the "
+"environment."
+msgstr ""
+"这通常是因为你的 NumPy 是通过 conda 安装的,而 conda 的 NumPy 是使用 Intel MKL 优化构建的,"
+"这导致它与环境中已加载的 GNU OpenMP 库(libgomp)产生冲突。"
+
+#: ../../source/getting_started/troubleshooting.rst:124
+msgid ""
+"Setting ``MKL_THREADING_LAYER=GNU`` forces Intel's Math Kernel Library to"
+" use GNU's OpenMP implementation instead of Intel's own implementation."
+msgstr ""
+"设置 MKL_THREADING_LAYER=GNU 可以强制 Intel 数学核心库(MKL)使用 GNU 的 OpenMP 实现,而不是使用 Intel 自己的实现。"
+
+#: ../../source/getting_started/troubleshooting.rst:126
+msgid "Or you can uninstall conda's numpy and reinstall with pip."
+msgstr ""
+"或者你也可以卸载 conda 安装的 numpy,然后使用 pip 重新安装。"
+
+#: ../../source/getting_started/troubleshooting.rst:128
+msgid ""
+"On a related subject, if you use vllm, do not install pytorch with conda,"
+" check "
+"https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html for "
+"detailed information."
+msgstr ""
+"相关地,如果你使用 vllm,不要通过 conda 安装 pytorch,详细信息请参考:https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html 。"
+
