Describe the bug
I started from this table, which suggests that several inference providers are compatible with text_generation.
Then, taking nebius as an example, I filtered the text-generation models served by this provider.
But when I try to call the client, I get:
ValueError: Model Qwen/Qwen3-4B is not supported for task text-generation and provider nebius. Supported task: conversational.
This also happens with other models and inference providers.
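As the error message indicates, only the conversational task seems to be mapped for this model/provider pair, so I assume the chat-completion route is what actually works. A minimal sketch of that variant (my assumption, not a confirmed fix):

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="Qwen/Qwen3-4B",
    provider="nebius",
    api_key=os.environ["HF_TOKEN"],
)
# conversational task, i.e. the one the error message says is supported
response = client.chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)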
Reproduction
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="Qwen/Qwen3-4B",
    provider="nebius",
    api_key=os.environ["HF_TOKEN"],
)
text = client.text_generation(
    prompt="What is the capital of France?",
)
print(text)
Logs
Send: curl -X GET -H 'Accept: */*' -H 'Accept-Encoding: gzip, deflate' -H 'Connection: keep-alive' -H 'authorization: <TOKEN>' -H 'user-agent: unknown/None; hf_hub/0.33.2; python/3.12.8; torch/2.6.0' 'https://huggingface.co/api/models/Qwen/Qwen3-4B?expand=inferenceProviderMapping'
Request 52984ed5-11a7-4a01-a565-d0f14d6d01c0: GET https://huggingface.co/api/models/Qwen/Qwen3-4B?expand=inferenceProviderMapping (authenticated: True)
Traceback (most recent call last):
  File ".../try.py", line 10, in <module>
    text = client.text_generation(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File ".../test/lib/python3.12/site-packages/huggingface_hub/inference/_client.py", line 2297, in text_generation
    request_parameters = provider_helper.prepare_request(
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".../test/lib/python3.12/site-packages/huggingface_hub/inference/_providers/_common.py", line 93, in prepare_request
    provider_mapping_info = self._prepare_mapping_info(model)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".../test/lib/python3.12/site-packages/huggingface_hub/inference/_providers/_common.py", line 162, in _prepare_mapping_info
    raise ValueError(
ValueError: Model Qwen/Qwen3-4B is not supported for task text-generation and provider nebius. Supported task: conversational.
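For what it's worth, the GET request in the log above can be reproduced directly to inspect which task each provider maps for the model. A sketch using plain requests against the same endpoint (the URL and the authorization header are taken from the log; the Bearer prefix is my assumption based on the usual HF API convention):

import os
import requests

# Same endpoint the client queries internally (see the GET request above)
url = "https://huggingface.co/api/models/Qwen/Qwen3-4B?expand=inferenceProviderMapping"
headers = {"authorization": f"Bearer {os.environ['HF_TOKEN']}"}
resp = requests.get(url, headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json().get("inferenceProviderMapping"))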
System info
- huggingface_hub version: 0.33.2
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.12.8
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Running in Google Colab Enterprise ?: No
- Token path ?: ...
- Has saved token ?: True
- Who am I ?: anakin87
- Configured git credential helpers: osxkeychain, store
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.6.0
- Jinja2: 3.1.6
- Graphviz: N/A
- keras: N/A
- Pydot: N/A
- Pillow: 11.3.0
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 2.2.6
- pydantic: 2.11.7
- aiohttp: 3.12.13
- hf_xet: 1.1.5
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: ...
- HF_ASSETS_CACHE: ...
- HF_TOKEN_PATH: ...
- HF_STORED_TOKENS_PATH: ...
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10