
Commit 500325f

Merge branch 'main' into feature/proper-mps-handling-for-macos
2 parents 38782af + e05ae18 commit 500325f

2 files changed: +151 −18 lines changed


inference_experimental/README.md

Lines changed: 149 additions & 16 deletions
@@ -178,17 +178,17 @@ like TensorRT engines) or additional **models**.
 Below there is a table showcasing models that are supported, with the hints regarding extra dependencies that
 are required.
 
-| Architecture        | Task Type               | Supported variants |
+| Architecture        | Task Type               | Supported backends |
 |---------------------|-------------------------|--------------------|
-| RFDetr              | `object-detection`      | TRT, Torch         |
-| YOLO v8             | `object-detection`      | ONNX, TRT          |
-| YOLO v8             | `instance-segmentation` | ONNX, TRT          |
-| YOLO v9             | `object-detection`      | ONNX, TRT          |
-| YOLO v10            | `object-detection`      | ONNX, TRT          |
-| YOLO v11            | `object-detection`      | ONNX, TRT          |
-| YOLO v11            | `instance-segmentation` | ONNX, TRT          |
-| Perception Encoder  | `embedding`             | Torch              |
-| CLIP                | `embedding`             | Torch, ONNX        |
+| RFDetr              | `object-detection`      | `trt`, `torch`     |
+| YOLO v8             | `object-detection`      | `onnx`, `trt`      |
+| YOLO v8             | `instance-segmentation` | `onnx`, `trt`      |
+| YOLO v9             | `object-detection`      | `onnx`, `trt`      |
+| YOLO v10            | `object-detection`      | `onnx`, `trt`      |
+| YOLO v11            | `object-detection`      | `onnx`, `trt`      |
+| YOLO v11            | `instance-segmentation` | `onnx`, `trt`      |
+| Perception Encoder  | `embedding`             | `torch`            |
+| CLIP                | `embedding`             | `torch`, `onnx`    |
 
 
 ### Registered pre-trained weights
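
The backend tags in the table above are the values accepted when a specific backend is requested at load time. Below is a minimal, hypothetical sketch; the `AutoModel.from_pretrained` entry point and the `backend=` keyword are assumptions for illustration, not taken from this diff.

```python
# Hedged sketch of backend selection (assumed API): the import path, AutoModel
# entry point and backend= keyword are illustrative; only the backend tags
# (`onnx`, `trt`, `torch`) come from the table above.
import numpy as np
from inference_exp import AutoModel  # hypothetical import path

image = np.zeros((640, 640, 3), dtype=np.uint8)  # placeholder input frame

# Ask explicitly for the ONNX package of a YOLOv8 detector; per the table,
# `onnx` and `trt` are the backends available for YOLO v8 object detection.
model = AutoModel.from_pretrained("yolov8n-640", backend="onnx")
predictions = model(image)
```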
@@ -201,12 +201,145 @@ Below you can find a list of model IDs registered in Roboflow weights provider (
 
 **Models:**
 
-* **RFDetr:** `rfdetr-base` (COCO), `rfdetr-large` (COCO) - all `public-open` - [license](./inference_exp/models/rfdetr/LICENSE.txt)
-* **YOLO v8 (object-detection):** `yolov8n-640` (COCO), `yolov8n-1280` (COCO), `yolov8s-640` (COCO), `yolov8s-1280` (COCO), `yolov8m-640` (COCO), `yolov8m-1280` (COCO), `yolov8l-640` (COCO), `yolov8l-1280` (COCO), `yolov8x-640` (COCO), `yolov8x-1280` (COCO) - all `public-open` - [license](./inference_exp/models/yolov8/LICENSE.txt)
-* **YOLO v8 (instance-segmentation):** `yolov8n-seg-640` (COCO), `yolov8n-seg-1280` (COCO), `yolov8s-seg-640` (COCO), `yolov8s-seg-1280` (COCO), `yolov8m-seg-640` (COCO), `yolov8m-seg-1280` (COCO), `yolov8l-seg-640` (COCO), `yolov8l-seg-1280` (COCO), `yolov8x-seg-640` (COCO), `yolov8x-seg-1280` (COCO) - all `public-open` - [license](./inference_exp/models/yolov8/LICENSE.txt)
-* **YOLO v10 (object-detection):** `yolov10n-640` (COCO), `yolov10s-640` (COCO), `yolov10m-640` (COCO), `yolov10b-640` (COCO), `yolov10l-640` (COCO), `yolov10x-640` (COCO) - all `public-open` - [license](./inference_exp/models/yolov10/LICENSE.txt)
-* **Perception Encoder:** `perception-encoder/PE-Core-B16-224`, `perception-encoder/PE-Core-G14-448`, `perception-encoder/PE-Core-L14-336` - all `public-open` - [license](./inference_exp/models/perception_encoder/vision_encoder/LICENSE.weigths.txt)
-* **CLIP:** `clip/RN50`, `clip/RN101`, `clip/RN50x16`, `clip/RN50x4`, `clip/RN50x64`, `clip/ViT-B-16`, `clip/ViT-B-32`, `clip/ViT-L-14-336px`, `clip/ViT-L-14` - all `public-open` - [license](./inference_exp/models/clip/LICENSE.txt)
+<details>
+<summary>👉 <b>RFDetr</b></summary>
+
+**Access level:** `public-open`
+
+**License:** [Apache 2.0](./inference_exp/models/rfdetr/LICENSE.txt)
+
+The following model IDs are registered:
+
+* `rfdetr-base` (trained on COCO dataset)
+
+* `rfdetr-large` (trained on COCO dataset)
+
+</details>
+
+<details>
+<summary>👉 <b>YOLO v8</b></summary>
+
+**Access level:** `public-open`
+
+**License:** [AGPL](./inference_exp/models/yolov8/LICENSE.txt)
+
+The following model IDs are registered for **object detection** task:
+
+* `yolov8n-640` (trained on COCO dataset)
+
+* `yolov8n-1280` (trained on COCO dataset)
+
+* `yolov8s-640` (trained on COCO dataset)
+
+* `yolov8s-1280` (trained on COCO dataset)
+
+* `yolov8m-640` (trained on COCO dataset)
+
+* `yolov8m-1280` (trained on COCO dataset)
+
+* `yolov8l-640` (trained on COCO dataset)
+
+* `yolov8l-1280` (trained on COCO dataset)
+
+* `yolov8x-640` (trained on COCO dataset)
+
+* `yolov8x-1280` (trained on COCO dataset)
+
+
+The following model IDs are registered for **instance segmentation** task:
+
+* `yolov8n-seg-640` (trained on COCO dataset)
+
+* `yolov8n-seg-1280` (trained on COCO dataset)
+
+* `yolov8s-seg-640` (trained on COCO dataset)
+
+* `yolov8s-seg-1280` (trained on COCO dataset)
+
+* `yolov8m-seg-640` (trained on COCO dataset)
+
+* `yolov8m-seg-1280` (trained on COCO dataset)
+
+* `yolov8l-seg-640` (trained on COCO dataset)
+
+* `yolov8l-seg-1280` (trained on COCO dataset)
+
+* `yolov8x-seg-640` (trained on COCO dataset)
+
+* `yolov8x-seg-1280` (trained on COCO dataset)
+
+</details>
+
+
+<details>
+<summary>👉 <b>YOLO v10</b></summary>
+
+**Access level:** `public-open`
+
+**License:** [AGPL](./inference_exp/models/yolov10/LICENSE.txt)
+
+The following model IDs are registered for **object detection** task:
+
+* `yolov10n-640` (trained on COCO dataset)
+
+* `yolov10s-640` (trained on COCO dataset)
+
+* `yolov10m-640` (trained on COCO dataset)
+
+* `yolov10b-640` (trained on COCO dataset)
+
+* `yolov10l-640` (trained on COCO dataset)
+
+* `yolov10x-640` (trained on COCO dataset)
+
+</details>
+
+
+<details>
+<summary>👉 <b>Perception Encoder</b></summary>
+
+**Access level:** `public-open`
+
+**License:** [FAIR Noncommercial Research License](./inference_exp/models/perception_encoder/vision_encoder/LICENSE.weigths.txt)
+
+The following model IDs are registered:
+
+* `perception-encoder/PE-Core-B16-224`
+
+* `perception-encoder/PE-Core-G14-448`
+
+* `perception-encoder/PE-Core-L14-336`
+
+</details>
+
+<details>
+<summary>👉 <b>CLIP</b></summary>
+
+**Access level:** `public-open`
+
+**License:** [MIT](./inference_exp/models/clip/LICENSE.txt)
+
+The following model IDs are registered:
+
+* `clip/RN50`
+
+* `clip/RN101`
+
+* `clip/RN50x16`
+
+* `clip/RN50x4`
+
+* `clip/RN50x64`
+
+* `clip/ViT-B-16`
+
+* `clip/ViT-B-32`
+
+* `clip/ViT-L-14-336px`
+
+* `clip/ViT-L-14`
+
+</details>
 
 ## 📜 Citations
 
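To make the role of these registered IDs concrete, here is a small usage sketch under assumed names; `AutoModel.from_pretrained` and `embed_text` are illustrative, and only the model IDs come from the lists above.

```python
# Hedged sketch (assumed API surface): AutoModel and embed_text are illustrative
# names, not confirmed by this diff; the registered model IDs are taken from the
# lists above.
from inference_exp import AutoModel  # hypothetical import path

detector = AutoModel.from_pretrained("rfdetr-base")        # public-open, COCO-trained detector
clip = AutoModel.from_pretrained("clip/ViT-B-32")          # public-open embedding model

text_embedding = clip.embed_text("a photo of a forklift")  # assumed method name
```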

inference_experimental/inference_exp/models/auto_loaders/auto_negotiation.py

Lines changed: 2 additions & 2 deletions
@@ -140,7 +140,7 @@ def negotiate_model_packages(
     )
     raise NoModelPackagesAvailableError(
         message=f"Auto-negotiation protocol could not select model packages. This situation may be caused by "
-        f"several reasons, with the most common being missing dependencies or too strict requirements "
+        f"several issues, with the most common being missing dependencies or too strict requirements "
         f"stated as parameters of loading function. Below you can find reasons why specific model "
         f"packages were rejected:\n{rejections_summary}\n",
         help_url="https://todo",
@@ -981,7 +981,7 @@ def parse_batch_size(
 
 def parse_backend_type(value: str) -> BackendType:
     try:
-        return BackendType(value)
+        return BackendType(value.lower())
     except ValueError as error:
         supported_backends = [e.value for e in BackendType]
         raise UnknownBackendTypeError(
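
The second change makes backend parsing case-insensitive. Below is a standalone sketch of that behaviour, assuming a `BackendType` enum whose values are the lowercase tags (`torch`, `onnx`, `trt`) used in the README table; the enum members themselves are assumptions.

```python
from enum import Enum


class BackendType(Enum):
    # Assumed members; the diff only confirms that enum values are lowercase strings.
    TORCH = "torch"
    ONNX = "onnx"
    TRT = "trt"


def parse_backend_type(value: str) -> BackendType:
    # Lower-casing first means "ONNX", "Onnx" and "onnx" all resolve to the same member.
    return BackendType(value.lower())


assert parse_backend_type("TRT") is BackendType.TRT
assert parse_backend_type("onnx") is BackendType.ONNX
```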

0 commit comments
