site/content/docs/user/configuration.md
There are two ways to map GPUs into a KinD cluster. The first is using the `devices` API.
As a prerequisite, install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) on the host.
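On an Ubuntu/Debian host with NVIDIA's apt repository already configured, installation can look like the following (a sketch; follow the linked install guide for your distribution and package manager):

```shell
# Install the NVIDIA Container Toolkit (assumes NVIDIA's apt repo is configured)
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
```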
Using `devices` for GPU support requires Docker v25 or later. A [CDI specification](https://github.com/container-orchestrated-devices/container-device-interface) must be generated for your device. For NVIDIA GPU devices, see the notes [here](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#container-device-interface-cdi-support).
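For NVIDIA GPUs, the CDI specification can be generated with `nvidia-ctk` (a sketch based on the NVIDIA Container Toolkit documentation; the output path is the conventional location Docker consults for CDI specs):

```shell
# Generate a CDI spec describing the GPUs on this host
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
# List the device names the spec defines (e.g. nvidia.com/gpu=all)
nvidia-ctk cdi list
```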
GPU devices can be mapped to KinD node containers with the `devices` API:
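A minimal cluster config using this approach might look like the following sketch; the CDI device name (`nvidia.com/gpu=all`) and the exact field layout are assumptions, so check the generated CDI spec and the kind configuration reference for your version:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # CDI device name taken from the generated spec; "all" maps every GPU
  devices:
  - nvidia.com/gpu=all
```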
GPUs can also be mapped using the `extraMounts` API.
Steps to enable this:
1. Add nvidia as your default runtime in `/etc/docker/daemon.json`. If you have the [NVIDIA Container Toolkit installed](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html), this can be done with `sudo nvidia-ctk runtime configure --runtime=docker --set-as-default`
1. Restart Docker (as necessary)
1. Set `accept-nvidia-visible-devices-as-volume-mounts = true` in `/etc/nvidia-container-runtime/config.toml`
1. Add the `extraMounts` to any kind nodes you want to have access to all GPUs in the system:
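For example (a sketch: the special `containerPath` convention comes from the NVIDIA container runtime's volume-mount device selection, and `/dev/null` is only a placeholder mount source):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
    # The containerPath tells the nvidia runtime which GPUs to expose;
    # "all" exposes every GPU on the host
    - hostPath: /dev/null
      containerPath: /var/run/nvidia-container-devices/all
```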