@@ -143,68 +143,3 @@ spec:
This ensures that the pod will be scheduled to a node that has the GPU type
you specified.
-
- ## v1.6 and v1.7
- To enable GPU support in 1.6 and 1.7, a special **alpha** feature gate
- `Accelerators` has to be set to true across the system:
- `--feature-gates="Accelerators=true"`. It also requires using the Docker
- Engine as the container runtime.
-
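- In these releases the gate is an ordinary command-line flag, so it has to be
- passed to each component that honors it. A minimal sketch for the kubelet
- (other flags elided; the exact flag set varies by installation):
-
- ```shell
- # Sketch: start the kubelet with the alpha Accelerators gate enabled.
- # Assumes the NVIDIA drivers are already installed on the node (see below).
- kubelet --feature-gates="Accelerators=true" <other kubelet flags>
- ```
-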
- Further, the Kubernetes nodes have to be pre-installed with NVIDIA drivers.
- Kubelet will not detect NVIDIA GPUs otherwise.
-
- When you start Kubernetes components after all the above conditions are true,
- Kubernetes will expose `alpha.kubernetes.io/nvidia-gpu` as a schedulable
- resource.
-
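- Once the gate is enabled and the drivers are in place, the GPUs appear in the
- node's capacity. A sketch of the relevant fields (hypothetical node; the GPU
- count reflects the installed hardware):
-
- ```yaml
- # Trimmed from output along the lines of `kubectl get node <node-name> -o yaml`;
- # all values here are hypothetical.
- status:
-   capacity:
-     alpha.kubernetes.io/nvidia-gpu: "2"
-     cpu: "8"
-     memory: 32Gi
- ```
-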
- You can consume these GPUs from your containers by requesting
- `alpha.kubernetes.io/nvidia-gpu` just like you request `cpu` or `memory`.
- However, there are some limitations in how you specify the resource
- requirements when using GPUs:
- - GPUs are only supposed to be specified in the `limits` section, which means
-   (see the sketch after this list):
-   * You can specify GPU `limits` without specifying `requests` because
-     Kubernetes will use the limit as the request value by default.
-   * You can specify GPU in both `limits` and `requests` but these two values
-     must be equal.
-   * You cannot specify GPU `requests` without specifying `limits`.
- - Containers (and pods) do not share GPUs. There's no overcommitting of GPUs.
- - Each container can request one or more GPUs. It is not possible to request a
-   fraction of a GPU.
-
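- As an illustration of these rules, a sketch of a container-level `resources`
- stanza (hypothetical GPU count):
-
- ```yaml
- # Valid: limits only; Kubernetes defaults the request to the limit.
- resources:
-   limits:
-     alpha.kubernetes.io/nvidia-gpu: 2
- # An explicit request would also be valid, but only with the same value;
- # a request without a limit, or a request != limit, is rejected.
- ```
-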
- When using `alpha.kubernetes.io/nvidia-gpu` as the resource, you also have to
- mount host directories containing NVIDIA libraries (libcuda.so, libnvidia.so,
- etc.) into the container.
-
- Here's an example:
-
- ```yaml
- apiVersion: v1
- kind: Pod
- metadata:
-   name: cuda-vector-add
- spec:
-   restartPolicy: OnFailure
-   containers:
-     - name: cuda-vector-add
-       # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile
-       image: "k8s.gcr.io/cuda-vector-add:v0.1"
-       resources:
-         limits:
-           alpha.kubernetes.io/nvidia-gpu: 1 # requesting 1 GPU
-       volumeMounts:
-         - name: "nvidia-libraries"
-           mountPath: "/usr/local/nvidia/lib64"
-   volumes:
-     - name: "nvidia-libraries"
-       hostPath:
-         path: "/usr/lib/nvidia-375"
- ```
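-
- Note that the host path in this example is tied to the installed driver
- version (375 here) and has to match the library location on your nodes.
- Assuming the manifest is saved to a local file, the pod can be created with
- `kubectl create -f` as usual.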
-
- The `Accelerators` feature gate and the `alpha.kubernetes.io/nvidia-gpu`
- resource work on 1.8 and 1.9 as well. They will be deprecated in 1.10 and
- removed in 1.11.
-
- ## Future
- - Support for hardware accelerators in Kubernetes is still in alpha.
- - Better APIs will be introduced to provision and consume accelerators in a scalable manner.
- - Kubernetes will automatically ensure that applications consuming GPUs get the best possible performance.