Commit cb264f8

Remove docs related to in-tree GPU support

In-tree GPU support is completely removed in release 1.11. This PR removes the related docs from the release-1.11 branch. xref: kubernetes/kubernetes#61498
1 parent 8cc303d commit cb264f8

File tree

3 files changed: +0 −69 lines changed


docs/concepts/configuration/manage-compute-resources-container.md (−2 lines)

````diff
@@ -206,12 +206,10 @@ $ kubectl describe nodes e2e-test-minion-group-4lw4
 Name: e2e-test-minion-group-4lw4
 [ ... lines removed for clarity ...]
 Capacity:
-alpha.kubernetes.io/nvidia-gpu: 0
 cpu: 2
 memory: 7679792Ki
 pods: 110
 Allocatable:
-alpha.kubernetes.io/nvidia-gpu: 0
 cpu: 1800m
 memory: 7474992Ki
 pods: 110
````

docs/tasks/administer-cluster/extended-resource-node.md (−2 lines)

````diff
@@ -82,7 +82,6 @@ The output shows that the Node has a capacity of 4 dongles:
 
 ```
 "capacity": {
-"alpha.kubernetes.io/nvidia-gpu": "0",
 "cpu": "2",
 "memory": "2049008Ki",
 "example.com/dongle": "4",
@@ -98,7 +97,6 @@ Once again, the output shows the dongle resource:
 
 ```yaml
 Capacity:
-alpha.kubernetes.io/nvidia-gpu: 0
 cpu: 2
 memory: 2049008Ki
 example.com/dongle: 4
````
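The distinction these hunks rely on is that `example.com/dongle` is an extended resource (a fully qualified name outside the `kubernetes.io` domain), while the removed `alpha.kubernetes.io/nvidia-gpu` was an in-tree resource name. A minimal sketch of that classification, using the capacity map from the hunk above (the helper function is illustrative, not a Kubernetes API):

```python
# Classify node capacity entries the way the docs distinguish them:
# bare names (cpu, memory, pods) and *.kubernetes.io names are built-in;
# extended resources carry a non-kubernetes.io domain prefix.

def is_extended_resource(name: str) -> bool:
    """Return True for fully qualified names outside the kubernetes.io domain."""
    if "/" not in name:
        return False  # bare names like "cpu" are built-in
    domain = name.split("/", 1)[0]
    return domain != "kubernetes.io" and not domain.endswith(".kubernetes.io")

# Capacity map copied from the diff hunk above (post-removal state).
capacity = {
    "cpu": "2",
    "memory": "2049008Ki",
    "example.com/dongle": "4",
}

extended = [r for r in capacity if is_extended_resource(r)]
print(extended)  # only the dongle resource qualifies
```

Note that the removed `alpha.kubernetes.io/nvidia-gpu` entry would be classified as built-in by this rule, which is why it disappeared along with the in-tree implementation while `example.com/dongle` stays.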

docs/tasks/manage-gpus/scheduling-gpus.md (−65 lines)

````diff
@@ -143,68 +143,3 @@ spec:
 
 This will ensure that the pod will be scheduled to a node that has the GPU type
 you specified.
-
-## v1.6 and v1.7
-To enable GPU support in 1.6 and 1.7, a special **alpha** feature gate
-`Accelerators` has to be set to true across the system:
-`--feature-gates="Accelerators=true"`. It also requires using the Docker
-Engine as the container runtime.
-
-Further, the Kubernetes nodes have to be pre-installed with NVIDIA drivers.
-Kubelet will not detect NVIDIA GPUs otherwise.
-
-When you start Kubernetes components after all the above conditions are true,
-Kubernetes will expose `alpha.kubernetes.io/nvidia-gpu` as a schedulable
-resource.
-
-You can consume these GPUs from your containers by requesting
-`alpha.kubernetes.io/nvidia-gpu` just like you request `cpu` or `memory`.
-However, there are some limitations in how you specify the resource requirements
-when using GPUs:
-- GPUs are only supposed to be specified in the `limits` section, which means:
-  * You can specify GPU `limits` without specifying `requests` because
-    Kubernetes will use the limit as the request value by default.
-  * You can specify GPU in both `limits` and `requests` but these two values
-    must be equal.
-  * You cannot specify GPU `requests` without specifying `limits`.
-- Containers (and pods) do not share GPUs. There's no overcommitting of GPUs.
-- Each container can request one or more GPUs. It is not possible to request a
-  fraction of a GPU.
-
-When using `alpha.kubernetes.io/nvidia-gpu` as the resource, you also have to
-mount host directories containing NVIDIA libraries (libcuda.so, libnvidia.so,
-etc.) into the container.
-
-Here's an example:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: cuda-vector-add
-spec:
-  restartPolicy: OnFailure
-  containers:
-    - name: cuda-vector-add
-      # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile
-      image: "k8s.gcr.io/cuda-vector-add:v0.1"
-      resources:
-        limits:
-          alpha.kubernetes.io/nvidia-gpu: 1 # requesting 1 GPU
-      volumeMounts:
-        - name: "nvidia-libraries"
-          mountPath: "/usr/local/nvidia/lib64"
-  volumes:
-    - name: "nvidia-libraries"
-      hostPath:
-        path: "/usr/lib/nvidia-375"
-```
-
-The `Accelerators` feature gate and `alpha.kubernetes.io/nvidia-gpu` resource
-work on 1.8 and 1.9 as well. They will be deprecated in 1.10 and removed in
-1.11.
-
-## Future
-- Support for hardware accelerators in Kubernetes is still in alpha.
-- Better APIs will be introduced to provision and consume accelerators in a scalable manner.
-- Kubernetes will automatically ensure that applications consuming GPUs get the best possible performance.
````
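For context, the device plugin mechanism that replaces the removed in-tree support exposes NVIDIA GPUs under the `nvidia.com/gpu` resource name instead of `alpha.kubernetes.io/nvidia-gpu`. A sketch of the equivalent request in the device-plugin world, assuming the NVIDIA device plugin is deployed on the cluster (the image is the same example image referenced in the removed docs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1 # one whole GPU; fractional requests are still not allowed
```

Note that the `hostPath` volume mounts for the NVIDIA libraries are no longer part of the pod spec; with the device plugin approach, driver setup is handled on the node side rather than in each workload.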
