
Commit fbf5c26

tengqm authored and Misty Linville committed
Remove docs related to in-tree support to GPU (#8294)
* Remove docs related to in-tree support to GPU

  In-tree support for GPUs was completely removed in release 1.11. This PR
  removes the related docs from the release-1.11 branch.

  xref: kubernetes/kubernetes#61498

* Update the content touched by this PR to Hugo syntax

Signed-off-by: Misty Stanley-Jones <[email protected]>
1 parent 88e0fba commit fbf5c26

File tree: 3 files changed, +4 -82 lines


content/en/docs/concepts/configuration/manage-compute-resources-container.md (+4 -10)
@@ -144,9 +144,7 @@ When using Docker:
   multiplied by 100. The resulting value is the total amount of CPU time that a container can use
   every 100ms. A container cannot use more than its share of CPU time during this interval.
 
-  {{< note >}}
-  **Note**: The default quota period is 100ms. The minimum resolution of CPU quota is 1ms.
-  {{< /note >}}
+  {{< note >}}**Note**: The default quota period is 100ms. The minimum resolution of CPU quota is 1ms.{{< /note >}}
 
 - The `spec.containers[].resources.limits.memory` is converted to an integer, and
   used as the value of the
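
Aside, not part of the commit: the CPU-limit conversion described in the context lines above works out as in this minimal sketch. A limit of `cpu: 500m` is 500 millicores; multiplied by 100 it yields a CFS quota of 50000 microseconds per 100ms period. The `busybox` image and `sleep` command below are arbitrary placeholders.

```shell
# 500m CPU limit -> --cpu-quota=50000 against the default --cpu-period=100000,
# i.e. the container may use at most 50ms of CPU time per 100ms window.
docker run --cpu-period=100000 --cpu-quota=50000 busybox sleep 3600
```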
@@ -209,12 +207,10 @@ $ kubectl describe nodes e2e-test-minion-group-4lw4
 Name: e2e-test-minion-group-4lw4
 [ ... lines removed for clarity ...]
 Capacity:
- alpha.kubernetes.io/nvidia-gpu: 0
  cpu: 2
  memory: 7679792Ki
  pods: 110
 Allocatable:
- alpha.kubernetes.io/nvidia-gpu: 0
  cpu: 1800m
  memory: 7474992Ki
  pods: 110
@@ -300,10 +296,10 @@ Container in the Pod was terminated and restarted five times.
 You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status
 of previously terminated Containers:
 
-```shell
+```shell{% raw %}
 [13:59:01] $ kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99
 Container Name: simmemleak
-LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]
+LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]{% endraw %}
 ```
 
 You can see that the Container was terminated because of `reason:OOM Killed`,
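
Aside, not part of the commit: the `{% raw %}`/`{% endraw %}` wrapper added above keeps the docs site's template engine from interpreting the `{{ ... }}` go-template markup. A hedged alternative that avoids `{{` entirely is kubectl's JSONPath output; the pod name is taken from the example above.

```shell
# Same lookup with -o jsonpath instead of -o go-template.
kubectl get pod simmemleak-hra99 -o jsonpath='{range .status.containerStatuses[*]}{"Container Name: "}{.name}{"\nLastState: "}{.lastState}{"\n"}{end}'
```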
@@ -545,6 +541,4 @@ consistency across providers and platforms.
 
 * [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core)
 
-{{% /capture %}}
-
-
+{{% /capture %}}

content/en/docs/tasks/administer-cluster/extended-resource-node.md (-5)
@@ -83,7 +83,6 @@ The output shows that the Node has a capacity of 4 dongles:
 
 ```
 "capacity": {
-  "alpha.kubernetes.io/nvidia-gpu": "0",
   "cpu": "2",
   "memory": "2049008Ki",
   "example.com/dongle": "4",
@@ -99,7 +98,6 @@ Once again, the output shows the dongle resource:
 
 ```yaml
 Capacity:
- alpha.kubernetes.io/nvidia-gpu: 0
  cpu: 2
  memory: 2049008Ki
  example.com/dongle: 4
@@ -205,6 +203,3 @@ kubectl describe node <your-node-name> | grep dongle
 
 
 {{% /capture %}}
-
-
-

content/en/docs/tasks/manage-gpus/scheduling-gpus.md (-67)
@@ -152,70 +152,3 @@ spec:
 
 This will ensure that the pod will be scheduled to a node that has the GPU type
 you specified.
-
-## v1.6 and v1.7
-To enable GPU support in 1.6 and 1.7, a special **alpha** feature gate
-`Accelerators` has to be set to true across the system:
-`--feature-gates="Accelerators=true"`. It also requires using the Docker
-Engine as the container runtime.
-
-Further, the Kubernetes nodes have to be pre-installed with NVIDIA drivers.
-Kubelet will not detect NVIDIA GPUs otherwise.
-
-When you start Kubernetes components after all the above conditions are true,
-Kubernetes will expose `alpha.kubernetes.io/nvidia-gpu` as a schedulable
-resource.
-
-You can consume these GPUs from your containers by requesting
-`alpha.kubernetes.io/nvidia-gpu` just like you request `cpu` or `memory`.
-However, there are some limitations in how you specify the resource requirements
-when using GPUs:
-- GPUs are only supposed to be specified in the `limits` section, which means:
-  * You can specify GPU `limits` without specifying `requests` because
-    Kubernetes will use the limit as the request value by default.
-  * You can specify GPU in both `limits` and `requests` but these two values
-    must be equal.
-  * You cannot specify GPU `requests` without specifying `limits`.
-- Containers (and pods) do not share GPUs. There's no overcommitting of GPUs.
-- Each container can request one or more GPUs. It is not possible to request a
-  fraction of a GPU.
-
-When using `alpha.kubernetes.io/nvidia-gpu` as the resource, you also have to
-mount host directories containing NVIDIA libraries (libcuda.so, libnvidia.so
-etc.) to the container.
-
-Here's an example:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: cuda-vector-add
-spec:
-  restartPolicy: OnFailure
-  containers:
-    - name: cuda-vector-add
-      # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile
-      image: "k8s.gcr.io/cuda-vector-add:v0.1"
-      resources:
-        limits:
-          alpha.kubernetes.io/nvidia-gpu: 1 # requesting 1 GPU
-      volumeMounts:
-        - name: "nvidia-libraries"
-          mountPath: "/usr/local/nvidia/lib64"
-  volumes:
-    - name: "nvidia-libraries"
-      hostPath:
-        path: "/usr/lib/nvidia-375"
-```
-
-The `Accelerators` feature gate and `alpha.kubernetes.io/nvidia-gpu` resource
-work on 1.8 and 1.9 as well. They will be deprecated in 1.10 and removed in
-1.11.
-
-## Future
-- Support for hardware accelerators in Kubernetes is still in alpha.
-- Better APIs will be introduced to provision and consume accelerators in a scalable manner.
-- Kubernetes will automatically ensure that applications consuming GPUs get the best possible performance.
-
-{{% /capture %}}

0 commit comments

Comments
 (0)