Info
$ cat /etc/docker/daemon.json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "bip": "192.168.99.1/24",
  "default-shm-size": "1G",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "1"
  },
  "default-ulimits": {
    "memlock": {
      "hard": -1,
      "name": "memlock",
      "soft": -1
    },
    "stack": {
      "hard": 67108864,
      "name": "stack",
      "soft": 67108864
    }
  }
}
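For reference, edits to /etc/docker/daemon.json only take effect after the daemon is restarted. A quick way to apply the change and confirm the driver (assuming systemd manages the Docker service):

$ sudo systemctl restart docker
$ docker info -f '{{.CgroupDriver}}'
cgroupfs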
$ kind --version
kind version 0.23.0
$ docker --version
Docker version 26.1.4, build 5650f9b
$ docker info
Client: Docker Engine - Community
 Version:    26.1.4
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.14.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.27.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 81
  Running: 4
  Paused: 0
  Stopped: 77
 Images: 111
 Server Version: 26.1.4
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 nvidia
 Default Runtime: nvidia
 Init Binary: docker-init
 containerd version: d2d58213f83a351ca8f528a95fbd145f5654e957
 runc version: v1.1.12-0-g51d5e94
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.5.0-35-generic
 Operating System: Ubuntu 22.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 12
 Total Memory: 31.33GiB
 Name: mbana-1
 ID: 26df3d83-eb15-4d8c-914e-4284e0aca1b6
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
Config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
# Control Plane
- role: control-plane
  # Version list and SHA hashes available at https://github.com/kubernetes-sigs/kind/releases.
  image: &image kindest/node:v1.30.0@sha256:047357ac0cfea04663786a612ba1eaba9702bef25227a794b52890dd8bcd692e
  kubeadmConfigPatches:
  - |
    kind: KubeletConfiguration
    CgroupDriver: cgroupfs
    cgroupDriver: cgroupfs
    kubeletExtraArgs:
      CgroupDriver: cgroupfs
      cgroupDriver: cgroupfs
# Misc worker node
- role: worker
  image: *image
  kubeadmConfigPatches:
  - |
    kind: KubeletConfiguration
    CgroupDriver: cgroupfs
    cgroupDriver: cgroupfs
    kubeletExtraArgs:
      CgroupDriver: cgroupfs
      cgroupDriver: cgroupfs
- &worker
  role: worker
  labels:
    kind.bana.io/nodes: e2e
  image: *image
  kubeadmConfigPatches:
  - |
    kind: KubeletConfiguration
    CgroupDriver: cgroupfs
    cgroupDriver: cgroupfs
    kubeletExtraArgs:
      CgroupDriver: cgroupfs
      cgroupDriver: cgroupfs
    ---
    kind: JoinConfiguration
    nodeRegistration:
      taints:
      - key: kind.bana.io/nodes
        effect: NoSchedule
- *worker
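For completeness, this is how I create the cluster; kind-config.yaml is just a placeholder name for the config above, and --retain keeps the node containers around after a failed bootstrap so their logs can still be inspected:

$ kind create cluster --config kind-config.yaml --retain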
Logs
These are the noteworthy logs:
...
---
CgroupDriver: cgroupfs
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: cgroupfs
cgroupRoot: /kubelet
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
failSwapOn: false
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
kubeletExtraArgs:
  CgroupDriver: cgroupfs
  cgroupDriver: cgroupfs
---
...
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
...
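The complete logs can be pulled off the nodes with kind's log export (this assumes the node containers still exist, e.g. the cluster was created with --retain):

$ kind export logs ./kind-logs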
What gives? Why can't I use cgroupfs?
$ docker info -f {{.CgroupDriver}}
cgroupfs
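For what it's worth, the kubelet configuration a node actually ends up running with can be read straight out of the node container; kind-control-plane assumes the default cluster name, and /var/lib/kubelet/config.yaml is the usual kubeadm location:

$ docker exec kind-control-plane cat /var/lib/kubelet/config.yaml
$ docker exec kind-control-plane journalctl -u kubelet --no-pager | tail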