### Contributing guidelines

- [x] I've read the contributing guidelines and wholeheartedly agree

### I've found a bug and checked that ...

- [x] ... the documentation does not mention anything about my problem
- [x] ... there are no open or closed issues that are related to my problem
### Description
I have been following the current documentation for using `RUN --device` in a Dockerfile, but I run into the following error when building the image with `docker build -f Dockerfile .`:

```
ERROR: failed to build: failed to solve: failed to load LLB: device nvidia.com/gpu=all is requested by the build but not allowed
```

If I try to allow the builder to access the device with `docker build --allow=device=nvidia.com/gpu=all -f Dockerfile .`, I get the following error instead:

```
ERROR: failed to build: failed to solve: granting entitlement device is not allowed by build daemon configuration
```

I can see in the output of `docker buildx inspect` (see below) that the GPUs I have configured are not automatically allowed. Is this a bug, or could the documentation for the GPU example of `RUN --device` be clarified to show how to allow the GPU?
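For reference, my understanding (this is an assumption on my part, based on my reading of the BuildKit CDI documentation, and may be wrong) is that a device only shows up as `Automatically allowed: true` when its CDI spec carries an autoallow annotation, something like:

```json
{
  "cdiVersion": "0.6.0",
  "kind": "nvidia.com/gpu",
  "annotations": {
    "org.mobyproject.buildkit.device.autoallow": "true"
  }
}
```

(abridged; the generated spec of course also contains the `devices` list). If that is indeed the intended mechanism, it would be great if the `RUN --device` documentation mentioned it.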
```console
$ docker buildx inspect default
Name:          default
Driver:        docker
Last Activity: 2025-07-04 08:34:31 +0000 UTC

Nodes:
Name:             default
Endpoint:         default
Status:           running
BuildKit version: v0.23.2
Platforms:        linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
Labels:
 org.mobyproject.buildkit.worker.moby.host-gateway-ip: 172.17.0.1
Devices:
 Name:                  nvidia.com/gpu=0
 Automatically allowed: false
 Name:                  nvidia.com/gpu=GPU-cf7e763a-628e-84d7-d146-62824b805c1f
 Automatically allowed: false
 Name:                  nvidia.com/gpu=all
 Automatically allowed: false
GC Policy rule#0:
 All:            false
 Filters:        type==source.local,type==exec.cachemount,type==source.git.checkout
 Keep Duration:  48h0m0s
 Max Used Space: 4.375GiB
GC Policy rule#1:
 All:            false
 Keep Duration:  1440h0m0s
 Reserved Space: 31.66GiB
 Max Used Space: 4.657GiB
 Min Free Space: 15.83GiB
GC Policy rule#2:
 All:            false
 Reserved Space: 31.66GiB
 Max Used Space: 4.657GiB
 Min Free Space: 15.83GiB
GC Policy rule#3:
 All:            true
 Reserved Space: 31.66GiB
 Max Used Space: 4.657GiB
 Min Free Space: 15.83GiB
```
The CDI configuration was created by running:

```console
$ sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.json
```
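As a possible workaround I considered a dedicated builder, sketched below. Note the assumptions here: I am assuming buildkitd accepts `device` as an `--allow-insecure-entitlement` value (the way it accepts `network.host` and `security.insecure`), and that a docker-container builder can see the host's CDI specs; I have not verified either.

```console
# Hypothetical: start a builder whose buildkitd allows the device entitlement
# (the "device" flag value is assumed, not confirmed against the buildkitd docs).
$ docker buildx create --name gpu-builder --driver docker-container \
    --buildkitd-flags '--allow-insecure-entitlement device'

# Then request the entitlement explicitly at build time.
$ docker buildx build --builder gpu-builder \
    --allow device=nvidia.com/gpu=all -f Dockerfile .
```

Even if that works, it seems like the default `docker` driver should have a documented way to grant the entitlement.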
### Expected behaviour

The `docker build -f Dockerfile .` command runs against the Dockerfile without any errors.
### Actual behaviour

The build fails with the following error:

```
ERROR: failed to build: failed to solve: failed to load LLB: device nvidia.com/gpu=all is requested by the build but not allowed
```
### Buildx version

```
github.com/docker/buildx v0.25.0 faaea65
```
### Docker info

```
Client: Docker Engine - Community
 Version:    28.3.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.25.0
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.38.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 2
  Running: 2
  Paused: 0
  Stopped: 0
 Images: 8
 Server Version: 28.3.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 CDI spec directories:
  /etc/cdi
  /var/run/cdi
 Discovered Devices:
  cdi: nvidia.com/gpu=0
  cdi: nvidia.com/gpu=GPU-cf7e763a-628e-84d7-d146-62824b805c1f
  cdi: nvidia.com/gpu=all
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 nvidia runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05044ec0a9a75232cad458027ca83437aae3f4da
 runc version: v1.2.5-0-g59923ef
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.8.0-1031-aws
 Operating System: Ubuntu 24.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 30.98GiB
 Name: ip-10-200-3-199
 ID: 89391cb8-ae75-4089-859f-472af634ad22
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  ::1/128
  127.0.0.0/8
 Live Restore Enabled: false
```
### Builders list

```
NAME/NODE     DRIVER/ENDPOINT   STATUS    BUILDKIT   PLATFORMS
default*      docker
 \_ default    \_ default       running   v0.23.2    linux/amd64 (+3), linux/386
```
### Configuration

```dockerfile
# syntax=docker/dockerfile:1.17-labs
FROM nvidia/cuda:12.9.1-cudnn-runtime-ubuntu24.04

RUN --device=nvidia.com/gpu=all nvidia-smi
```
### Build logs
### Additional info

No response