Releases: NVIDIA/nvidia-container-toolkit
v1.14.6
What's Changed
- Add support for extracting the device major number from `/proc/devices` if `nvidia` is used as a device name instead of `nvidia-frontend`. This is required to support the creation of `/dev/char` symlinks with NVIDIA CUDA driver version `550.x`.
- Add support for selecting IMEX channels using the `NVIDIA_IMEX_CHANNELS` environment variable (see the sketch below).
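A minimal sketch of passing the new variable to a container. The comma-separated channel list and the device path in the comment are assumptions rather than documented values; check the toolkit documentation for the exact syntax on your driver version.
```sh
# Sketch only: request IMEX channels 0 and 1 for a CUDA container.
# The value format (comma-separated channel IDs) and the
# /dev/nvidia-caps-imex-channels path are assumptions.
docker run --rm --runtime=nvidia --gpus all \
  -e NVIDIA_IMEX_CHANNELS=0,1 \
  nvidia/cuda:12.3.1-base-ubuntu22.04 \
  ls /dev/nvidia-caps-imex-channels
```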
Changes in libnvidia-container
- Added creation and injection of IMEX channels.
Dependency updates
- Bump github.com/sirupsen/logrus from 1.9.0 to 1.9.3 by @dependabot in #355
- Bump golang.org/x/sys from 0.7.0 to 0.17.0 by @dependabot in #357
- Bump github.com/pelletier/go-toml from 1.9.4 to 1.9.5 by @dependabot in #359
- Bump github.com/fsnotify/fsnotify from 1.5.4 to 1.7.0 by @dependabot in #358
- Bump github.com/urfave/cli/v2 from 2.3.0 to 2.27.1 by @dependabot in #356
- Bump golang.org/x/mod from 0.5.0 to 0.15.0 by @dependabot in #367
- Bump github.com/stretchr/testify from 1.8.1 to 1.8.4 by @dependabot in #366
- Bump github.com/NVIDIA/go-nvml from 0.12.0-1 to 0.12.0-2 by @dependabot in #365
- Bump github.com/opencontainers/runtime-spec from 1.1.0 to 1.2.0 by @dependabot in #368
Full Changelog: v1.14.5...v1.14.6
v1.14.5
What's Changed
- Update dependencies to address a CVE in runc.
- Fix `nvidia-ctk runtime configure --cdi.enabled` for Docker. This was incorrectly setting `experimental = true` instead of setting `features.cdi = true` (see the sketch below).
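A minimal sketch of the corrected behaviour, assuming a default Docker installation on a systemd host and Docker >= 25:
```sh
# With this fix, the command updates the features.cdi setting in
# /etc/docker/daemon.json instead of toggling the experimental flag.
sudo nvidia-ctk runtime configure --runtime=docker --cdi.enabled
sudo systemctl restart docker
```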
Full Changelog: v1.14.4...v1.14.5
v1.15.0-rc.3
What's Changed
- Fix bug in `nvidia-ctk hook update-ldcache` where the default `--ldconfig-path` value was not applied.
Full Changelog: v1.15.0-rc.2...v1.15.0-rc.3
v1.15.0-rc.2
What's Changed
- Extend the `runtime.nvidia.com/gpu` CDI kind to support full GPUs and MIG devices specified by index or UUID.
- Fix bug when specifying `--dev-root` for Tegra-based systems.
- Log explicitly requested runtime mode.
- Remove package dependency on libseccomp.
- Added detection of libnvdxgdmal.so.1 on WSL2.
- Use devRoot to resolve MIG device nodes.
- Fix bug in determining the default nvidia-container-runtime.user config value on SUSE-based systems.
- Add `crun` to the list of configured low-level runtimes.
- Added support for `--ldconfig-path` to the `nvidia-ctk cdi generate` command.
- Fix `nvidia-ctk runtime configure --cdi.enabled` for Docker.
- Add discovery of the GDRCopy device (`gdrdrv`) if the `NVIDIA_GDRCOPY` environment variable of the container is set to `enabled` (see the sketch after this list).
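A hedged sketch of opting a container into GDRCopy device discovery, assuming the `gdrdrv` kernel module is loaded on the host; the image tag is illustrative only.
```sh
# Sketch only: NVIDIA_GDRCOPY=enabled asks the runtime to also inject
# the GDRCopy device node into the container.
docker run --rm --runtime=nvidia --gpus all \
  -e NVIDIA_GDRCOPY=enabled \
  nvidia/cuda:12.3.1-base-ubuntu22.04 \
  ls -l /dev/gdrdrv
```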
Changes in libnvidia-container
- Added detection of libnvdxgdmal.so.1 on WSL2
Changes in the toolkit-container
- Bump CUDA base image version to 12.3.1.
Full Changelog: v1.15.0-rc.1...v1.15.0-rc.2
v1.14.4
What's Changed
- Include `nvidia/nvoptix.bin` in the list of graphics mounts. (#127)
- Include `vulkan/icd.d/nvidia_layers.json` in the list of graphics mounts. (#127)
- Fixed bug in the `nvidia-ctk config` command when using `--set`. The types of config options set in this way are now applied correctly.
- Log explicitly requested runtime mode.
- Remove package dependency on libseccomp. (#110)
- Added detection of libnvdxgdmal.so.1 on WSL2.
- Fix bug in determining the default nvidia-container-runtime.user config value on SUSE-based systems. (#110)
- Add `crun` to the list of configured low-level runtimes.
- Add `--cdi.enabled` option to the `nvidia-ctk runtime configure` command to enable CDI in containerd.
- Added support for `nvidia-ctk runtime configure --enable-cdi` for the `docker` runtime. Note that this requires Docker >= 25. A sketch of both invocations follows this list.
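A hedged sketch of enabling CDI through the updated command, assuming default containerd and Docker config locations on a systemd host:
```sh
# Enable CDI in containerd (default config path: /etc/containerd/config.toml).
sudo nvidia-ctk runtime configure --runtime=containerd --cdi.enabled
sudo systemctl restart containerd

# Enable CDI in Docker >= 25 (default config path: /etc/docker/daemon.json).
sudo nvidia-ctk runtime configure --runtime=docker --enable-cdi
sudo systemctl restart docker
```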
Changes in libnvidia-container
- Added detection of libnvdxgdmal.so.1 on WSL2.
Changes in the toolkit-container
- Bumped CUDA base image version to 12.3.1.
Full Changelog: v1.14.3...v1.14.4
v1.15.0-rc.1
What's Changed
- Skip update of ldcache in containers without ldconfig. The .so.SONAME symlinks are still created.
- Normalize the ldconfig path on use. This automatically adjusts the ldconfig setting applied to ldconfig.real on systems where this exists.
- Include `nvidia/nvoptix.bin` in the list of graphics mounts.
- Include `vulkan/icd.d/nvidia_layers.json` in the list of graphics mounts.
- Add support for `--library-search-paths` to the `nvidia-ctk cdi generate` command (see the sketch after this list).
- Add support for injecting /dev/nvidia-nvswitch* devices if the `NVIDIA_NVSWITCH=enabled` environment variable is specified.
- Added support for `nvidia-ctk runtime configure --enable-cdi` for the `docker` runtime. Note that this requires Docker >= 25.
- Fixed bug in the `nvidia-ctk config` command when using `--set`. The types of config options set in this way are now applied correctly.
- Add `--relative-to` option to the `nvidia-ctk transform root` command. This controls whether the root transformation is applied to host or container paths.
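A minimal sketch of the extended CDI generation command; the extra search path is purely illustrative.
```sh
# Generate a CDI spec, additionally searching /opt/nvidia/lib for driver
# libraries, and write it to the conventional CDI spec location.
sudo nvidia-ctk cdi generate \
  --library-search-paths=/opt/nvidia/lib \
  --output=/etc/cdi/nvidia.yaml
```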
Changes in libnvidia-container
- Fix device permission check when using cgroupv2 (fixes NVIDIA/libnvidia-container#227)
Full Changelog: v1.14.3...v1.15.0-rc.1
v1.14.3
What's Changed
Changes in libnvidia-container
- Bumped version to v1.14.3 for the NVIDIA Container Toolkit release.
Changes in the toolkit-container
- Bumped CUDA base image version to 12.2.2.
Full Changelog: v1.14.2...v1.14.3
v1.14.2
What's Changed
- Fix bug on Tegra-based systems where symlinks were not created in containers.
- Add the `--csv.ignore-pattern` command-line option to the `nvidia-ctk cdi generate` command (see the sketch below).
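A hedged sketch of the new option in CSV mode; the pattern shown is an arbitrary example, not a recommended value, and `--mode=csv` is the assumed way to select CSV discovery.
```sh
# Generate a CDI spec from CSV files (typical on Tegra-based systems),
# skipping CSV entries that match the given pattern.
sudo nvidia-ctk cdi generate \
  --mode=csv \
  --csv.ignore-pattern="*gstreamer*" \
  --output=/etc/cdi/nvidia.yaml
```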
Changes in libnvidia-container
- Bumped version to v1.14.2 for the NVIDIA Container Toolkit release.
Full Changelog: v1.14.1...v1.14.2
v1.14.1
What's Changed
- Fixed bug where the contents of `/etc/nvidia-container-runtime/config.toml` were ignored by the NVIDIA Container Runtime Hook.
Changes in libnvidia-container
- Use `libelf.so` from `elfutils-libelf-devel` on RPM-based systems due to the removal of the mageia repositories hosting pmake and bmake.
Full Changelog: v1.14.0...v1.14.1
v1.14.0
This is a promotion of the (internal) v1.14.0-rc.3 release to GA.
This release of the NVIDIA Container Toolkit adds the following features:
- Improved support for the Container Device Interface (CDI) on Tegra-based systems
- Simplified packaging and distribution. We now only generate `.deb` and `.rpm` packages that are compatible with all supported distributions instead of releasing distribution-specific packages.
NOTE: This will be the last release that includes the `nvidia-container-runtime` and `nvidia-docker2` packages.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
- libnvidia-container 1.14.0
- nvidia-container-toolkit 1.14.0
- nvidia-container-runtime 3.14.0
- nvidia-docker2 2.14.0
The packages for this release are published to the `libnvidia-container` package repositories.
New Contributors
- @elliotcourant made their first contribution in #61
Full Changelog: v1.13.0...v1.14.0
v1.14.0-rc.3
- Added support for generating an OCI hook JSON file to the `nvidia-ctk runtime configure` command.
- Remove installation of the OCI hook JSON from the RPM package.
- Refactored config for `nvidia-container-runtime-hook`.
- Added a `nvidia-ctk config` command which supports setting config options using a `--set` flag (see the sketch after this list).
- Added `--library-search-path` option to the `nvidia-ctk cdi generate` command in `csv` mode. This allows folders where libraries are located to be specified explicitly.
- Updated go-nvlib to support devices which are not present in the PCI device database. This allows the creation of /dev/char symlinks on systems with such devices installed.
- Added `UsesNVGPUModule` info function for more robust platform detection. This is required on Tegra-based systems where libnvidia-ml.so is also supported.
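A hedged sketch of the new config command. The dotted key shown corresponds to the `log-level` option in the `[nvidia-container-runtime]` section of config.toml; whether the result is printed to stdout or written back in place depends on the output-related flags of your nvidia-ctk version.
```sh
# Sketch only: set a single option in the toolkit config via the new --set flag.
# See `nvidia-ctk config --help` for the output flags on your version.
sudo nvidia-ctk config --set nvidia-container-runtime.log-level=debug
```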
Changes from libnvidia-container v1.14.0-rc.3
- Generate the `nvc.h` header file automatically so that the version does not need to be explicitly bumped.
Changes in the toolkit-container
- Set `NVIDIA_VISIBLE_DEVICES=void` to prevent injection of NVIDIA devices and drivers into the NVIDIA Container Toolkit container.
v1.14.0-rc.2
- Fix bug causing incorrect nvidia-smi symlink to be created on WSL2 systems with multiple driver roots.
- Remove dependency on coreutils when installing package on RPM-based systems.
- Create output folders if required when running `nvidia-ctk runtime configure`.
- Generate default config as post-install step.
- Added support for detecting GSP firmware at custom paths when generating CDI specifications.
- Added logic to skip the extraction of image requirements if `NVIDIA_DISABLE_REQUIRES` is set to `true` (see the sketch below).
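A minimal sketch of bypassing the image requirement checks, useful for example when an image's `cuda>=` constraint is stricter than the installed driver; the image tag is illustrative only.
```sh
# Sketch only: skip the cuda>=/driver>= requirement checks encoded in the image.
docker run --rm --runtime=nvidia --gpus all \
  -e NVIDIA_DISABLE_REQUIRES=true \
  nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```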
Changes from libnvidia-container v1.14.0-rc.2
- Include the Shared Compiler Library (libnvidia-gpucomp.so) in the list of compute libraries.
Changes in the toolkit-container
- Ensure that common envvars have higher priority when configuring the container engines.
- Bump CUDA base image version to 12.2.0.
- Remove installation of the nvidia-experimental runtime. This is superseded by the NVIDIA Container Runtime in CDI mode.
v1.14.0-rc.1
- chore(cmd): Fixing minor spelling error. by @elliotcourant in #61
- Add support for updating containerd configs to the `nvidia-ctk runtime configure` command.
- Create file in `/etc/ld.so.conf.d` with permissions `644` to support non-root containers.
- Generate CDI specification files with `644` permissions to allow rootless applications (e.g. podman).
- Add `nvidia-ctk cdi list` command to show the known CDI devices (see the sketch after this list).
- Add support for generating merged devices (e.g. the `all` device) to the nvcdi API.
- Use a `.` pattern to locate libcuda.so when generating a CDI specification to support platforms where a patch version is not specified.
- Update go-nvlib to skip devices that are not MIG capable when generating CDI specifications.
- Add `nvidia-container-runtime-hook.path` config option to specify the NVIDIA Container Runtime Hook path explicitly.
- Fix bug in creation of `/dev/char` symlinks by failing the operation if kernel modules are not loaded.
- Add option to load kernel modules when creating device nodes.
- Add option to create device nodes when creating `/dev/char` symlinks.
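A minimal sketch of the new listing command; the device names in the comment are typical of a generated NVIDIA CDI spec rather than guaranteed output.
```sh
# List the CDI devices known from the specs in /etc/cdi and /var/run/cdi.
# On a single-GPU system this typically includes entries such as
# nvidia.com/gpu=0 and nvidia.com/gpu=all.
nvidia-ctk cdi list
```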
Changes from libnvidia-container v1.14.0-rc.1
- Support OpenSSL 3 with the Encrypt/Decrypt library
Changes in the toolkit-container
- Bump CUDA base image version to 12.1.1.
- Unify environment variables used to configure runtimes.