
v1.13.0

@elezar released this 12 Apr 14:19

This is a promotion of the v1.13.0-rc.3 release to GA.

This release of the NVIDIA Container Toolkit adds the following features:

  • Improved support for the Container Device Interface (CDI) specifications for GPU devices when using the NVIDIA Container Toolkit in the context of the GPU Operator.
  • Added generation of CDI specifications on WSL2-based systems using the nvidia-ctk cdi generate command. This is now the recommended mechanism for using GPUs on WSL2, and podman is the recommended container engine.
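
A minimal sketch of this workflow on a WSL2 guest, assuming the toolkit is installed and /etc/cdi is used as the spec directory (the output path is illustrative):

```shell
# Generate a CDI specification describing the available GPUs:
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Request the device from podman by its fully-qualified CDI name:
podman run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi
```

The --device flag here uses podman's native CDI support, which is why podman is the recommended engine for this mode.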

NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:

The packages for this release are published to the libnvidia-container package repositories.

Full Changelog: v1.12.0...v1.13.0

v1.13.0-rc.3

  • Only initialize NVML for modes that require it when running nvidia-ctk cdi generate.
  • Prefer /run over /var/run when locating nvidia-persistenced and nvidia-fabricmanager sockets.
  • Fix the generation of CDI specifications for management containers when the driver libraries are not in the LDCache.
  • Add transformers to deduplicate and simplify CDI specifications.
  • Generate a simplified CDI specification by default. Entities defined in the common edits of a spec are no longer repeated in individual device definitions.
  • Return an error from the nvcdi.New constructor instead of panicking.
  • Detect XOrg libraries for injection and CDI spec generation.
  • Add nvidia-ctk system create-device-nodes command to create control devices.
  • Add nvidia-ctk cdi transform command to apply transforms to CDI specifications.
  • Add --vendor and --class options to nvidia-ctk cdi generate
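
The new --vendor and --class options control the qualifier used for device names in the generated spec; a sketch with illustrative values (example.com/accelerator is not a default):

```shell
# Override the vendor and class portions of generated CDI device names:
nvidia-ctk cdi generate --vendor=example.com --class=accelerator \
    --output=/etc/cdi/example.yaml

# Devices in the resulting spec are then addressed as
# example.com/accelerator=<device> instead of nvidia.com/gpu=<device>.
```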

Changes from libnvidia-container v1.13.0-rc.3

  • Fix segmentation fault when RPC initialization fails.
  • Build centos variants of the NVIDIA Container Library with static libtirpc v1.3.2.
  • Remove make targets for fedora35 as the centos8 packages are compatible.

Changes in the toolkit-container

  • Add nvidia-container-runtime.modes.cdi.annotation-prefixes config option to override the CDI annotation prefixes that are read.
  • Create device nodes when generating CDI specification for management containers.
  • Add nvidia-container-runtime.runtimes config option to set the low-level runtime for the NVIDIA Container Runtime
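
Both options above map to sections of the toolkit's config.toml; a sketch assuming the default location /etc/nvidia-container-toolkit/config.toml (the listed values are illustrative, not defaults):

```toml
[nvidia-container-runtime]
# Candidate low-level OCI runtimes, tried in order.
runtimes = ["runc", "crun"]

[nvidia-container-runtime.modes.cdi]
# Annotation prefixes inspected for CDI device requests.
annotation-prefixes = ["cdi.k8s.io/"]
```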

v1.13.0-rc.2

  • Don't fail chmod hook if paths are not injected
  • Only create by-path symlinks if CDI devices are actually requested.
  • Fix possible blank nvidia-ctk path in generated CDI specifications
  • Fix error in postun scriptlet on RPM-based systems
  • Only check NVIDIA_VISIBLE_DEVICES for environment variables if no annotations are specified.
  • Add cdi.default-kind config option for constructing fully-qualified CDI device names in CDI mode
  • Add support for accept-nvidia-visible-devices-envvar-unprivileged config setting in CDI mode
  • Add nvidia-container-runtime-hook.skip-mode-detection config option to bypass mode detection. This allows legacy and cdi mode, for example, to be used at the same time.
  • Add support for generating CDI specifications for GDS and MOFED devices
  • Ensure CDI specification is validated on save when generating a spec
  • Rename --discovery-mode argument to --mode for nvidia-ctk cdi generate
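
The cdi.default-kind and skip-mode-detection options from the list above can be sketched as config.toml entries; section and key placement here is an assumption based on the option names:

```toml
[nvidia-container-runtime.modes.cdi]
# Vendor/class prefix used to fully qualify bare device names in CDI mode.
default-kind = "nvidia.com/gpu"

[nvidia-container-runtime-hook]
# Bypass automatic mode detection so that, for example,
# legacy and cdi modes can be used at the same time.
skip-mode-detection = true
```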

Changes from libnvidia-container v1.13.0-rc.2

  • Fix segfault on WSL2 systems. This was triggered in the v1.12.1 and v1.13.0-rc.1 releases.

Changes in the toolkit-container

  • Add --cdi-enabled flag to toolkit config
  • Install nvidia-ctk from toolkit container
  • Use installed nvidia-ctk path in NVIDIA Container Toolkit config
  • Bump CUDA base images to 12.1.0
  • Set nvidia-ctk path in the
  • Add cdi.k8s.io/* to set of allowed annotations in containerd config
  • Generate CDI specification for use in management containers
  • Install experimental runtime as nvidia-container-runtime.experimental instead of nvidia-container-runtime-experimental
  • Install and configure mode-specific runtimes for cdi and legacy modes

v1.13.0-rc.1

  • Include MIG-enabled devices as GPUs when generating CDI specification
  • Fix missing NVML symbols when running nvidia-ctk on some platforms [#49]
  • Add CDI spec generation for WSL2-based systems to nvidia-ctk cdi generate command
  • Add auto mode to nvidia-ctk cdi generate command to automatically select between WSL2-based and standard NVML-based spec generation.
  • Add mode-specific (.cdi and .legacy) NVIDIA Container Runtime binaries for use in the GPU Operator
  • Discover all gsp*.bin GSP firmware files when generating CDI specification.
  • Align .deb and .rpm release candidate package versions
  • Remove fedora35 packaging targets
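
With auto mode, a single invocation covers both system types (the output path is illustrative):

```shell
# Detect whether this is a WSL2-based or standard NVML-based system
# and generate the appropriate CDI specification:
nvidia-ctk cdi generate --mode=auto --output=/etc/cdi/nvidia.yaml
```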

Changes in the toolkit-container

  • Install nvidia-container-toolkit-operator-extensions package for mode-specific executables.
  • Allow nvidia-container-runtime.mode to be set when configuring the NVIDIA Container Toolkit

Changes from libnvidia-container v1.13.0-rc.1

  • Include all gsp*.bin firmware files if present
  • Align .deb and .rpm release candidate package versions
  • Remove fedora35 packaging targets