Toolbox NVIDIA card shows for OpenGL but not for Vulkan

NVIDIA card problems (quelle surprise) with rpmfusion packages.

Inside the toolbox: nvidia-smi shows the correct card, glxinfo shows the correct card, but vulkaninfo shows only llvmpipe.

Outside the toolbox: vulkaninfo shows the correct graphics card.

Any advice on how to fix/check this?

Thanks.

⬢ [foo@toolbx ~]$ nvidia-smi
Fri Nov 15 00:22:47 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.57.01              Driver Version: 565.57.01      CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3070        Off |   00000000:01:00.0  On |                  N/A |
|  0%   41C    P8             20W /  240W |    2508MiB /   8192MiB |      5%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      2448      G   /usr/bin/gnome-shell                          596MiB |
|    0   N/A  N/A      3471      G   /usr/lib64/firefox/firefox                   1231MiB |
|    0   N/A  N/A      4047    C+G   /usr/bin/ptyxis                               195MiB |
|    0   N/A  N/A      4170      G   /usr/bin/Xwayland                             405MiB |
+-----------------------------------------------------------------------------------------+
⬢ [foo@toolbx ~]$ glxinfo -B
name of display: :0
display: :0  screen: 0
direct rendering: Yes
Memory info (GL_NVX_gpu_memory_info):
    Dedicated video memory: 8192 MB
    Total available memory: 8192 MB
    Currently available dedicated video memory: 5344 MB
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: NVIDIA GeForce RTX 3070/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 565.57.01
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile

OpenGL version string: 4.6.0 NVIDIA 565.57.01
OpenGL shading language version string: 4.60 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)

OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 565.57.01
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20

⬢ [foo@toolbx ~]$ vulkaninfo --summary
==========
VULKANINFO
==========

Vulkan Instance Version: 1.3.296


Instance Extensions: count = 24
-------------------------------
VK_EXT_acquire_drm_display             : extension revision 1
VK_EXT_acquire_xlib_display            : extension revision 1
VK_EXT_debug_report                    : extension revision 10
VK_EXT_debug_utils                     : extension revision 2
VK_EXT_direct_mode_display             : extension revision 1
VK_EXT_display_surface_counter         : extension revision 1
VK_EXT_headless_surface                : extension revision 1
VK_EXT_surface_maintenance1            : extension revision 1
VK_EXT_swapchain_colorspace            : extension revision 4
VK_KHR_device_group_creation           : extension revision 1
VK_KHR_display                         : extension revision 23
VK_KHR_external_fence_capabilities     : extension revision 1
VK_KHR_external_memory_capabilities    : extension revision 1
VK_KHR_external_semaphore_capabilities : extension revision 1
VK_KHR_get_display_properties2         : extension revision 1
VK_KHR_get_physical_device_properties2 : extension revision 2
VK_KHR_get_surface_capabilities2       : extension revision 1
VK_KHR_portability_enumeration         : extension revision 1
VK_KHR_surface                         : extension revision 25
VK_KHR_surface_protected_capabilities  : extension revision 1
VK_KHR_wayland_surface                 : extension revision 6
VK_KHR_xcb_surface                     : extension revision 6
VK_KHR_xlib_surface                    : extension revision 6
VK_LUNARG_direct_driver_loading        : extension revision 1

Instance Layers: count = 2
--------------------------
VK_LAYER_MESA_device_select Linux device selection layer 1.3.211  version 1
VK_LAYER_NV_optimus         NVIDIA Optimus layer         1.3.289  version 1

Devices:
========
GPU0:
	apiVersion         = 1.3.289
	driverVersion      = 0.0.1
	vendorID           = 0x10005
	deviceID           = 0x0000
	deviceType         = PHYSICAL_DEVICE_TYPE_CPU
	deviceName         = llvmpipe (LLVM 19.1.0, 256 bits)
	driverID           = DRIVER_ID_MESA_LLVMPIPE
	driverName         = llvmpipe
	driverInfo         = Mesa 24.2.6 (LLVM 19.1.0)
	conformanceVersion = 1.3.1.1
	deviceUUID         = 6d657361-3234-2e32-2e36-000000000000
	driverUUID         = 6c6c766d-7069-7065-5555-494400000000

By contrast, vulkaninfo outside the toolbox is fine:

$ vulkaninfo --summary
==========
VULKANINFO
==========

Vulkan Instance Version: 1.3.296


Instance Extensions: count = 24
-------------------------------
VK_EXT_acquire_drm_display             : extension revision 1
VK_EXT_acquire_xlib_display            : extension revision 1
VK_EXT_debug_report                    : extension revision 10
VK_EXT_debug_utils                     : extension revision 2
VK_EXT_direct_mode_display             : extension revision 1
VK_EXT_display_surface_counter         : extension revision 1
VK_EXT_headless_surface                : extension revision 1
VK_EXT_surface_maintenance1            : extension revision 1
VK_EXT_swapchain_colorspace            : extension revision 4
VK_KHR_device_group_creation           : extension revision 1
VK_KHR_display                         : extension revision 23
VK_KHR_external_fence_capabilities     : extension revision 1
VK_KHR_external_memory_capabilities    : extension revision 1
VK_KHR_external_semaphore_capabilities : extension revision 1
VK_KHR_get_display_properties2         : extension revision 1
VK_KHR_get_physical_device_properties2 : extension revision 2
VK_KHR_get_surface_capabilities2       : extension revision 1
VK_KHR_portability_enumeration         : extension revision 1
VK_KHR_surface                         : extension revision 25
VK_KHR_surface_protected_capabilities  : extension revision 1
VK_KHR_wayland_surface                 : extension revision 6
VK_KHR_xcb_surface                     : extension revision 6
VK_KHR_xlib_surface                    : extension revision 6
VK_LUNARG_direct_driver_loading        : extension revision 1

Instance Layers: count = 2
--------------------------
VK_LAYER_MESA_device_select Linux device selection layer 1.3.211  version 1
VK_LAYER_NV_optimus         NVIDIA Optimus layer         1.3.289  version 1

Devices:
========
GPU0:
	apiVersion         = 1.3.289
	driverVersion      = 565.57.1.0
	vendorID           = 0x10de
	deviceID           = 0x2488
	deviceType         = PHYSICAL_DEVICE_TYPE_DISCRETE_GPU
	deviceName         = NVIDIA GeForce RTX 3070
	driverID           = DRIVER_ID_NVIDIA_PROPRIETARY
	driverName         = NVIDIA
	driverInfo         = 565.57.01
	conformanceVersion = 1.3.8.2
	deviceUUID         = 3c4b693b-0187-c02a-92f8-3fb533a66f2c
	driverUUID         = a40eb34f-a796-5990-89ac-95d78eb83699
GPU1:
	apiVersion         = 1.3.289
	driverVersion      = 0.0.1
	vendorID           = 0x10005
	deviceID           = 0x0000
	deviceType         = PHYSICAL_DEVICE_TYPE_CPU
	deviceName         = llvmpipe (LLVM 19.1.0, 256 bits)
	driverID           = DRIVER_ID_MESA_LLVMPIPE
	driverName         = llvmpipe
	driverInfo         = Mesa 24.2.6 (LLVM 19.1.0)
	conformanceVersion = 1.3.1.1
	deviceUUID         = 6d657361-3234-2e32-2e36-000000000000
	driverUUID         = 6c6c766d-7069-7065-5555-494400000000

Anybody have any ideas?

Maybe there are missing packages in the toolbox?

The fact that vulkaninfo is running means that at least the basic Vulkan packages are installed.

Is there an NVIDIA-specific Vulkan package that I might have missed?

Works great for me just by creating a toolbox and then installing vulkan-tools. Can you try something real quick?

Check for the NVIDIA Vulkan files (nvidia_layers.json, nvidia_icd_vksc.json):

$ find /etc/vulkansc/ | grep nvidia
$ find /etc/vulkan/ | grep nvidia
$ find /usr/share/vulkansc/ | grep nvidia
$ find /usr/share/vulkan/ | grep nvidia
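
Since the card works on the host, it may also help to compare what is present on the host versus inside the container. A rough sketch, assuming the default toolbox container:

# on the host
$ find /etc/vulkan /usr/share/vulkan -name '*nvidia*' 2>/dev/null
# same search inside the toolbox (assumes the default container; add -c <name> otherwise)
$ toolbox run find /etc/vulkan /usr/share/vulkan -name '*nvidia*' 2>/dev/null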

If they don’t exist, then get the drivers:

$ wget https://us.download.nvidia.com/XFree86/Linux-x86_64/565.57.01/NVIDIA-Linux-x86_64-565.57.01.run
$ chmod +x NVIDIA-Linux-x86_64-565.57.01.run
$ sudo ./NVIDIA-Linux-x86_64-565.57.01.run -s \
    --no-kernel-modules \
    --no-x-check \
    --no-check-for-alternate-installs \
    --skip-module-load \
    --skip-depmod \
    --no-rebuild-initramfs \
    --no-questions \
    --no-systemd \
    --no-kernel-module-source \
    --no-dkms
In theory, it should complain about every single file existing already. Then try vulkaninfo again.

Just a wild shot…

Hmm, I installed some more packages and now it's working. I noticed that nvidia-container-toolkit-base had just upgraded itself.

For future reference in case someone else bumps into this:

⬢ [foo@toolbx ~]$ dnf list --installed | grep -i nvidia
libnvidia-container-devel.x86_64        1.17.2-1                    nvidia-container-toolkit
libnvidia-container-static.x86_64       1.17.2-1                    nvidia-container-toolkit
libnvidia-container-tools.x86_64        1.17.2-1                    nvidia-container-toolkit
libnvidia-container1.x86_64             1.17.2-1                    nvidia-container-toolkit
libva-nvidia-driver.x86_64              0.0.12-3.fc41               fedora
nvidia-container-runtime.noarch         3.14.0-1                    nvidia-container-toolkit
nvidia-container-toolkit.x86_64         1.17.2-1                    nvidia-container-toolkit
nvidia-container-toolkit-base.x86_64    1.17.2-1                    nvidia-container-toolkit
⬢ [foo@toolbx ~]$ dnf list --installed | grep -i vulkan
mesa-vulkan-drivers.x86_64              24.2.6-1.fc41               6b61c6041cbd426da127a9d5bbe03dfb
vulkan-headers.noarch                   1.3.296.0-1.fc41            fedora
vulkan-loader.x86_64                    1.3.296.0-1.fc41            5b417f32c8da4e3484b0ed1934d8cc54
vulkan-loader-devel.x86_64              1.3.296.0-1.fc41            fedora
vulkan-tools.x86_64                     1.3.296.0-2.fc41            updates
vulkan-utility-libraries-devel.x86_64   1.3.296.0-1.fc41            fedora

Yeah, it’s back and even more annoying.

I have one container that is nominally derived from registry.fedoraproject.org/fedora-toolbox:41 that works fine and one that doesn’t. Even more annoyingly, if I use the one that works as a base for new containers, those also work. If, however, I base things off fedora-toolbox:41 directly and then install the container toolkit, vulkaninfo --summary fails.

Irritatingly, CUDA works fine in both containers, as does “glxinfo -B”. I’m really mystified.

I have literally reduced the installed packages so that there is no difference at all between the two containers when I look with “dnf list --installed”.

I’m really at a loss. There has to be a difference somewhere. Any suggestions for how I run it down?

Can you run “ls -l /usr/lib64/ | grep nvidia” on the host to see what the drivers point to? I’m no expert on toolbx, but it could be using the CDI spec. You can generate it with “sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml”.
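
If the spec generates cleanly, a quick sanity check is to list the devices it defines; you should see entries along the lines of nvidia.com/gpu=0 and nvidia.com/gpu=all (exact names depend on your setup):

# lists the devices defined by CDI specs under /etc/cdi and /var/run/cdi
$ nvidia-ctk cdi list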

On my system I install the NVIDIA Container Toolkit following “Installing the NVIDIA Container Toolkit” in the NVIDIA Container Toolkit 1.17.0 documentation.

Your container really doesn’t need anything installed on it. You can also refer to my previous post on how to verify the driver installed on the container.

For reference,

lrwxrwxrwx. 1 root   root          26 Nov 25 16:56 libcudadebugger.so.1 -> libcudadebugger.so.550.135
-rwxr-xr-x. 3 nobody nobody  10524136 Dec 31  1969 libcudadebugger.so.550.135
lrwxrwxrwx. 1 root   root          18 Nov 25 16:56 libcuda.so.1 -> libcuda.so.550.135
-rwxr-xr-x. 3 nobody nobody  28712096 Dec 31  1969 libcuda.so.550.135
lrwxrwxrwx. 1 root   root          24 Nov 25 16:56 libEGL_nvidia.so.0 -> libEGL_nvidia.so.550.135
-rwxr-xr-x. 3 nobody nobody   1345696 Dec 31  1969 libEGL_nvidia.so.550.135
lrwxrwxrwx. 1 root   root          30 Nov 25 16:56 libGLESv1_CM_nvidia.so.1 -> libGLESv1_CM_nvidia.so.550.135
-rwxr-xr-x. 3 nobody nobody     68000 Dec 31  1969 libGLESv1_CM_nvidia.so.550.135
lrwxrwxrwx. 1 root   root          27 Nov 25 16:56 libGLESv2_nvidia.so.2 -> libGLESv2_nvidia.so.550.135
-rwxr-xr-x. 3 nobody nobody    117144 Dec 31  1969 libGLESv2_nvidia.so.550.135
lrwxrwxrwx. 1 root   root          24 Nov 25 16:56 libGLX_nvidia.so.0 -> libGLX_nvidia.so.550.135
-rwxr-xr-x. 3 nobody nobody   1203776 Dec 31  1969 libGLX_nvidia.so.550.135
lrwxrwxrwx. 1 root   root          21 Nov 25 16:56 libnvcuvid.so.1 -> libnvcuvid.so.550.135
-rwxr-xr-x. 3 nobody nobody  10566992 Dec 31  1969 libnvcuvid.so.550.135
lrwxrwxrwx. 1 root   root          30 Nov 25 16:56 libnvidia-allocator.so.1 -> libnvidia-allocator.so.550.135
-rwxr-xr-x. 3 nobody nobody    168808 Dec 31  1969 libnvidia-allocator.so.550.135
lrwxrwxrwx. 1 root   root          24 Nov 25 16:56 libnvidia-cfg.so.1 -> libnvidia-cfg.so.550.135
-rwxr-xr-x. 3 nobody nobody    398968 Dec 31  1969 libnvidia-cfg.so.550.135
-rwxr-xr-x. 3 nobody nobody  30352200 Dec 31  1969 libnvidia-eglcore.so.550.135
lrwxrwxrwx. 1 root   root          27 Nov 25 16:56 libnvidia-encode.so.1 -> libnvidia-encode.so.550.135
-rwxr-xr-x. 3 nobody nobody    277152 Dec 31  1969 libnvidia-encode.so.550.135
lrwxrwxrwx. 1 root   root          24 Nov 25 16:56 libnvidia-fbc.so.1 -> libnvidia-fbc.so.550.135
-rwxr-xr-x. 3 nobody nobody    137824 Dec 31  1969 libnvidia-fbc.so.550.135
-rwxr-xr-x. 3 nobody nobody  32464992 Dec 31  1969 libnvidia-glcore.so.550.135
-rwxr-xr-x. 3 nobody nobody    582808 Dec 31  1969 libnvidia-glsi.so.550.135
-rwxr-xr-x. 3 nobody nobody   9062480 Dec 31  1969 libnvidia-glvkspirv.so.550.135
-rwxr-xr-x. 3 nobody nobody  43659040 Dec 31  1969 libnvidia-gpucomp.so.550.135
-rwxr-xr-x. 3 nobody nobody   1379720 Dec 31  1969 libnvidia-gtk2.so.550.135
-rwxr-xr-x. 3 nobody nobody   1388424 Dec 31  1969 libnvidia-gtk3.so.550.135
lrwxrwxrwx. 1 root   root          23 Nov 25 16:56 libnvidia-ml.so.1 -> libnvidia-ml.so.550.135
-rwxr-xr-x. 3 nobody nobody   2082456 Dec 31  1969 libnvidia-ml.so.550.135
lrwxrwxrwx. 1 root   root          24 Nov 25 16:56 libnvidia-ngx.so.1 -> libnvidia-ngx.so.550.135
-rwxr-xr-x. 3 nobody nobody   4562136 Dec 31  1969 libnvidia-ngx.so.550.135
lrwxrwxrwx. 1 root   root          25 Nov 25 16:56 libnvidia-nvvm.so.4 -> libnvidia-nvvm.so.550.135
-rwxr-xr-x. 3 nobody nobody  86842616 Dec 31  1969 libnvidia-nvvm.so.550.135
lrwxrwxrwx. 1 root   root          27 Nov 25 16:56 libnvidia-opencl.so.1 -> libnvidia-opencl.so.550.135
-rwxr-xr-x. 3 nobody nobody  23613128 Dec 31  1969 libnvidia-opencl.so.550.135
lrwxrwxrwx. 1 root   root          32 Nov 25 16:56 libnvidia-opticalflow.so.1 -> libnvidia-opticalflow.so.550.135
-rwxr-xr-x. 3 nobody nobody     67704 Dec 31  1969 libnvidia-opticalflow.so.550.135
-rwxr-xr-x. 3 nobody nobody     10176 Dec 31  1969 libnvidia-pkcs11-openssl3.so.550.135
-rwxr-xr-x. 3 nobody nobody     10168 Dec 31  1969 libnvidia-pkcs11.so.550.135
lrwxrwxrwx. 1 root   root          35 Nov 25 16:56 libnvidia-ptxjitcompiler.so.1 -> libnvidia-ptxjitcompiler.so.550.135
-rwxr-xr-x. 3 nobody nobody  28674464 Dec 31  1969 libnvidia-ptxjitcompiler.so.550.135
-rwxr-xr-x. 3 nobody nobody  76336528 Dec 31  1969 libnvidia-rtcore.so.550.135
-rwxr-xr-x. 3 nobody nobody     18632 Dec 31  1969 libnvidia-tls.so.550.135
-rwxr-xr-x. 3 nobody nobody     10088 Dec 31  1969 libnvidia-wayland-client.so.550.135
lrwxrwxrwx. 1 root   root          21 Nov 25 16:56 libnvoptix.so.1 -> libnvoptix.so.550.135
-rwxr-xr-x. 3 nobody nobody  59927784 Dec 31  1969 libnvoptix.so.550.135

I figured it out: it’s a missing nvidia_icd.json file.

If I copy an nvidia_icd.json file from the working container into /etc/vulkan/icd.d on the non-working container, the card is immediately recognized.
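
For anyone else hitting this, the copy itself can be done from the host with podman cp; the container names here are placeholders for your own working and broken toolboxes:

# 'working-toolbox' and 'broken-toolbox' are example container names
$ podman cp working-toolbox:/etc/vulkan/icd.d/nvidia_icd.json /tmp/nvidia_icd.json
$ podman cp /tmp/nvidia_icd.json broken-toolbox:/etc/vulkan/icd.d/nvidia_icd.json

(Create /etc/vulkan/icd.d in the target container first if it doesn’t exist.)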

I filed a bug on Github (nvidia-container-toolkit fails to create /etc/vulkan/icd.d/nvidia_icd.json · Issue #811 · NVIDIA/nvidia-container-toolkit · GitHub).

The toolkit has to be installed on the host, following the instructions for Podman, and the files then get injected into the container. The setup needs to be re-run each time you install a new driver version.

I followed the directions from here:
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

For Podman, NVIDIA recommends using CDI for accessing NVIDIA devices in containers.
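
For reference, the CDI part of those instructions boils down to generating the spec on the host and then requesting the device when starting a container, roughly:

# on the host; re-run after driver updates
$ sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
# smoke test from the docs; the image here is just an example, CDI injects nvidia-smi
$ podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable \
      registry.fedoraproject.org/fedora-toolbox:41 nvidia-smi -L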

You can examine my installation transcript from the bug report and point out the step that I missed. I would be grateful.

The CDI specification was correct. The docs also note:

Podman configuration

podman does not require any specific configuration to enable CDI support and processes specified --device flags directly.

Do any of these procedures generate an nvidia_icd.json? It’s certainly possible I skipped an essential step somewhere.

nvidia-smi found the card, as did glxinfo -B. The container and drivers were fine; even CUDA was fine. It was only vulkaninfo that failed, because of the missing nvidia_icd.json.
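
One way to confirm it is purely a loader-discovery problem (and not the driver) is to point the Vulkan loader at an ICD file explicitly; the path below is a placeholder for wherever you have a copy of the file:

# VK_DRIVER_FILES is the current loader override; older loaders use VK_ICD_FILENAMES
$ VK_DRIVER_FILES=/path/to/nvidia_icd.json vulkaninfo --summary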

The JSON file is provided by the NVIDIA drivers you installed on the host. The container toolkit only layers it into containers created afterwards, to make things easier.

This needs to be run each time you change the NVIDIA driver.

$ sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

Look at the “/etc/cdi/nvidia.yaml” file generated by that command yourself, and it will make sense.

 $ cat /etc/cdi/nvidia.yaml | grep json
  - containerPath: /etc/vulkan/icd.d/nvidia_icd.json
    hostPath: /etc/vulkan/icd.d/nvidia_icd.json
  - containerPath: /etc/vulkan/implicit_layer.d/nvidia_layers.json
    hostPath: /etc/vulkan/implicit_layer.d/nvidia_layers.json
  - containerPath: /usr/share/egl/egl_external_platform.d/10_nvidia_wayland.json
    hostPath: /usr/share/egl/egl_external_platform.d/10_nvidia_wayland.json
  - containerPath: /usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json
    hostPath: /usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json
  - containerPath: /usr/share/glvnd/egl_vendor.d/10_nvidia.json
    hostPath: /usr/share/glvnd/egl_vendor.d/10_nvidia.json

This is just one of the ways you can run NVIDIA drivers in a container. This option is great because you don’t end up duplicating files in each container. You can also download and install the drivers manually inside your container, or mount the files yourself with an elaborate command line. Distrobox finds and mounts the files itself when you use the --nvidia argument, achieving the same result without the NVIDIA Container Toolkit.
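
With Distrobox, for example, that is just (a sketch; the container name is arbitrary):

# 'nvbox' is just an example name
$ distrobox create --name nvbox --nvidia
$ distrobox enter nvbox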

Again, you really don’t need to install anything in your container to make this work.

Edit:

That bug you linked in your own bug report makes sense, actually. I don’t use RPM Fusion, but I do know that they rename the JSON files to end in .x86_64.json. The correct names are:

nvidia_icd.json
nvidia_layers.json

Not sure why they do that. But good catch. @leigh123linux
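
On the host you can check what the RPM Fusion driver packages actually ship; the package names here are the usual RPM Fusion ones, adjust to match your install:

# harmless if nothing matches; shows the .x86_64.json naming if present
$ rpm -ql xorg-x11-drv-nvidia xorg-x11-drv-nvidia-libs | grep -iE 'icd|layers'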

Hmm, the issue seems to be that nvidia-ctk cdi generate command. I get:

WARN[0000] Could not locate vulkan/icd.d/nvidia_icd.json: pattern vulkan/icd.d/nvidia_icd.json not found
pattern vulkan/icd.d/nvidia_icd.json not found 
WARN[0000] Could not locate vulkan/icd.d/nvidia_layers.json: pattern vulkan/icd.d/nvidia_layers.json not found
pattern vulkan/icd.d/nvidia_layers.json not found 
foo@fedora:~$ rpm -qa | grep -i nvidia | sort
akmod-nvidia-565.57.01-1.fc41.x86_64
libnvidia-container1-1.17.2-1.x86_64
libnvidia-container1-debuginfo-1.17.2-1.x86_64
libnvidia-container-devel-1.17.2-1.x86_64
libnvidia-container-tools-1.17.2-1.x86_64
libva-nvidia-driver-0.0.13^20241108git259b7b7-2.fc41.x86_64
nvidia-container-runtime-3.14.0-1.noarch
nvidia-container-toolkit-1.17.2-1.x86_64
nvidia-container-toolkit-base-1.17.2-1.x86_64
nvidia-gpu-firmware-20241110-1.fc41.noarch
nvidia-modprobe-565.57.01-1.fc41.x86_64
nvidia-persistenced-565.57.01-1.fc41.x86_64
nvidia-settings-565.57.01-1.fc41.x86_64
nvidia-xconfig-565.57.01-1.fc41.x86_64
xorg-x11-drv-nvidia-565.57.01-3.fc41.x86_64
xorg-x11-drv-nvidia-cuda-565.57.01-3.fc41.x86_64
xorg-x11-drv-nvidia-cuda-libs-565.57.01-3.fc41.x86_64
xorg-x11-drv-nvidia-devel-565.57.01-3.fc41.x86_64
xorg-x11-drv-nvidia-kmodsrc-565.57.01-3.fc41.x86_64
xorg-x11-drv-nvidia-libs-565.57.01-3.fc41.x86_64
xorg-x11-drv-nvidia-power-565.57.01-3.fc41.x86_64
xorg-x11-drv-nvidia-xorg-libs-565.57.01-3.fc41.x86_64
foo@fedora:~$ rpm -qa | grep -i vulkan | sort
mesa-vulkan-drivers-24.2.7-1.fc41.x86_64
vulkan-loader-1.3.296.0-1.fc41.x86_64
vulkan-tools-1.3.296.0-2.fc41.x86_64
foo@fedora:~$ rpm-ostree status -v
State: idle
AutomaticUpdates: disabled
Deployments:
● fedora:fedora/41/x86_64/silverblue (index: 0)
                  Version: 41.20241125.0 (2024-11-25T00:38:31Z)
               BaseCommit: 5e5f81ac9327ab9192f2317c406a4e7014679bac6c7d5de89a32e741a1092725
                           ├─ repo-0 (2024-10-24T13:55:59Z)
                           ├─ repo-1 (2024-11-25T00:16:55Z)
                           └─ repo-2 (2024-11-25T00:21:12Z)
                   Commit: 8b20eb4ced1ef0882cf204f747a024ac29eeac5e6b308223e3eb932169e7ee94
                           ├─ copr:copr.fedorainfracloud.org:phracek:PyCharm (2024-08-12T11:59:47Z)
                           ├─ fedora (2024-10-25T08:41:19Z)
                           ├─ fedora-cisco-openh264 (2024-03-11T19:22:31Z)
                           ├─ google-chrome (2024-11-24T19:58:38Z)
                           ├─ nvidia-container-toolkit (2024-11-15T23:44:44Z)
                           ├─ rpmfusion-free (2024-10-27T07:49:25Z)
                           ├─ rpmfusion-free-updates (2024-11-23T12:56:46Z)
                           ├─ rpmfusion-nonfree (2024-10-27T07:58:23Z)
                           ├─ rpmfusion-nonfree-nvidia-driver (2024-11-23T13:28:40Z)
                           ├─ rpmfusion-nonfree-steam (2024-11-23T13:28:51Z)
                           ├─ rpmfusion-nonfree-updates (2024-11-23T13:18:45Z)
                           ├─ updates (2024-11-25T01:51:23Z)
                           └─ updates-archive (2024-11-25T02:38:28Z)
                   Staged: no
                StateRoot: fedora
             GPGSignature: 1 signature
                           Signature made Sun 24 Nov 2024 06:39:40 PM CST using RSA key ID D0622462E99D6AD1
                           Good signature from "Fedora <fedora-41-primary@fedoraproject.org>"
          LayeredPackages: akmod-nvidia libnvidia-container-devel libnvidia-container1-debuginfo libva-nvidia-driver nvidia-container-runtime nvidia-container-toolkit
                           nvidia-settings nvidia-xconfig vulkan-tools xorg-x11-drv-nvidia xorg-x11-drv-nvidia-cuda xorg-x11-drv-nvidia-devel xorg-x11-drv-nvidia-libs
                           xorg-x11-drv-nvidia-xorg-libs
            LocalPackages: rpmfusion-free-release-41-1.noarch rpmfusion-nonfree-release-41-1.noarch