Just adding some info for future readers who searched for the same error message:
I had the same error messages about being unable to find the akmod package and unable to find CUDA:
none of the providers can be installed
And:
All matches were filtered out by modular filtering for argument: xorg-x11-drv-nvidia-cuda
Error: Unable to find a match: xorg-x11-drv-nvidia-cuda
Then I noticed that the error messages mentioned being unable to find a dependency version >= (greater than or equal to) the required one. In other words, DNF couldn't find a new enough version of the required dependencies.
So on a whim, I did sudo dnf list "*nvidia*"
To my horror, I saw the problem:
About half of the NVIDIA packages were absolutely ancient and were being selected from the NVIDIA Fedora 35 CUDA repository. (I was using Fedora 38. But NVIDIA tells you to add older repos if you want older CUDA Toolkits.)
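For anyone repeating this check, these are the two commands I'd use; the last column of the listing tells you which repo each package comes from:

```bash
# List every NVIDIA-related package DNF can see; the last column names the
# source repo (installed packages show it with an "@" prefix):
sudo dnf list "*nvidia*"

# List the enabled repositories to find the id of the leftover CUDA repo:
dnf repolist
```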
I disabled that repository and retried the driver installation: sudo dnf install akmod-nvidia --refresh
Voila. It worked.
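In case it helps, the exact steps were roughly the following. The repo id cuda-fedora35-x86_64 is what the repo is called on my machine, so double-check yours with dnf repolist first:

```bash
# Disable the old NVIDIA CUDA repo (the id is from my system; the
# config-manager subcommand requires dnf-plugins-core):
sudo dnf config-manager --set-disabled cuda-fedora35-x86_64

# Retry the driver install with freshly downloaded repo metadata:
sudo dnf install akmod-nvidia --refresh
```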
So here is a warning: Anyone who installs older CUDA Toolkits by adding NVIDIA’s official CUDA repo to Fedora will run into this problem.
For example, I needed CUDA Toolkit 11, since Microsoft’s popular AI framework requires that version. So I installed NVIDIA’s Fedora 35 CUDA repo, since it was the only one that still shipped that old toolkit. Normally, that’s totally fine: you can easily install old toolkits from those repos, and they work just fine on Fedora 39+ and the latest driver. The issue is that NVIDIA also includes a bunch of driver packages inside their CUDA repos whose names conflict with the RPM Fusion driver packages, which is why DNF doesn’t know how to resolve the conflict.
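You can see the overlap for yourself before anything breaks; --showduplicates makes DNF print every candidate version from every enabled repo:

```bash
# Show all available versions of NVIDIA-related packages across every enabled
# repo, so you can spot the same names being offered from two places:
dnf list --showduplicates "*nvidia*"
```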
The installation instructions at RPM Fusion are indeed very outdated. They mention a command, sudo dnf module disable nvidia-driver, which is supposed to prevent conflicts between the driver repos. But that command no longer worked for me (the module with that name doesn’t exist). (Edit: Apparently it can exist on some systems, just not mine. Jeff below says it’s related to having installed old NVIDIA drivers via their CUDA repo at some point.)
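For readers on whose systems that module does exist, checking and disabling it looks like this (harmless to run either way; if the module is absent you just get the same "no such module" error I did):

```bash
# Check whether a module stream named nvidia-driver is known to DNF:
dnf module list nvidia-driver

# If it exists, disable it so it stops shadowing the RPM Fusion packages
# (this is what the RPM Fusion instructions intend):
sudo dnf module disable nvidia-driver
```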
It’s clearly not a good idea to mix old NVIDIA CUDA repos with Fedora’s package manager, since DNF lacks support for “package pinning” (i.e. packages always updating via the same source they were installed from, and dependencies always preferring same-repo if possible, to prevent these kinds of conflicts).
It’s technically possible to edit the /etc/yum.repos.d/cuda-fedoraXX.repo file and add a line like exclude=package1 package2 someotherpackagewithanasteriskwildcard* to filter out all the old driver packages (the things that RPM Fusion provides instead). But that’s just going to be a hassle, and it adds annoying long-term maintenance in case package names change in the future.
But hey, if someone wants to do that, the process is as follows. First open the repo (such as the one I’m using for CUDA Toolkit 11):
https://developer.download.nvidia.com/compute/cuda/repos/fedora35/x86_64/
Then look at the package names. In this example, it is easy to see that all the ancient driver packages are prefixed with kmod-* and nvidia-*. So it would just be a matter of adding exclude=kmod-* nvidia-* to the repo’s config file.
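A sketch of how that could be applied, assuming the repo file contains a single [cuda-...] section (which is how NVIDIA's repo files usually ship; the filename below is also an assumption, so verify what yours is called):

```bash
# Append the exclude line to the end of the repo file; with a single-section
# .repo file, the appended line still lands inside that section:
echo 'exclude=kmod-* nvidia-*' | sudo tee -a /etc/yum.repos.d/cuda-fedora35.repo
```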
I can personally vouch that when I run sudo dnf list --installed | grep "@cuda" to look at what had been installed by CUDA 11, all of the necessary packages are prefixed with cuda-*, gds-*, lib*, or nsight-*. So the old driver packages (nvidia-* and kmod-*) that NVIDIA put in their repo are totally useless for CUDA Toolkit users. Therefore, the exclude line I mentioned in the previous paragraph should work fine, but I am not willing to waste time on testing it.
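If someone does try it, verifying it is cheap: re-run the listing command, and the driver packages should now only be offered by the rpmfusion repos.

```bash
# After adding the exclude line, the kmod-*/nvidia-* entries from the cuda
# repo should disappear from this listing:
sudo dnf list "*nvidia*" --refresh
```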
For my own sanity, I have instead decided to uninstall all custom, older CUDA Toolkits and only install “Fedora-approved” NVIDIA stuff directly from RPM Fusion. Furthermore, since I actually NEED older CUDA versions, I am going to research how to make podman containers which run older CUDA Toolkits. I’ve read that it is doable. I am never letting NVIDIA’s ancient, official repo touch my main OS again!
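For anyone doing the same cleanup, this is roughly the shape of it. The globs are an assumption based on the cuda-*/gds-*/nsight-* prefixes above (I deliberately left lib* out because it would match system libraries), so review DNF's transaction summary carefully before confirming:

```bash
# See what the CUDA repo installed (the "@cuda..." tag marks the source repo):
dnf list --installed | grep "@cuda"

# Remove the toolkit packages; check the transaction summary before saying yes:
sudo dnf remove "cuda-*" "gds-*" "nsight-*"
```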
(Update: I’ve researched a bit more. It’s very easy to make docker/podman containers that use older CUDA Toolkit versions. NVIDIA provides different base-images for each CUDA version. You basically just have to write a container “compose” file which refers to the correct base-image that you want to run. So yeah, just search for articles about that online and have fun.)
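As a pointer for anyone heading the same way, the basic shape is just pulling one of NVIDIA's versioned base images and running it with GPU access. The image tag below is one example from their public listings, and the --device flag assumes you have set up nvidia-container-toolkit's CDI support on the host:

```bash
# Pull a CUDA 11 base image (the tag is an example; browse NVIDIA's registry
# listings for the exact version you need):
podman pull docker.io/nvidia/cuda:11.8.0-devel-ubuntu22.04

# Run it with GPU access via CDI (requires nvidia-container-toolkit and having
# run `nvidia-ctk cdi generate` on the host beforehand):
podman run --rm --device nvidia.com/gpu=all \
    docker.io/nvidia/cuda:11.8.0-devel-ubuntu22.04 nvcc --version
```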
So yeah… if you are getting these DNF errors, it means you have multiple repositories that provide the same conflicting package, and your DNF is selecting the older version of that package.