I have the Nvidia drivers from RPMFusion and they are working with no issues at all (KDE Plasma / Wayland).
But things get ugly when it comes to using the GPU for machine learning with CUDA. I have not been able to get it working, no matter which approach I tried (I am using Podman containers).
So I am stuck using my server’s CPU to develop and test TensorFlow apps :-(, as I will not switch to the official Nvidia drivers or make any change that could compromise the stability of my system. Even if I did, I am pretty sure I would not get better results.
I am not sure whether it could be an SELinux-related problem, but it would be invaluable to me if anyone out there had any clue I could follow.
I got CUDA on rootless Podman working on my system recently.
The key things were:
Install nvidia-container-toolkit from the @ai-ml/nvidia-container-toolkit Copr (follow the “Installation Instructions” there, which include an initial test to show everything works; a rough sketch of the commands is below this list).
Include the option --device nvidia.com/gpu=all whenever you run a container that requires CUDA (example run below the list).
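For the first point, this is roughly what I ran, from memory; the exact repo and package names come from the Copr page, so treat this as a sketch and follow the “Installation Instructions” there if anything differs:

```
# Enable the Copr repo and install the toolkit
sudo dnf copr enable @ai-ml/nvidia-container-toolkit
sudo dnf install nvidia-container-toolkit

# Generate the CDI spec that tells Podman how to expose the GPU,
# then list the device names it defines (should include nvidia.com/gpu=all)
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
nvidia-ctk cdi list
```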
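For the second point, a quick sanity check looks something like this. The image is just an example; nvidia-smi and the driver libraries get injected into the container by the CDI spec. Some guides also add --security-opt=label=disable in case SELinux blocks the injected device files, but I did not need that here:

```
# The container should be able to list the GPU through the injected driver bits
podman run --rm --device nvidia.com/gpu=all \
    registry.fedoraproject.org/fedora:latest nvidia-smi -L
```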
I didn’t need to make any SELinux changes specific to CUDA, though I did need to make some for all Podman containers, CUDA or not. (I configured my Podman storageRoot to be different from the default, so I had to fix the SELinux labelling for the new location, roughly as sketched at the end of this post.)
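For the storage-root relabelling, what I did was roughly the following, with /data/containers/storage standing in as a hypothetical stand-in for my actual non-default location (and /var/lib/containers/storage as the default it replaces; for rootless storage the default path to equate against would be different). It tells SELinux to treat the new path like the default one and then relabels it:

```
# Add an equivalency rule so the new storage root gets the same SELinux
# labels as the default location, then apply the labels recursively
sudo semanage fcontext -a -e /var/lib/containers/storage /data/containers/storage
sudo restorecon -R -v /data/containers/storage
```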