NVIDIA PRIME setup for Fedora 32 (integrated Intel GPU + NVIDIA GPU, for Optimus-based laptops)

Hi all,

I was trying to install NVIDIA PRIME on Fedora 32 and I faced many issues, as I had to follow steps from different websites and forums, and it was very tricky (but trust me, it is simple, lol).

I am just going to put down the steps to set up NVIDIA PRIME (or whatever the hell people call it) on Fedora 32 KDE; I guess it will work with any version of Fedora.

1.) I followed this guide up to step 7 to set up the NVIDIA and Intel drivers.
https://docs.fedoraproject.org/en-US/quick-docs/how-to-set-nvidia-as-primary-gpu-on-optimus-based-laptops/

(Newer driver packages automatically blacklist the nouveau driver from GRUB, so I think we do not need to worry about it; but if it still bothers you, you can always follow step 2 of this page to block those drivers:
[Block nouveau](https://www.reddit.com/r/Fedora/comments/bw4b0p/how_to_fedora_nvidia_prime/)
BTW, I have kept nvidia-drm.modeset=1 in GRUB; add it if it is not there in your GRUB config.)
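One way to add these kernel arguments without editing the GRUB config by hand (a sketch using grubby, which updates all installed kernels at once):

sudo grubby --update-kernel=ALL --args="nvidia-drm.modeset=1"
# and, only if you still need to block nouveau explicitly:
sudo grubby --update-kernel=ALL --args="rd.driver.blacklist=nouveau modprobe.blacklist=nouveau"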

2.) I copied the nvidia.conf file as follows. (Do not add PrimaryGPU "yes" to nvidia.conf, and do not edit it otherwise.)

Execute the following command to copy the display render details for X11:
sudo cp -p /usr/share/X11/xorg.conf.d/nvidia.conf /etc/X11/xorg.conf.d/nvidia.conf

Mine looks something like this,

# This file is provided by xorg-x11-drv-nvidia
# Do not edit

Section "OutputClass"
        Identifier "nvidia"
        MatchDriver "nvidia-drm"
        Driver "nvidia"
        Option "AllowEmptyInitialConfiguration"
        Option "SLI" "Auto"
        Option "BaseMosaic" "on"
EndSection

Section "ServerLayout"
        Identifier "layout"
        Option "AllowNVIDIAGPUScreens"
EndSection

3.) You can check the OpenGL renderer with this command:
glxinfo | egrep "OpenGL vendor|OpenGL renderer"

It should show something like this:
OpenGL vendor string: Intel
OpenGL renderer string: Mesa Intel(R) UHD Graphics 630 (CFL GT2)

4.) We will set up NVIDIA's dynamic power management settings:

sudo -s
dnf update
cat > /etc/modprobe.d/nvidia.conf <<EOF
# Enable DynamicPowerManagement
# Chapter 22. PCI-Express Runtime D3 (RTD3) Power Management
options nvidia NVreg_DynamicPowerManagement=0x02
EOF

You can read more details about this on this page:
https://rpmfusion.org/Howto/Optimus
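After a reboot, you can check whether runtime power management actually engages when the dGPU is idle. A quick check (assuming your NVIDIA card sits at PCI address 0000:01:00.0, as in the xorg.conf below):

cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status

It should report suspended when nothing is using the dGPU and active otherwise.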

5.) You can check the source and sink providers using this command:
xrandr --listproviders
But for now, it won't show the expected output, as we haven't set up the xorg.conf file with the device and screen sections yet. It is important to get the right output for this; if it already gives the right output, you can simply jump to the last step. :stuck_out_tongue:

6.) Let's set up the xorg.conf file.
You can use this command to create or edit the xorg.conf file and add sections for the source and sink devices:

sudo nano /usr/share/X11/xorg.conf

You can copy this configuration into the file.
You will need to replace the PCI BusIDs as per your graphics card.
You can check the PCI numbers using:
lspci | grep -E "VGA|3D"
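For example, output like this (hypothetical device names; your addresses may differ):

00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 630
01:00.0 3D controller: NVIDIA Corporation TU106M

maps to BusID "PCI:0:2:0" and "PCI:1:0:0" respectively. lspci prints bus:device.function in hexadecimal, while the xorg BusID is written as PCI:bus:device:function in decimal (so 0a:00.0 would become "PCI:10:0:0").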

## Content of the xorg.conf file
# Please replace the PCI BusIDs as per your card

Section "ServerLayout"
    Identifier "layout"
    Screen 0 "intel"
    Inactive "nvidia"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration" "True"
#   Option "PrimaryGPU" "yes"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID       "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection

Yes, I have deliberately not made NVIDIA the primary GPU here.

You can now restart the system to load those settings.

You can read more about PRIME render offload here:
[Chapter 35. PRIME Render Offload](http://download.nvidia.com/XFree86/Linux-x86_64/435.17/README/primerenderoffload.html)

7.) It is almost done.
Now check xrandr --listproviders again;
it will show something like this:

Providers: number : 2
Provider 0: id: 0x43 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 1 associated providers: 1 name:modesetting
Provider 1: id: 0x26f cap: 0x2, Sink Output crtcs: 4 outputs: 3 associated providers: 1 name:NVIDIA-G0

It is important to get modesetting and NVIDIA-G0 in that respective order: provider 0 will be used to render the screen for normal applications using the iGPU, and we will set up other applications to render using the dGPU.

8.) Now it is time to set up environment variables for the particular applications which need to run on the dedicated graphics card. You can simply use this command in Konsole to run the software:
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia "application name"

I personally prefer to put these variables in the environment file of the particular application. As I use Houdini (3D VFX software), I put these lines in the houdini.env file:
__NV_PRIME_RENDER_OFFLOAD=1
__GLX_VENDOR_LIBRARY_NAME=nvidia
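To confirm that the variables actually switch the renderer, you can repeat the glxinfo check from step 3 with them set; it should now report NVIDIA instead of Intel:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | egrep "OpenGL vendor|OpenGL renderer"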

These are reference images without and with the variables; it is easy to configure an app this way.


That is it…!!!

I am getting around 12-16 W with my laptop in an idle state with Firefox on wireless, measured using powertop.
You might get even less consumption, as I have a 17" screen with a 144 Hz display (I am keeping it at 60 Hz anyway). On idle, it sometimes goes as low as 9-11 W.

I have also configured TLP using TLPUI for convenience; I have disabled Bluetooth at startup and keep it off.
I have kept the processor and disk setup at default settings, as that loads the vendor defaults, which I think is good. You can change them as per your convenience.
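If you want to reproduce these measurements, powertop and tlp are available in the Fedora repos (TLPUI may need a third-party repo; I am not sure it is packaged for Fedora 32):

sudo dnf install powertop tlp
sudo systemctl enable --now tlp
sudo powertop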

I hope this article was helpful.
Please drop a message if there is something I am doing wrong or that can be done another way as well. This worked for me.

I will just drop some links related to this article which are good to read for more details.

https://forums.developer.nvidia.com/t/prime-and-prime-synchronization/44423
http://download.nvidia.com/XFree86/Linux-x86_64/435.17/README/dynamicpowermanagement.html
https://rpmfusion.org/Howto/Optimus
https://wiki.archlinux.org/index.php/NVIDIA_Optimus#Using_Bumblebee


@t0xic0der any chance you’d be able to help @icreatefx with this please?

Sure thing. I would read (and re-read if needed) the article and see what I can do. :slightly_smiling_face:

You have stated it right that the newer drivers automatically add the configuration for blacklisting nouveau and rebuild the kernel modules during installation, so it is not required. Though, people would want to check again, as many Optimus-based laptops still have boot freezing issues (mostly because people did not care enough to upgrade their firmware), and those are the circumstances where explicitly blacklisting nouveau, followed by rebuilding the kernel modules, may be needed.

@FranciscoD The quick docs site looks a lot more polished now. What did I miss? :rofl:

You would want to understand that the dynamic power management settings do not play well with the PRIME configuration. A PRIME configuration is expected to give you better performance overall at the cost of reduced battery life and increased thermal output. Even RPM Fusion states it to be an optional setting, and people would want to keep it like that, unless they want hybrid support, which is beyond our case anyway. :smile:

I do not quite understand the reason behind these two commands. Once PRIME support has been enabled, everything (note: everything) would use NVIDIA out of the box. From the display server to the applications and all, they would not need any kind of command-line argument to run.

Keeping in mind the separate images that you have posted here, I am starting to have doubts about whether you really got the PRIME configuration right. Could you please tell me how you are still able to use Mesa?

Not a clue, but since you wrote the guide on Optimus, you're the only person I can point to here on the forum when people run into trouble :stuck_out_tongue:


I am not technically aware of how PRIME should behave, but I wanted something similar to Windows when it comes to graphics driver support, something like hybrid graphics.
I thought PRIME settings were something which uses the NVIDIA drivers only when a process needs them.

About those commands: I am using them to make NVIDIA the primary graphics driver for that particular application. Using NVIDIA all the time would use a lot of energy and drain the battery too soon, which no one would like (obviously).
When I do not use those commands, the application loads using the Mesa drivers.

If you believe that my PRIME configuration is not right, then you can correct it here.
But whatever I have configured using these steps, I am pretty happy with it :smiley: haha

Because it is giving me battery performance and graphical performance similar to what Windows used to give.
I am happy with it :smiley:

I think I am quite happy with this setup, as it behaves as I was hoping for.
Thank you for the support, mate :+1:


No no. I am not talking specifically about the documentation per se but about the site as a whole. Somebody indeed gave their time to correct the padding of the topbar (and a lot of other stuff which was messed up the last time I was there). The site looks a lot more polished as a result. :heart_eyes:

Understandable. A PRIME configuration (the word being derived from the word primary, as you might have come to notice by now) is a way to have the discrete GPU (NVIDIA in our case) as the primary GPU to render everything that comes its way. This is basically done in order to attain better graphical performance, though at the cost of battery life and thermal output.

The thing is, people at RPM Fusion had been (say) thoughtful enough to have the configuration that you are looking for enabled out of the box. I mean, when one installs drivers from RPM Fusion, they are configured in such a way that the integrated card is used all the time and the discrete card is used as and when the user explicitly asks for it. Workstation has a nice contextual menu option called "Run with dedicated graphics card" to allow you to do so without messing around much :smile:


A first-run script for autoconfiguration of the NVIDIA graphics drivers would be very nice.

It could be cool to have a script that checks during installation whether this is a notebook, then checks (e.g. via " lspci | grep -E "VGA|3D" > /etc/issue-graphic ") whether there are one or two graphics cards, and thirdly whether one of them is an NVIDIA card.
=> Question popup to the admin on first boot: "Do you want to configure your NVIDIA card?" Yes/No
=> Question popup to the admin on first boot: "Is this a notebook?"
// = autoconfiguration

  • yes = check for both graphics drivers
  • yes = ask to enable RPM Fusion support … "Do you want the original NVIDIA drivers and the RPM Fusion repo?"
  • yes = ask which graphics card should be the primary
  • yes = ask for Optimus support if the high-speed card is not the primary
    // = workstation configuration
  • no = ask to enable RPM Fusion support …
  • no = popup: "Do you want a workstation installation of the NVIDIA drivers?"

With that, an nvidia-installation-fedora.sh could exist to more easily detect an NVIDIA card, whether there are two graphics cards, or whether this is a workstation. It could run at the first independent boot and write a graphics-info file to /etc/ if an NVIDIA card exists. That would make autoconfiguration possible, to set up the NVIDIA card with the original drivers faster, as support for the admin or a fresh user of the Fedora distro. If anyone builds such a prompt script and adds it to our distro, it could be a nice installation aid for a Fedora admin/user.
I am too new to Fedora to know how your structures work and how everything fits together inside Fedora, and I do not want to study the Fedora wiki like at a university, where much is obsolete and out of date.
But I guess such a check-and-autoconfigure script could be really useful at first system start; it could help exclude many NVIDIA/graphics configuration issues in advance for many Fedora users. Also, seeing the KDE/Plasma issues running nouveau on UHD TVs, a script that runs before SDDM starts could massively help exclude some issues.
(I have filed an issue on bugs.kde.org about this, but they said to go to the nouveau driver home, which is really not good, IMHO.)
To head off such issues in advance, a script could really help, and Fedora could then also run on 4K screens or larger. By the way, some TVs today also run webOS; just for info, maybe webOS could also be queried over a VGA/HDMI cable to obtain information about the screen: size, colour depth and so on. I have an LG smart TV with a 43" screen, four times the size of a notebook's, which gives me 4x more workspace at the right size; so GIMP plus watching the TV is no problem, with space for another program on one workscreen and, on another, a running VirtualBox with two emulated systems at real notebook size (and I also have two more switchable workscreens).
So, just for info for everyone.
Coming back to the script: someone could create it and bind it into some nice tool.sh script :smiling_face_with_three_hearts: to make it possible to install KDE/Plasma without troubles with the NVIDIA drivers (and with fewer nouveau… tRuBlEs…).
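A minimal sketch of the detection step such a script could start with (hypothetical, not an existing Fedora tool; it would need to run as root at first boot):

#!/bin/bash
# Hypothetical first-boot detection sketch for the proposed script.
# Record the graphics hardware for later use.
lspci | grep -E "VGA|3D" > /etc/issue-graphic

# Is there an NVIDIA card at all?
if grep -qi nvidia /etc/issue-graphic; then
    # Chassis type from DMI: 9 = laptop, 10 = notebook, 14 = sub-notebook.
    chassis=$(cat /sys/class/dmi/id/chassis_type 2>/dev/null)
    case "$chassis" in
        9|10|14) echo "NVIDIA Optimus notebook detected: offer PRIME/offload setup" ;;
        *)       echo "NVIDIA card on a desktop/workstation: offer workstation setup" ;;
    esac
fi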

Best regards
Blacky

I think the option "Run with dedicated graphics card" is available in the GNOME and Cinnamon environments. I am using KDE and I have not found it.
I think using NVIDIA as the primary graphics is not required for most applications; otherwise, what is the purpose of keeping one less resource-heavy GPU and one resource-heavy GPU if we cannot utilize them properly? :smiley: :smiley: Just my personal opinion.
I wish Fedora could also integrate some way to install graphics drivers using a UI, as many other distros provide. But Fedora follows an open-source-only policy, so I don't think they will ever do that.
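As a possible KDE workaround (a sketch; whether these keys are honoured depends on your desktop version, and the Houdini launcher here is hypothetical), you can create a .desktop file that forces the offload variables and hints at the discrete GPU:

[Desktop Entry]
Type=Application
Name=Houdini (dGPU)
# Force PRIME render offload for this launcher
Exec=env __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia houdini
# Hint used by GNOME's "Launch using Dedicated Graphics Card"
PrefersNonDefaultGPU=true
# KDE-specific hint
X-KDE-RunOnDiscreteGpu=true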

I admit it's been a few years since I had to configure this, but I don't see any mention of primusrun or optirun in your examples. I used to configure Bumblebee for Optimus. Running any process by prepending optirun to it forced the process to use (and show as available) the discrete GPU. According to the updated docs the equivalent is primusrun, so are either of those available? I used to do this with the closed-source NVIDIA drivers fine. It's not a very intuitive UX, but it's kind of smart to balance power only where needed.
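For reference, the Bumblebee-style invocation looked like this (assuming the bumblebee and primus packages are installed; this is a different mechanism from the RPM Fusion PRIME offload setup above, and the two should not be mixed):

optirun glxgears
primusrun glxgears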

Good luck.