GPU PCI-Passthrough with Silverblue/Kinoite

The Details

I am attempting to perform GPU PCI Pass-through to a QEMU (KVM) Virtual Machine. I did this once on Fedora Workstation, but like all first attempts, my docs were sparse and I didn’t want to mess with it. Fast-forward, now I’m on Fedora Silverblue, and I want to run the VM again and do the PCI Pass-through, but I think there may be some parts I’ll need help with.

I am following this Blog from the last time I did this:

The mrzk.io blog calls for some edits to /etc/modprobe.d/vfio.conf followed by a dracut command. However, I don’t think that would “jive” with Silverblue. I also do not want to deviate too far or make too many custom modifications to my install, as Silverblue’s ease of updates and upgrades is what makes it appealing to me.

The Question

For Silverblue, how would one handle this? Has anyone else got GPU Pass-through working on Silverblue?

Here is what I have currently…

GPU PCI-Passthrough on Silverblue

References

Hardware

Primary GPU for Host

Secondary GPU for VM

Preliminary Checks

Run this grep command to check whether your CPU reports the virtualization extensions (vmx for Intel, svm for AMD).

sudo grep --color --regexp vmx --regexp svm /proc/cpuinfo

If there is no result, make sure to enable the virtualization extensions (Intel VT-x or AMD-V) in your motherboard’s firmware; for pass-through you will also need Intel VT-d or AMD-Vi (the IOMMU) enabled. Consult your hardware’s instructions on how to do that.
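
As a quick sanity check (my own addition, not from the guide I’m following), lscpu also reports the virtualization extension directly; on an AMD system it should show AMD-V, on Intel VT-x.

lscpu | grep -i virtualization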

Steps for AMD CPU

Ensure CPU has IOMMU enabled

Run the following command.

dmesg | grep -i -e DMAR -e IOMMU

IOMMU support should already be enabled for AMD CPUs. Either way, you’ll want to append the following kernel arguments.

sudo rpm-ostree kargs \
  --append-if-missing="amd_iommu=on" \
  --append-if-missing="iommu=pt" \
  --append-if-missing="rd.driver.pre=vfio_pci"

Check PCI Bus Groups

#!/usr/bin/env bash
# List each IOMMU group and the PCI devices it contains.

shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done

It should spit out stuff that looks like this…

➜  ~ 
> bash ./02-check-pci-bus-groups.sh
IOMMU Group 0:
	00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 1:
	00:01.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
[...lots of output...]

IOMMU Group 32:
	0d:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 22 [Radeon RX 6700/6700 XT/6750 XT / 6800M] [1002:73df] (rev c1)
IOMMU Group 33:
	0d:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21/23 HDMI/DP Audio Controller [1002:ab28]

IOMMU Group 34:
	0e:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
	0e:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
[...more output...]

You can also get the device IDs using lspci. In this case, I’m looking for my NVIDIA card to pass through. Note from the output above that the GTX 970 and its HDMI audio function share IOMMU Group 34, so both functions have to be passed through together.

➜  ~ 
> lspci -vnn | grep -i --regexp NVIDIA
0e:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1) (prog-if 00 [VGA controller])

0e:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)

➜  ~ 
> lspci -vnn | grep -i --regexp Radeon
0d:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 22 [Radeon RX 6700/6700 XT/6750 XT / 6800M] [1002:73df] (rev c1) (prog-if 00 [VGA controller])

I don’t run Silverblue, but you should be able to pass the equivalent of the options you would normally put into /etc/modprobe.d/vfio.conf as kernel command-line parameters.

Per “The kernel’s command-line parameters” in the Linux kernel documentation, module options are specified on the command line as <module_name>.<parameter>. The parameters a module accepts can be verified by running modinfo <module_name>, in this case modinfo vfio-pci.
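
For example (my own illustration, assuming the vfio-pci module is available for your kernel), you can list just the parameters it accepts:

modinfo -p vfio-pci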

For Silverblue, you should be able to do this with a command similar to the following:

sudo rpm-ostree kargs --append=vfio-pci.ids=10de:13c2,10de:0fbb

The comma-separated IDs are the vendor:device IDs of the PCI devices you wish to bind to vfio-pci. I used the IDs from your NVIDIA card as an example.
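
For comparison, a rough sketch of the traditional modprobe.d approach on a non-ostree system would be something like the following; the exact contents depend on the guide you’re following, and it isn’t needed on Silverblue if you go the kargs route.

# /etc/modprobe.d/vfio.conf (sketch only)
options vfio-pci ids=10de:13c2,10de:0fbb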


I don’t think I did it correctly.

➜  ~ 
> sudo rpm-ostree kargs \             
  --append-if-missing="vfio-pci.ids=10de:13c2,10de:0fbb"
➜  ~ 
> sudo systemctl reboot

This is my output.

➜  ~ 
> sudo lspci -vnn
[sudo] password for filbot: 

[...lots of output...]

0e:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1) (prog-if 00 [VGA controller])
	Subsystem: ZOTAC International (MCO) Ltd. Device [19da:1366]
	Flags: bus master, fast devsel, latency 0, IRQ 255, IOMMU group 34
	Memory at f5000000 (32-bit, non-prefetchable) [size=16M]
	Memory at c0000000 (64-bit, prefetchable) [size=256M]
	Memory at d0000000 (64-bit, prefetchable) [size=32M]
	I/O ports at e000 [size=128]
	Expansion ROM at f6000000 [disabled] [size=512K]
	Capabilities: [60] Power Management version 3
	Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
	Capabilities: [78] Express Legacy Endpoint, MSI 00
	Capabilities: [100] Virtual Channel
	Capabilities: [250] Latency Tolerance Reporting
	Capabilities: [258] L1 PM Substates
	Capabilities: [128] Power Budgeting <?>
	Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
	Capabilities: [900] Secondary PCI Express
	Kernel modules: nouveau

0e:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
	Subsystem: ZOTAC International (MCO) Ltd. Device [19da:1366]
	Flags: bus master, fast devsel, latency 0, IRQ 131, IOMMU group 34
	Memory at f6080000 (32-bit, non-prefetchable) [size=16K]
	Capabilities: [60] Power Management version 3
	Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
	Capabilities: [78] Express Endpoint, MSI 00
	Kernel driver in use: snd_hda_intel
	Kernel modules: snd_hda_intel

[...lots of output...]

From what I remember, I’m looking for…

-- Kernel driver in use: snd_hda_intel
++ Kernel driver in use: vfio-pci

But it’s not. I now need to figure out how to check whether the settings are correct.
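
A quicker way to check just the one card (my own shortcut, using the bus addresses from the earlier output) is to ask lspci for those devices along with their kernel-driver information:

lspci -nnk -s 0e:00.0
lspci -nnk -s 0e:00.1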

I opened up /etc/grub2.cfg and /etc/grub2-efi.cfg (I didn’t find anything relevant under grub.d), and I see these lines in the configs.

### BEGIN /etc/grub.d/15_ostree ###
menuentry 'Fedora Linux 37.20221229.0 (Silverblue) (ostree:0)' --class gnu-linux --class gnu --class os --unrestricted 'ostree-0-2667736b-064d-4b79-b3fe-fa2c69d06110' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_gpt
insmod ext2
search --no-floppy --fs-uuid --set=root 2667736b-064d-4b79-b3fe-fa2c69d06110
linux16 /ostree/fedora-b04ca39f14e6c55f85dbf401c71e4ce0270a5990144b4e192c800d399e419869/vmlinuz-6.0.15-300.fc37.x86_64 rhgb quiet root=UUID=d765dd04-753b-405e-a8f9-1a46cbbdeaf7 rootflags=subvol=root ostree=/ostree/boot.0/fedora/b04ca39f14e6c55f85dbf401c71e4ce0270a5990144b4e192c800d399e419869/0 rd.driver.blacklist=nouveau modprobe.blacklist=nouveau nvidia-drm.modeset=1 rw amd_iommu=on iommu=pt rd.driver.pre=vfio.pci amd_iommu=on iommu=pt rd.driver.pre=vfio.pci vfio-pci.ids=10de:13c2,10de:0fbb
initrd16 /ostree/fedora-b04ca39f14e6c55f85dbf401c71e4ce0270a5990144b4e192c800d399e419869/initramfs-6.0.15-300.fc37.x86_64.img
}
menuentry 'Fedora Linux 37.20221229.0 (Silverblue) (ostree:1)' --class gnu-linux --class gnu --class os --unrestricted 'ostree-1-2667736b-064d-4b79-b3fe-fa2c69d06110' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_gpt
insmod ext2
search --no-floppy --fs-uuid --set=root 2667736b-064d-4b79-b3fe-fa2c69d06110
linux16 /ostree/fedora-b04ca39f14e6c55f85dbf401c71e4ce0270a5990144b4e192c800d399e419869/vmlinuz-6.0.15-300.fc37.x86_64 rhgb quiet root=UUID=d765dd04-753b-405e-a8f9-1a46cbbdeaf7 rootflags=subvol=root ostree=/ostree/boot.0/fedora/b04ca39f14e6c55f85dbf401c71e4ce0270a5990144b4e192c800d399e419869/1 rd.driver.blacklist=nouveau modprobe.blacklist=nouveau nvidia-drm.modeset=1 rw amd_iommu=on iommu=pt rd.driver.pre=vfio.pci amd_iommu=on iommu=pt rd.driver.pre=vfio.pci
initrd16 /ostree/fedora-b04ca39f14e6c55f85dbf401c71e4ce0270a5990144b4e192c800d399e419869/initramfs-6.0.15-300.fc37.x86_64.img
}

So it looks like it took? I don’t know.
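
Rather than reading the GRUB configs by hand, a simpler check (my suggestion, not something done in the thread) is to look at what the currently running kernel was actually booted with:

cat /proc/cmdline

If vfio-pci.ids=... shows up there but the card still isn’t bound, the vfio-pci driver is most likely not in the initramfs yet, which is what the next step addresses.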

After doing the above rpm-ostree kargs and a reboot, we figured out that we then had to run an rpm-ostree initramfs command that looked like this:

➜  ~ 
> sudo rpm-ostree initramfs \
  --enable \
  --arg="--add-drivers" \
  --arg="vfio-pci" \
  --reboot

Then, once the machine was rebooted again, the vfio kernel modules were loaded, and lspci showed that the NVIDIA card was now using vfio-pci as its driver.
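
For anyone following along, these are the sorts of checks that statement refers to (commands of my own choosing, not quoted from the thread):

# confirm the vfio modules are loaded
lsmod | grep vfio

# confirm the GTX 970 (10de:13c2) is now bound to vfio-pci
lspci -nnk -d 10de:13c2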

I’m going to say that this question is answered. The rest is VM setup, which isn’t Silverblue-specific. I’ll post a link to a GitHub repository with my final document when I get my VM working.

The full documentation is listed here.
