Overlaying libvirt on Silverblue / Kinoite / Sericea / Onyx and CoreOS

I use libvirt to manage the virtual machines on my systems. Until recently, I was overlaying it on my rpm-ostree systems by installing the virt-install and virt-manager packages, which pull in the libvirt daemon via dependencies.

Approximately a month ago, the zfs-fuse package was “fixed” to depend on initscripts (see: 2214965 – Do not require initscripts / use initscripts-service instead). I did not want initscripts back on my system, so I looked at trimming the dependencies.

Unfortunately, virt-manager pulls in libvirt-daemon-kvm or libvirt-daemon-qemu, both of which pull in the zfs-fuse package via dependencies.

So the first step was to move virt-manager to a dedicated toolbox, copy the desktop entry to ~/.local/share/applications, and edit it to run virt-manager in the toolbox directly:

$ toolbox create virt-manager
$ toolbox enter virt-manager
$ sudo dnf update -y && sudo dnf install -y virt-manager virt-install
$ cp /usr/share/applications/virt-manager.desktop ~/.local/share/applications/
$ grep Exec= ~/.local/share/applications/virt-manager.desktop
Exec=toolbox run --container virt-manager virt-manager
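Editing the Exec= line by hand works, but the edit can also be scripted. A minimal sketch, assuming the desktop entry was copied as in the step above (the `edit_exec` helper name is mine, not part of the original guide):

```shell
# Rewrite the Exec= line of a desktop entry so the launcher starts
# virt-manager inside the toolbox. Helper name is hypothetical.
edit_exec() {
    sed -i 's|^Exec=.*|Exec=toolbox run --container virt-manager virt-manager|' "$1"
}

entry="$HOME/.local/share/applications/virt-manager.desktop"
# Only edit the entry if it has already been copied (see the step above).
if [ -f "$entry" ]; then
    edit_exec "$entry"
fi
```

This leaves the rest of the desktop entry (icon, name, MIME types) untouched and only swaps the launch command.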

The second step was to figure out, through trial and error, the right sub-packages to install for a working libvirt setup covering the common KVM / QCOW2 / Linux-only use cases. I landed on:


Overall, this is a very poor user experience for setting up libvirt on rpm-ostree based systems, but I could not find a better option for now.

An alternative to virt-manager would be to use the Cockpit Flatpak to manage virtual machines, but this is currently blocked for me on KDE by issue#30. I have not yet checked whether all the functionality I need is implemented in the Cockpit UI, but I hope it will let me drop virt-manager in the long term.

Suggestions to improve this setup or fix it in a better way are welcome!


Thanks for the guide! I’m poking around ublue images and layering packages on top of them. restorecon does not seem to work in container builds, so swtpm gets an incorrect label:

❯ ls -Z /usr/bin/swtpm
system_u:object_r:bin_t:s0 /usr/bin/swtpm

so I can’t start VMs with TPM enabled.
Is there a way to fix or work around this?

This looks like an on-topic but unrelated issue. Can you file an issue upstream in the rpm-ostree issue tracker? Thanks!

Wow, the timing of this is uncanny! A couple of days ago I layered both libvirt-daemon and libvirt-daemon-driver-qemu-9.0.0-3.fc38.x86_64 on top of my Kinoite install. I didn’t know about the latter when installing the former, so they ended up in two successive images. After installing driver-qemu and rebooting, my entire system would freeze shortly after signing in via SDDM. I checked the system journal and couldn’t see any obvious errors. Maybe I shouldn’t use the -3 version of the qemu driver?

Any guide on using Cockpit in a toolbox? I have installed it in a toolbox but can’t start the service:

> sudo systemctl enable --now cockpit.socket
Failed to enable unit: Access denied

The Cockpit service will not work in a toolbox; it’s meant to run directly on the host system. The Cockpit client should work, and you can also use the Flatpak.
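For context on why `systemctl enable` fails there: a toolbox is a podman container, so there is no systemd instance inside it to manage units. A quick way to check whether a shell is running inside such a container is to look for the marker files the container runtimes create (a sketch; the `in_container` helper name is mine):

```shell
# Return success (0) when running inside a podman/toolbox or docker
# container, based on the marker files those runtimes create.
in_container() {
    [ -f /run/.containerenv ] || [ -f /.dockerenv ]
}

if in_container; then
    echo "inside a container: systemctl cannot manage host services here"
else
    echo "running on the host"
fi
```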

You can also run libvirt inside a container. Distrobox has instructions on this: https://github.com/89luca89/distrobox/blob/main/docs/posts/run_libvirt_in_distrobox.md

although I prefer to run podman directly via:

sudo podman run -d \
        --privileged \
        --net host \
        --hostname libvirt \
        --name libvirt \
        -v /dev:/dev \
        -v /var/lib/libvirt:/var/lib/libvirt \
        -v libvirt_etc:/etc/libvirt \

Also, if you install the GNOME Boxes Flatpak, it includes its own instance of libvirt.


You can also use podman machine to manage virtual machines with podman. See podman machine --help for further info.

Thanks a lot! Based on those docs, I started working on a more “pre-built”, ready-to-use image, but I haven’t reached something I’m satisfied with yet: https://github.com/travier/quay-containerfiles/tree/main/libvirtd

Here is a simple Containerfile I made based on AlmaLinux 8:

FROM docker.io/almalinux/8-init:latest
RUN dnf groupinstall --setopt install_weak_deps=0 --nodocs --assumeyes --allowerasing "Virtualization Host" && dnf clean all
RUN rm -v /etc/systemd/system/*.wants/*
RUN systemctl enable libvirtd sshd
RUN echo "Port 2222" | tee -a /etc/ssh/sshd_config
COPY authorized_keys /root/.ssh/authorized_keys

Here is one based on AlmaLinux 9. RHEL 9 dropped SPICE support, so the Copr repo adds it back in. See 2030592 – Keep QXL/SPICE support for RHEL 9:

FROM docker.io/almalinux/9-init:latest
RUN dnf install --assumeyes dnf-plugins-core && dnf copr enable --assumeyes ligenix/enterprise-qemu-spice
RUN dnf groupinstall --setopt install_weak_deps=0 --nodocs --assumeyes --allowerasing "Virtualization Host" && dnf clean all
RUN rm -v /etc/systemd/system/*.wants/*
RUN systemctl enable libvirtd sshd
RUN echo "Port 2222" | tee -a /etc/ssh/sshd_config
COPY authorized_keys /root/.ssh/authorized_keys

And here is one for Fedora Rawhide, in case I want to test the latest libvirt version for whatever reason:

FROM quay.io/fedora/fedora:rawhide

RUN dnf -y install systemd && dnf clean all && \
  (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ "$i" = systemd-tmpfiles-setup.service ] || rm -f "$i"; done); \
  rm -f /lib/systemd/system/multi-user.target.wants/*; \
  rm -f /etc/systemd/system/*.wants/*; \
  rm -f /lib/systemd/system/local-fs.target.wants/*; \
  rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
  rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
  rm -f /lib/systemd/system/basic.target.wants/*; \
  rm -f /lib/systemd/system/anaconda.target.wants/*;

RUN dnf install --setopt install_weak_deps=0 --nodocs --assumeyes --allowerasing qemu libvirt openssh-server netcat && dnf clean all
RUN ln -s /usr/bin/qemu-system-x86_64 /usr/libexec/qemu-kvm # backwards compatibility with Alma
RUN systemctl enable sshd
RUN echo "Port 2222" | tee -a /etc/ssh/sshd_config
COPY authorized_keys /root/.ssh/authorized_keys
CMD ["/usr/sbin/init"]
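The rm loop in the first RUN instruction above keeps only systemd-tmpfiles-setup.service in sysinit.target.wants and deletes everything else. The same filtering can be extracted as a standalone sketch (the `prune_wants` helper name is mine):

```shell
# Delete every unit link in a .wants directory except one allow-listed unit.
prune_wants() {
    dir="$1"
    keep="$2"
    for unit in "$dir"/*; do
        [ -e "$unit" ] || continue          # empty directory: glob did not match
        [ "$(basename "$unit")" = "$keep" ] || rm -f "$unit"
    done
}
```

This is the usual trick for making a full distribution image bootable as a container: strip the units that only make sense on real hardware, keep the bare minimum systemd needs.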

If you are only interested in running VMs for headless workloads, like servers, you will pull in far fewer junk dependencies by using the qemu-kvm-core package instead of qemu-kvm. The libvirt site has a decent explanation of how to get a slimmer libvirt install with fewer unnecessary dependencies, but it pretty much boils down to the list you are using.


This part seems problematic to me (I used to use virt-manager a lot, though these days I mostly use headless distrobox containers).

Why would either KVM or QEMU depend on a specific filesystem? That seems like a dependency bug.

The Using Cockpit to graphically manage systems, without installing Cockpit on them! post on Fedora Magazine likely answers why this did not work:

Question: I connected to a remote system that doesn’t have Cockpit installed, but I don’t see Virtual Machines or one of the other applications listed in the menu. I thought you just said these were included in the Cockpit Client Flatpak?

Answer: When you login to a remote system that doesn’t have Cockpit packages installed, you’ll only see the menu options for underlying functionality available on the remote system. For example, you’ll only see Virtual Machines in the Cockpit menu if the remote host has the libvirt-dbus package installed.

Will give this a try.

This works!

I’m still facing Empty (blank) page on launch. Console output on refresh: flatpak-spawn: Invalid byte sequence in conversion input · Issue #33 · flathub/org.cockpit_project.CockpitClient · GitHub, but building the latest Cockpit client from the upstream repo gives me a working Flatpak.

Hi @siosm, have you found a good way to use libvirt on Silverblue?
Seeing this page, I see why you want to avoid zfs-fuse…

OK, I’m getting closer to a working setup and I posted my containers and commands in https://github.com/travier/quay-containerfiles/tree/main/libvirtd.

I’m still having weird issues with these that I haven’t resolved yet:

  • The libvirtd daemon won’t start directly, only on the first connection.
  • The dbus-broker and systemd-journald services don’t start directly in the
    container.
  • DHCP does not work for virtual networks.
  • Passing the full /dev to the container (to share USB devices, for example)
    results in the following error on VM start:
    QEMU Driver error : failed to umount devfs on /dev: Device or resource busy

This certainly is complicated. I didn’t get DHCP working either, even for basic NAT. Have you found a solution?

I have not found a solution so far. I went back to layering for now.

Potential paths forward are:

  • looking at using systemd system extensions (systemd-sysext)
  • keeping the container and duplicating the units from the container to the system and turning them into real system units that call the program from the container via podman.
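The second idea, wrapping the containerized daemon in a real system unit, could look roughly like the following sketch. The unit name is hypothetical, and it assumes a container named `libvirt` created as in the podman run example earlier in the thread:

```ini
# /etc/systemd/system/libvirtd-container.service (hypothetical sketch)
[Unit]
Description=libvirtd running inside a podman container
After=network-online.target
Wants=network-online.target

[Service]
# Attach to the pre-created container named "libvirt"
ExecStart=/usr/bin/podman start --attach libvirt
ExecStop=/usr/bin/podman stop libvirt
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload`, this would let the container be enabled and started like any other host service, which is exactly the duplication of units described above.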

It might be less trouble to add the required libvirt XML configs to Boxes’ libvirt to get the network working. I don’t know; I have no real need for this, so I’ll sit back and wait to see if someone comes up with a working solution :smiley: . But I’m surprised how oversimplified (useless in my scenarios) Boxes is.

Something I’m wondering is: what is the Flatpak for GNOME Boxes doing that could be used to get a working solution for the non-GNOME variants?