Unable to create new virt-manager vm with software TPM on Fedora 40

I have removed the module that caused the tainted flag; sudo cat /proc/sys/kernel/tainted now returns 0. This did not change the behaviour when trying to start a VM that uses TPM with SELinux enabled.
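(For reference, one rough way to see which loaded modules set taint flags is to check /proc/modules, which lists the flags in trailing parentheses, e.g. OE for an out-of-tree, unsigned module:)

# sketch: show only the modules that carry taint flags
grep '(' /proc/modules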

Here is a log of booting and attempting to create a new VM with TPM; this also fails:
https://pastebin.com/YXTXzU6S

The result of sudo ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today after trying to create a new vm is as follows:

----
type=AVC msg=audit(03/05/24 08:33:36.032:271) : avc:  denied  { create } for  pid=5582 comm=qemu-img anonclass=[io_uring] scontext=system_u:system_r:virtstoraged_t:s0 tcontext=system_u:object_r:io_uring_t:s0 tclass=anon_inode permissive=1 
----
type=AVC msg=audit(03/05/24 08:33:36.032:272) : avc:  denied  { map } for  pid=5582 comm=qemu-img path=anon_inode:[io_uring] dev="anon_inodefs" ino=37255 scontext=system_u:system_r:virtstoraged_t:s0 tcontext=system_u:object_r:io_uring_t:s0 tclass=anon_inode permissive=1 
----
type=AVC msg=audit(03/05/24 08:33:36.032:273) : avc:  denied  { read write } for  pid=5582 comm=qemu-img path=anon_inode:[io_uring] dev="anon_inodefs" ino=37255 scontext=system_u:system_r:virtstoraged_t:s0 tcontext=system_u:object_r:io_uring_t:s0 tclass=anon_inode permissive=1 
----
type=AVC msg=audit(03/05/24 08:33:36.223:276) : avc:  denied  { write } for  pid=5600 comm=swtpm_localca name=.lock.swtpm-localca dev="dm-1" ino=981544573 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:var_lib_t:s0 tclass=file permissive=1 
----
type=AVC msg=audit(03/05/24 08:33:36.258:277) : avc:  denied  { write } for  pid=5610 comm=swtpm path=/run/libvirt/qemu/swtpm/1-fedora39-swtpm.pid dev="tmpfs" ino=4024 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=file permissive=0 
----
type=AVC msg=audit(03/05/24 08:33:36.259:278) : avc:  denied  { write } for  pid=5610 comm=swtpm name=swtpm dev="tmpfs" ino=4021 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=dir permissive=0

sudo dnf reinstall swtpm* and rebooting did not change the behaviour.

Ok, we need two different bug reports. I think David's issue is more of a configuration issue that caused some ownership/labeling problems, but this should not occur. I am not sure whether that needs to be fixed upstream or whether you need to adjust some directories yourself, but the maintainer will know (a comparable report already exists for swtpm). I am quite sure the earlier qemu-img denial had nothing to do with the issue you experience; maybe it was a manifestation of one of the earlier issues that have already been fixed. First of all, make another update as of today (it contains some new packages) with dnf update --refresh, reboot, and check. If the issue remains, then:

@safforddr

Please file a bug report at Red Hat Bugzilla (you can log in there with your Fedora Account System account).

Please file it against the component "swtpm", read the template that appears quite thoroughly, and answer its questions precisely and explicitly. The less the maintainer has to find out themselves (and the more that is obvious at first glance), the higher the chance that your ticket gets quick attention.

Please ensure that the following information is contained:

→ please choose an expressive title: I suggest "Creating VMs not working IF swtpm is used AND IF SELinux is enabled; virt-manager is used; no avc denials logged; internal error: Could not run '/usr/bin/swtpm_setup'."
→ you have updated your system as of today with dnf update --refresh and your system includes swtpm-0.8.1-7.fc40
→ you experience the issue when CREATING VMs with tpm, but not when USING existing VMs with tpm (always!)
→ you do NOT experience the issue when SELinux is disabled (always!)
→ the output of ausearch is empty / no denials logged for the whole boot
→ please add a link in the bug report to this ask.fedora page (and also add the bug report link here!)
→ please mention that there is a swtpm GitHub ticket from 2022 that contains the same internal error: openSUSE: TPM 1.2 not working with libvirt/qemu, swtpm_setup exit status 1, no log created · Issue #284 · stefanberger/swtpm · GitHub (I don't think it is the same, but maybe it is a hint in some direction)
→ please add the log (or a link) to the internal error log (/var/log/swtpm/libvirt/qemu/fedora-swtpm.log)
→ please add a link to a new file, and include in this file the outputs of the following commands (and make clear which output belongs to which command; ideally, just copy the whole terminal with the commands and their outputs; see the copy-paste block after the journal extract below): ls -laZ /usr/lib/systemd/systemd , ls -laZ /usr/bin/swtpm_setup , ls -laZ /var/lib/libvirt/swtpm/2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c/tpm2 , ls -laZ /var/lib/libvirt/swtpm/2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c , ls -laZ /var/lib/libvirt/swtpm/ , ls -laZ /var/lib/libvirt/ (feel free to anonymize files, but ensure that the username in the files and journal remains consistent)
→ please add a link (not inline!) to the journalctl output you provided and shortly elaborate what has happened from your user perspective in this journal (so, what did you do and what resulted, including the tools used)
→ in addition to the journal link, please provide the following extract from the journal inline and make clear that it is an extract from the very journal attached:

May 02 09:38:49 fedora audit[4560]: VIRT_MACHINE_ID pid=4560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm vm="fedora" uuid=2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c vm-ctx=system_u:system_r:svirt_t:s0:c107,c635 img-ctx=system_u:object_r:svirt_image_t:s0:c107,c635 model=selinux exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 02 09:38:49 fedora audit[4560]: VIRT_MACHINE_ID pid=4560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm vm="fedora" uuid=2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c vm-ctx=+107:+107 img-ctx=+107:+107 model=dac exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 02 09:38:49 fedora systemd[1]: Listening on virtlogd-admin.socket - libvirt logging daemon admin socket.
May 02 09:38:49 fedora audit: BPF prog-id=98 op=LOAD
May 02 09:38:49 fedora systemd[1]: Starting virtlogd.service - libvirt logging daemon...
May 02 09:38:49 fedora systemd[1]: Started virtlogd.service - libvirt logging daemon.
May 02 09:38:49 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=virtlogd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 02 09:38:49 fedora virtqemud[4560]: libvirt version: 10.1.0, package: 1.fc40 (Fedora Project, 2024-03-01-18:35:13, )
May 02 09:38:49 fedora virtqemud[4560]: hostname: fedora
May 02 09:38:49 fedora virtqemud[4560]: internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1; Check error log '/var/log/swtpm/libvirt/qemu/fedora-swtpm.log' for details.
May 02 09:38:49 fedora virtqemud[4560]: cannot resolve symlink /var/lib/libvirt/swtpm/2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c/tpm2: No such file or directory
May 02 09:38:49 fedora virtqemud[4560]: Unable to restore labels on TPM state and/or log file
May 02 09:38:49 fedora audit[4560]: VIRT_RESOURCE pid=4560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=disk reason=start vm="fedora" uuid=2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c old-disk="?" new-disk="/var/lib/libvirt/images/fedora-1.qcow2" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 02 09:38:49 fedora audit[4560]: VIRT_RESOURCE pid=4560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=disk reason=start vm="fedora" uuid=2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c old-disk="?" new-disk="/home/dave/Downloads/Fedora-Everything-netinst-x86_64-40-1.14.iso" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 02 09:38:49 fedora audit[4560]: VIRT_RESOURCE pid=4560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=net reason=start vm="fedora" uuid=2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c old-net="?" new-net="52:54:00:9c:f9:27" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 02 09:38:49 fedora audit[4560]: VIRT_RESOURCE pid=4560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=dev reason=start vm="fedora" uuid=2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c bus=usb device=555342207265646972646576 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 02 09:38:49 fedora audit[4560]: VIRT_RESOURCE pid=4560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=dev reason=start vm="fedora" uuid=2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c bus=usb device=555342207265646972646576 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 02 09:38:49 fedora audit[4560]: VIRT_RESOURCE pid=4560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=chardev reason=start vm="fedora" uuid=2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c old-chardev="?" new-chardev="/run/libvirt/qemu/channel/1-fedora/org.qemu.guest_agent.0" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 02 09:38:49 fedora audit[4560]: VIRT_RESOURCE pid=4560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=rng reason=start vm="fedora" uuid=2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c old-rng="?" new-rng="/dev/urandom" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 02 09:38:49 fedora audit[4560]: VIRT_RESOURCE pid=4560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=tpm-emulator reason=start vm="fedora" uuid=2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c device="/run/libvirt/qemu/swtpm/1-fedora-swtpm.sock" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 02 09:38:49 fedora audit[4560]: VIRT_RESOURCE pid=4560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=mem reason=start vm="fedora" uuid=2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c old-mem=0 new-mem=2097152 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 02 09:38:49 fedora audit[4560]: VIRT_RESOURCE pid=4560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=vcpu reason=start vm="fedora" uuid=2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c old-vcpu=0 new-vcpu=2 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 02 09:38:49 fedora audit[4560]: VIRT_CONTROL pid=4560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm op=start reason=booted vm="fedora" uuid=2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c vm-pid=0 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=failed'
May 02 09:39:00 fedora gnome-shell[2895]: JS ERROR: TypeError: this.actor is null
                                          _syncEnabled@resource:///org/gnome/shell/ui/windowManager.js:145:25
                                          onStopped@resource:///org/gnome/shell/ui/windowManager.js:157:35
                                          _makeEaseCallback/<@resource:///org/gnome/shell/ui/environment.js:65:22
                                          _easeActorProperty/<@resource:///org/gnome/shell/ui/environment.js:232:60
                                          _destroyWindowDone@resource:///org/gnome/shell/ui/windowManager.js:1607:21
                                          onStopped@resource:///org/gnome/shell/ui/windowManager.js:1595:39
                                          _makeEaseCallback/<@resource:///org/gnome/shell/ui/environment.js:65:22
                                          _easeActor/<@resource:///org/gnome/shell/ui/environment.js:154:64
                                          @resource:///org/gnome/shell/ui/init.js:21:20
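For convenience, the ls -laZ commands from the checklist above as one copy-paste block (the UUID in the paths comes from David's journal; substitute your own VM's UUID):

ls -laZ /usr/lib/systemd/systemd
ls -laZ /usr/bin/swtpm_setup
ls -laZ /var/lib/libvirt/swtpm/2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c/tpm2
ls -laZ /var/lib/libvirt/swtpm/2e3c9e89-2ee1-4156-ae4b-b1e1dc1d163c
ls -laZ /var/lib/libvirt/swtpm/
ls -laZ /var/lib/libvirt/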

Please read all points I have written here and ensure that they apply to your situation, just in case I misunderstood something!

I am quite sure that the swtpm maintainer will have some questions for you and might ask for some further data / outputs / files. So keep watching the topic at least daily, or even more often, and provide what is asked for.


@bug2k24 I will shortly write something here for you about creating another bug report.

@bug2k24 First of all, make another update as of today (it contains some new packages) with dnf update --refresh, reboot, and check. If the issue remains, then:

Please also file a bug report at Red Hat Bugzilla (you can log in there with your Fedora Account System account).

A general suggestion in advance: create the new journals, logs, and AVC denial outputs with an untainted kernel, and only work with an untainted kernel in the bug report. External modules often cause problematic behavior because many vendors don't align and/or test their code sufficiently (and immature code in kernel space can reach out in unforeseeable directions). This can also cause logs to differ and thus make issues harder to reproduce (and we obviously also want to create incentives for vendors to add their code officially to the kernel with its massive testing and reviewing, to foster stability & security and enforce best practices). Thus, if you file a case with a tainted kernel, some maintainers may close or ignore the ticket because this is not supported (and if only tainted logs exist, it is logical to presume at first glance a link between the error and the external modules). In any case, untainted cases increase your chances of getting the issue solved.

So, if you file the bug, please ensure to include the following information:
→ please choose an expressive title: I suggest "Creating AND using VMs not working IF swtpm is used AND IF SELinux is enabled; virt-manager is used; creating and using swtpm VMs causes different AVC denial logs"
→ you cannot CREATE AND USE VMs with swtpm, but both work fine with VMs without swtpm (always!)
→ the issue only occurs IF SELinux is enabled (therefore: either swtpm or SELinux needs to be removed/disabled) (always!)
→ please create data for both cases (creating and using swtpm VMs while SELinux is enabled): add the AVC denials of both cases inline and add a link to the respective journal (I suggest providing full journals of dedicated boots; see the sketch after this list) - in all cases, the journals and AVC denials have to be linked to their respective case (using or creating swtpm VMs with SELinux enabled).
→ make clear that the AVC denials match, and thus indicate, the very time the respective issue (creating/using) was provoked; that is also an important indication when reviewing the journals
→ link this ask.fedora topic in the bug report and vice versa
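A minimal sketch of how to capture such data (assuming the failed attempt happened in the previous boot; adjust the -b index and file names):

# full journal of the boot in which the attempt failed
journalctl -b -1 > journal-swtpm-case.txt
# the matching AVC denials of the same occurrence
sudo ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today > avc-swtpm-case.txt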

Please read all points I have written here and ensure that they apply to your situation, just in case I misunderstood something!

I am quite sure that the swtpm maintainer will have some questions for you and might ask for some further data / outputs / files. So keep watching the topic at least daily, or even more often, and provide what is asked for.

In your case, I see the chance that the maintainer shifts the bug report to another component (and thus another maintainer), because I am not sure if swtpm is the cause; it is the maintainers of the two components who have to verify that. But swtpm is the most obvious denial in all cases you have logged, so this is where we start.

I might add something about the qemu-img denial later to the report, but for now, stick with the above.

Also, ensure that the journals and AVC denials you use are from the same occurrence (so that the AVC denial times match their entries in the journals).

I have submitted the bug here https://bugzilla.redhat.com/show_bug.cgi?id=2278905


Unfortunately, I am still seeing the issue after applying the update py0xc3 provided above. I am attempting to start up an existing VM that had been working prior to F40 (and for several days post F40). I attempted it several times, but these are the last 3 attempts:
sudo ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today

----
type=AVC msg=audit(05/03/2024 09:31:18.436:387) : avc:  denied  { create } for  pid=7038 comm=swtpm name=6-Eyre-swtpm.sock scontext=unconfined_u:unconfined_r:svirt_t:s0:c150,c949 tcontext=unconfined_u:object_r:user_tmp_t:s0 tclass=sock_file permissive=0 
----
type=AVC msg=audit(05/03/2024 09:59:23.626:388) : avc:  denied  { create } for  pid=10142 comm=swtpm name=7-Eyre-swtpm.sock scontext=unconfined_u:unconfined_r:svirt_t:s0:c599,c753 tcontext=unconfined_u:object_r:user_tmp_t:s0 tclass=sock_file permissive=0 
----
type=AVC msg=audit(05/03/2024 10:03:58.365:406) : avc:  denied  { create } for  pid=10939 comm=swtpm name=8-Eyre-swtpm.sock scontext=unconfined_u:unconfined_r:svirt_t:s0:c714,c798 tcontext=unconfined_u:object_r:user_tmp_t:s0 tclass=sock_file permissive=0 

For clarity, these lines were created when I attempted to "run" the QEMU/KVM User session for the VM named "Eyre".

Steps taken so far: I ran sudo dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2024-f53eab6892 and restarted my system.
Running it again, I get "Nothing to do. Complete!"

I also attempted to define a new user session VM using the same disk image, but ran into similar TPM errors. Is there a way to verify that the FEDORA-2024-f53eab6892 update is in place?

@targetedpanda the update is already in stable. This means you just have to do dnf update --refresh. However, I have moved your post here because what you experience could be a different issue than the one that was solved in the earlier update. At first glance, it looks like you have the same issue as bug2k24, but please allow me to ask a few things to verify.
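(To answer the verification question: the RPM database shows what is actually installed, regardless of which frontend or advisory delivered it. A sketch, assuming the advisory's build is swtpm-0.8.1-7.fc40:)

# the installed builds should match the advisory (assumed: 0.8.1-7.fc40)
rpm -q swtpm swtpm-selinux swtpm-tools
# or ask dnf about the advisory itself (--all also lists already-installed advisories)
dnf updateinfo info --all FEDORA-2024-f53eab6892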

I read your post as saying that your ausearch output comes from attempts to start existing swtpm VMs (let me know if I am wrong here). If starting existing swtpm VMs fails, have you already tried to create new swtpm VMs? I assume this also fails? Can you also create an ausearch output of an attempt to create a new swtpm VM?

Further, can you provide extracts from your journal of around 1 minute (30 seconds before and after the occurrence) for the two very cases (starting + creating)? The easiest way might be to get a rough day-extract from your journal with a command like journalctl --since "2024-05-01" --until "2024-05-02" ("since" and/or "until" can be used so that you don't need to fetch and work through the full journal with all contained days) and then go to the time index of the respective ausearch denials: the time index of an attempt to start an existing swtpm VM (you can use the above time index 10:09:23 if that one is from starting an existing swtpm VM), plus an extract from the time index of an attempt to create a swtpm VM, along with its ausearch output. In all cases, take the time index plus 30 seconds before and 30 seconds after.
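(For example, a 1-minute window around the 09:59:23 denial from your output above would be, as a sketch:)

journalctl --since "2024-05-03 09:58:53" --until "2024-05-03 09:59:53"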

I am quite sure the above bug report of bug2k24 also applies to you, but I would like to verify similarities/differences. That can be indicative.

It's ironic that system mode now works, albeit with some leftover denials even after complete relabeling, but session mode is broken.

I should probably create a temporary permissive policy until this mess is fixed and file another bug report.
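(For anyone needing the same stopgap, a sketch of the two usual options; both are temporary workarounds, not fixes:)

# option 1: make the denied source domain permissive (svirt_t in the session-mode denials above);
# undo later with 'semanage permissive -d svirt_t'
sudo semanage permissive -a svirt_t
# option 2: build a local policy module from the denials that were actually logged
sudo ausearch -m avc -ts today | audit2allow -M my_swtpm
sudo semodule -i my_swtpm.pp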


If starting existing swtpm VMs fails, have you already tried to create new swtpm VMs? I assume this also fails? Can you also create an ausearch output of an attempt to create a new swtpm VM?

Yes, I tried to create a new VM and got this ausearch output:
----
type=AVC msg=audit(05/03/2024 12:37:28.942:481) : avc:  denied  { create } for  pid=24281 comm=swtpm name=15-win11_3-swtpm.sock scontext=unconfined_u:unconfined_r:svirt_t:s0:c749,c1001 tcontext=unconfined_u:object_r:user_tmp_t:s0 tclass=sock_file permissive=0
For good measure, here are the stack details from the Virtual Machine Manager error popup when I attempt to create the VM:


Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 72, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createvm.py", line 2008, in _do_async_install
    installer.start_install(guest, meter=meter)
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 695, in start_install
    domain = self._create_guest(
             ^^^^^^^^^^^^^^^^^^^
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 637, in _create_guest
    domain = self.conn.createXML(initial_xml or final_xml, 0)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.12/site-packages/libvirt.py", line 4529, in createXML
    raise libvirtError('virDomainCreateXML() failed')
libvirt.libvirtError: operation failed: swtpm died and reported: 

journal output on VM start failure:

journal output on VM creation failure:

Thanks for digging into this!


@targetedpanda may I ask since when the issue has occurred in your case? Is it quite new? I am currently wondering whether the issue in session mode has been introduced by the updates that fixed the recently identified bug. Also, it would be nice to know what the most recent updates were before the issue occurred and when that was (you might check the files dnf.log, dnf.log.1, dnf.log.2 in /var/log/). Package names and times should be sufficient.

Have you experienced the issue in the past? That is, did your system mode fail before the recent updates while session mode worked, and since the recent updates vice versa?

If it is something like that, you might add a comment to the existing bug report. It is not yet clear whether this is swtpm or selinux-policy related and how the issues are interrelated (the outreach of SELinux policies is sometimes hard to foresee). So it might be helpful to already have a comment about that in the bug report, just so that the two maintainers (both are already involved in the ticket) have heard about it.

If the recent updates have broken user mode, and if you and ak thus now maybe experience a new issue that is the outcome of the recent update, then my assumption that ak has the same issue as bug2k24 could be wrong (or the post- and pre-update denials are both related and have a common cause not yet identified at all :smile: )

At least, we found a good way to spend our Friday evening :smiley:


Hey Chris, that's a good point. I actually upgraded to F40 a few days ago (2024-04-30T15:51:23-0700) and didn't exhibit issues until this morning when I attempted to start the VM.

Here are the morning's logs:

----
type=AVC msg=audit(05/03/2024 07:22:52.779:763) : avc:  denied  { connectto } for  pid=4609 comm=abrt-dump-journ path=/run/systemd/userdb/io.systemd.Home scontext=system_u:system_r:abrt_dump_oops_t:s0 tcontext=system_u:system_r:init_t:s0 tclass=unix_stream_socket permissive=0 
----
type=AVC msg=audit(05/03/2024 07:22:52.779:764) : avc:  denied  { connectto } for  pid=4609 comm=abrt-dump-journ path=/run/systemd/userdb/io.systemd.Machine scontext=system_u:system_r:abrt_dump_oops_t:s0 tcontext=system_u:system_r:systemd_machined_t:s0 tclass=unix_stream_socket permissive=0 
----
type=AVC msg=audit(05/03/2024 07:22:52.779:765) : avc:  denied  { connectto } for  pid=4609 comm=abrt-dump-journ path=/run/systemd/userdb/io.systemd.Home scontext=system_u:system_r:abrt_dump_oops_t:s0 tcontext=system_u:system_r:init_t:s0 tclass=unix_stream_socket permissive=0 
----
type=AVC msg=audit(05/03/2024 07:22:52.779:766) : avc:  denied  { connectto } for  pid=4609 comm=abrt-dump-journ path=/run/systemd/userdb/io.systemd.Machine scontext=system_u:system_r:abrt_dump_oops_t:s0 tcontext=system_u:system_r:systemd_machined_t:s0 tclass=unix_stream_socket permissive=0 
----
type=USER_AVC msg=audit(05/03/2024 07:39:58.680:250) : pid=1 uid=root auid=unset ses=unset subj=system_u:system_r:init_t:s0 msg='avc:  denied  { status } for auid=unset uid=root gid=root cmdline="" function="method_get_unit_file_state" scontext=system_u:system_r:snappy_t:s0 tcontext=system_u:system_r:init_t:s0 tclass=system permissive=1 exe=/usr/lib/systemd/systemd sauid=root hostname=? addr=? terminal=?' 
----
type=USER_AVC msg=audit(05/03/2024 08:21:26.594:381) : pid=1 uid=root auid=unset ses=unset subj=system_u:system_r:init_t:s0 msg='avc:  denied  { status } for auid=unset uid=root gid=root cmdline="" function="method_get_unit_file_state" scontext=system_u:system_r:snappy_t:s0 tcontext=system_u:system_r:init_t:s0 tclass=system permissive=1 exe=/usr/lib/systemd/systemd sauid=root hostname=? addr=? terminal=?' 
----
type=AVC msg=audit(05/03/2024 08:22:21.697:399) : avc:  denied  { create } for  pid=20802 comm=swtpm name=1-Tahoe-swtpm.sock scontext=unconfined_u:unconfined_r:svirt_t:s0:c671,c1004 tcontext=unconfined_u:object_r:user_tmp_t:s0 tclass=sock_file permissive=0 
----
type=AVC msg=audit(05/03/2024 08:22:27.418:400) : avc:  denied  { create } for  pid=20864 comm=swtpm name=2-Tahoe-swtpm.sock scontext=unconfined_u:unconfined_r:svirt_t:s0:c270,c531 tcontext=unconfined_u:object_r:user_tmp_t:s0 tclass=sock_file permissive=0 
----

This indicates my first swtpm issue was 8:22:21am PDT (Tahoe is my other named VM)

I did do a system upgrade step from Discover (I couldn't remember if I did that before or after the issue with the VM), but based on these lines from the dnf.log, I think it was before:

2024-05-03T07:55:50-0700 INFO --- logging initialized ---
2024-05-03T07:55:50-0700 DDEBUG timer: config: 37 ms
2024-05-03T07:55:50-0700 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, notify-packagekit, playground, repoclosure, repodiff, repograph, repomanage, reposync, system-upgrade
2024-05-03T07:55:50-0700 DEBUG DNF version: 4.19.2
2024-05-03T07:55:50-0700 DDEBUG Command: dnf-3 system-upgrade upgrade 
2024-05-03T07:55:50-0700 DDEBUG Installroot: /
2024-05-03T07:55:50-0700 DDEBUG Releasever: 40
2024-05-03T07:55:50-0700 DEBUG cachedir: /var/cache/dnf

7:55am: I did the system update (which would have presumably included the fix that went up in stable)
8:22am: was when I first tried my VMs and noticed the issue and started debugging the situation
9:09am: the next entry on the DNF log shows up when I ran another update command
2024-05-03T09:09:32-0700 DDEBUG Command: dnf update --refresh
9:13am: followed by another based on the bug report comments, which I was then tracking
2024-05-03T09:13:04-0700 DDEBUG Command: dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2024-f53eab6892

  • There were NO DNF log entries between 7:55 and 9:09am (which covered the time that the first VM start failures occurred).
  • I never used system mode for my VMs as I was never able to get them up (I assume this means the ā€œQEMU/KVMā€ connection type instead of the ā€œQEMU/KVM User Sessionā€) in Virtual Machine Manager (apologies, I donā€™t know the distinction, truly).

This leads me to believe that my system got the FEDORA-2024-f53eab6892 fix a few minutes BEFORE I started showing symptoms of the problem. I'll make a comment on the bug with this information; I just hope I'm capturing the right info and providing accurate analysis.

Thanks!

@targetedpanda can you do the following and let us know?

sudo cat /var/log/dnf.lo* | grep swtpm | grep 2024

and maybe also

sudo cat /var/log/dnf.lo* | grep 2024-05-03 | grep fc40 | grep grade

This gives us some clarity. If the assumption is right, the issue on your side was introduced by an update today (obviously I suspect swtpm-0.8.1-7.fc40).

@targetedpanda I just sent something in the bug report for you. I think the issue in your case is the nvidia driver that you installed just a few minutes before you experienced the issue for the first time.

The new swtpm package was installed several hours AFTER you started to experience the issue.

If trying without nvidia (ideally without any external modules/drivers) solves your issue, let us know here, to keep the bug report clean of posts that (if that is true) do not belong there.
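(A sketch of one way to boot once without the nvidia modules, assuming the RPM Fusion akmod/kmod packaging: press 'e' at the GRUB menu and append the following to the linux line, which applies to that boot only:)

# temporary kernel command line additions for a single boot without nvidia
modprobe.blacklist=nvidia,nvidia_drm,nvidia_modeset,nvidia_uvm rd.driver.blacklist=nvidia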

svc@alecto:~$ sudo cat /var/log/dnf.lo* | grep swtpm | grep 2024
[sudo] password for svc: 
2024-05-03T11:18:35-0700 DDEBUG Command: dnf reinstall swtpm* 
2024-05-03T11:18:35-0700 DDEBUG Extra commands: ['reinstall', 'swtpm*']
2024-05-03T11:18:36-0700 DEBUG ---> Package swtpm.x86_64 0.8.1-7.fc40 will be reinstalled
2024-05-03T11:18:36-0700 DEBUG ---> Package swtpm-libs.x86_64 0.8.1-7.fc40 will be reinstalled
2024-05-03T11:18:36-0700 DEBUG ---> Package swtpm-selinux.noarch 0.8.1-7.fc40 will be reinstalled
2024-05-03T11:18:36-0700 DEBUG ---> Package swtpm-tools.x86_64 0.8.1-7.fc40 will be reinstalled
2024-05-03T11:19:08-0700 DEBUG Reinstalled: swtpm-0.8.1-7.fc40.x86_64
2024-05-03T11:19:08-0700 DEBUG Reinstalled: swtpm-libs-0.8.1-7.fc40.x86_64
2024-05-03T11:19:08-0700 DEBUG Reinstalled: swtpm-selinux-0.8.1-7.fc40.noarch
2024-05-03T11:19:08-0700 DEBUG Reinstalled: swtpm-tools-0.8.1-7.fc40.x86_64
2024-05-03T11:19:08-0700 DDEBUG /var/cache/dnf/updates-02a32a5ce99e20ab/packages/swtpm-0.8.1-7.fc40.x86_64.rpm removed
2024-05-03T11:19:08-0700 DDEBUG /var/cache/dnf/updates-02a32a5ce99e20ab/packages/swtpm-tools-0.8.1-7.fc40.x86_64.rpm removed
2024-05-03T11:19:08-0700 DDEBUG /var/cache/dnf/updates-02a32a5ce99e20ab/packages/swtpm-selinux-0.8.1-7.fc40.noarch.rpm removed
2024-05-03T11:19:08-0700 DDEBUG /var/cache/dnf/updates-02a32a5ce99e20ab/packages/swtpm-libs-0.8.1-7.fc40.x86_64.rpm removed

and the 2nd example didn't return anything, so I made it super vague (just looking for fc40)

svc@alecto:~$ sudo cat /var/log/dnf.lo* | grep fc40
2024-04-30T15:51:23-0700 DDEBUG Command: dnf -y install --nogpgcheck --disablerepo=* /tmp/akmods.eSf5s2tb/results/kmod-nvidia-6.8.7-300.fc40.x86_64-550.76-1.fc40.x86_64.rpm 
2024-04-30T15:51:23-0700 DDEBUG Extra commands: ['-y', 'install', '--nogpgcheck', '--disablerepo=*', '/tmp/akmods.eSf5s2tb/results/kmod-nvidia-6.8.7-300.fc40.x86_64-550.76-1.fc40.x86_64.rpm']
2024-04-30T15:51:24-0700 DEBUG ---> Package kmod-nvidia-6.8.7-300.fc40.x86_64.x86_64 3:550.76-1.fc40 will be installed
 kmod-nvidia-6.8.7-300.fc40.x86_64 x86_64 3:550.76-1.fc40    @commandline  40 M
2024-04-30T15:51:35-0700 DEBUG Installed: kmod-nvidia-6.8.7-300.fc40.x86_64-3:550.76-1.fc40.x86_64
2024-05-03T08:01:14-0700 DDEBUG Command: dnf -y install --nogpgcheck --disablerepo=* /tmp/akmods.CtU4vaSW/results/kmod-nvidia-6.8.8-300.fc40.x86_64-550.78-1.fc40.x86_64.rpm 
2024-05-03T08:01:14-0700 DDEBUG Extra commands: ['-y', 'install', '--nogpgcheck', '--disablerepo=*', '/tmp/akmods.CtU4vaSW/results/kmod-nvidia-6.8.8-300.fc40.x86_64-550.78-1.fc40.x86_64.rpm']
2024-05-03T08:01:16-0700 DEBUG ---> Package kmod-nvidia-6.8.8-300.fc40.x86_64.x86_64 3:550.78-1.fc40 will be installed
 kmod-nvidia-6.8.8-300.fc40.x86_64 x86_64 3:550.78-1.fc40    @commandline  41 M
2024-05-03T08:01:27-0700 DEBUG Installed: kmod-nvidia-6.8.8-300.fc40.x86_64-3:550.78-1.fc40.x86_64
2024-05-03T11:18:36-0700 DEBUG ---> Package swtpm.x86_64 0.8.1-7.fc40 will be reinstalled
2024-05-03T11:18:36-0700 DEBUG ---> Package swtpm-libs.x86_64 0.8.1-7.fc40 will be reinstalled
2024-05-03T11:18:36-0700 DEBUG ---> Package swtpm-selinux.noarch 0.8.1-7.fc40 will be reinstalled
2024-05-03T11:18:36-0700 DEBUG ---> Package swtpm-tools.x86_64 0.8.1-7.fc40 will be reinstalled
 swtpm                              x86_64                      0.8.1-7.fc40                         updates                       29 k
 swtpm-libs                         x86_64                      0.8.1-7.fc40                         updates                       50 k
 swtpm-selinux                      noarch                      0.8.1-7.fc40                         updates                       21 k
 swtpm-tools                        x86_64                      0.8.1-7.fc40                         updates                      111 k
2024-05-03T11:19:08-0700 DEBUG Reinstalled: swtpm-0.8.1-7.fc40.x86_64
2024-05-03T11:19:08-0700 DEBUG Reinstalled: swtpm-libs-0.8.1-7.fc40.x86_64
2024-05-03T11:19:08-0700 DEBUG Reinstalled: swtpm-selinux-0.8.1-7.fc40.noarch
2024-05-03T11:19:08-0700 DEBUG Reinstalled: swtpm-tools-0.8.1-7.fc40.x86_64
2024-05-03T11:19:08-0700 DDEBUG /var/cache/dnf/updates-02a32a5ce99e20ab/packages/swtpm-0.8.1-7.fc40.x86_64.rpm removed
2024-05-03T11:19:08-0700 DDEBUG /var/cache/dnf/updates-02a32a5ce99e20ab/packages/swtpm-tools-0.8.1-7.fc40.x86_64.rpm removed
2024-05-03T11:19:08-0700 DDEBUG /var/cache/dnf/updates-02a32a5ce99e20ab/packages/swtpm-selinux-0.8.1-7.fc40.noarch.rpm removed
2024-05-03T11:19:08-0700 DDEBUG /var/cache/dnf/updates-02a32a5ce99e20ab/packages/swtpm-libs-0.8.1-7.fc40.x86_64.rpm removed

Note that I did a forced reinstall of swtpm AFTER I noticed the VM issue, which did not help (which you see at 11:18 in the 2nd output above)

@targetedpanda I think this seeming "reinstall" was actually the installation, because the package was not listed anywhere in the dnf logs before.

The link you have used that said "nothing to install" did so because the testing was already finished: the package was already stable at this time, so the testing link no longer referred to that package.

I think you updated it accidentally yourself when you wanted to reinstall it :slight_smile: That leads to the assumption that "reinstall" can lead to an update. (Give me a moment, I'll create a toolbox to test it; I'm curious now.)
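(Roughly what I have in mind, as a sketch; a throwaway toolbox container keeps the experiment off the host:)

toolbox create f40-test
toolbox enter f40-test
# inside the toolbox: install an older swtpm build first (e.g. from Koji), then
# check whether 'reinstall' keeps that build or pulls the newer one from updates
sudo dnf reinstall swtpm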

I'm not sure... the nvidia driver shouldn't have caused an swtpm error, but I'm happy to try to eliminate it. I attempted the sudo dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2024-f53eab6892 prior to forcing the reinstall of swtpm, but those upgrade commands said there was nothing to update, which I read as meaning it had already been installed/updated. The nvidia driver was updated on 4/30 as well as this morning, at the same time as the system update.

svc@alecto:~$ sudo cat /var/log/dnf.log | grep Command | grep nvidia
2024-04-30T14:21:19-0700 DDEBUG Command: dnf -y install --nogpgcheck --disablerepo=* /tmp/akmods.9bqQdHHV/results/kmod-nvidia-6.8.7-200.fc39.x86_64-550.76-1.fc39.x86_64.rpm 
2024-04-30T14:22:41-0700 DDEBUG Command: dnf -y install --nogpgcheck --disablerepo=* /tmp/akmods.NCRohSE7/results/kmod-nvidia-6.8.6-200.fc39.x86_64-550.76-1.fc39.x86_64.rpm 
2024-04-30T14:38:58-0700 DDEBUG Command: dnf install -y gcc kernel-headers kernel-devel akmod-nvidia xorg-x11-drv-nvidia xorg-x11-drv-nvidia-libs 
2024-04-30T14:39:18-0700 DDEBUG Command: dnf install -y xorg-x11-drv-nvidia-cuda 
2024-04-30T14:39:31-0700 DDEBUG Command: dnf install -y xorg-x11-drv-nvidia-cuda-libs 
2024-04-30T15:51:23-0700 DDEBUG Command: dnf -y install --nogpgcheck --disablerepo=* /tmp/akmods.eSf5s2tb/results/kmod-nvidia-6.8.7-300.fc40.x86_64-550.76-1.fc40.x86_64.rpm 
2024-05-03T08:01:14-0700 DDEBUG Command: dnf -y install --nogpgcheck --disablerepo=* /tmp/akmods.CtU4vaSW/results/kmod-nvidia-6.8.8-300.fc40.x86_64-550.78-1.fc40.x86_64.rpm

Edit: I see your point about the "nothing to install" since it had been moved to stable. I'm trying the restorecon command from the other thread now and trying again.

At least in my toolbox the behavior is not equivalent; still, reinstalling and then updating works. However, your dnf logs do not contain any entry for swtpm before the reinstall after 11:00, and given that the update did not exist at all when these logs began, I tend to exclude it from the list of possibilities.

At that time, it was already stable. Thus, this testing command no longer worked.

Well, it would not be my first guess either. However, when foreign code is added to the kernel, the outreach cannot be foreseen. I have seen bugs in a wifi driver's discovery code cause errors in the graphics card. The relations in the kernel are not always comprehensible because their logic works completely differently from ours. That is one of the reasons why we have so many issues with foreign drivers: a lack of testing and of consideration of the actual production environment.

In the end, you installed only one package immediately before you experienced the issue for the first time, and that was nvidia. A kernel module can have such outreaches, even if it was not my first suspect. It is the timing that makes it my prime suspect at the moment:

before 08:01 - all good
08:01 - only nvidia driver update
08:22 - first occurrence.

Absolutely!

Ah, okay! I thought the system-upgrade at 7:55 was doing something but wasn't showing in the logs, but maybe that's not true. Looking back at activity on 4/30, I saw that it just started showing Fedora Linux 40 and didn't have the huge list of things that changed, so I thought that was typical logging behavior. But I see now that that information was actually cut off by the log, so it's in dnf.log.1 instead... Apologies, pretty new to Linux and still getting the hang of things like this.

I'm looking to restart in recovery mode to disable my nvidia driver (though I'm worried this will break something else with running the hypervisor).

Side note: should I not be using "User session" mode? The problem I had with loading my qcow2 image in the main mode was that it didn't find a bootable disk, but that wasn't a problem when I created the VM with User session.

There are multiple dnf log files. That's why I suggested commands with "dnf.lo*", with the * to ensure it includes all files, such as dnf.log, dnf.log.1, dnf.log.2, and so on. Your dnf.log begins on 30.4.; everything before is in earlier dnf.log.* files. Yet, files from before 30.4. should not be relevant anyway.

However, when skimming your logs, I somehow have the feeling that updates are done by some other tool that does not document itself in the dnf logs. I do not use GNOME (I assume you do? Fedora Workstation?), so I don't know whether its graphical update tool fails to create dnf logs for some reason (my expectation was that it is just a front end for dnf and thus creates logs). Maybe someone else knows off the cuff (?)

It indeed looks to me as if dnf is usually only used for the nvidia updates on your system, and nothing more.

I assume you usually use some other tool to do updates? Which one is it? Is it the default on your Fedora variant? Whatever it is, you could try to find out whether it creates its own logs and where it stores them, in order to then check if and when it installed swtpm-0.8.1-7.fc40. Feel free to let me know, I am curious. And it is relevant information, because knowing whether swtpm could be the origin helps to get a better understanding of the overall issue we currently have.
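(Independent of any frontend's own logs, the RPM database records when a package was installed; a sketch:)

# when was the currently installed swtpm build installed, regardless of the frontend used?
# note: a reinstall refreshes this timestamp as well
rpm -q swtpm --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH} installed on %{INSTALLTIME:date}\n'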

Btw, there is a question for you in the bug report.

Yep, you're right, I used a different tool. I am on KDE Plasma and did the system updates from the Discover UI app, not via the CLI. The grep on those logs did include the log wildcard, and I found it odd that swtpm wasn't in anything except when I force-reinstalled it this morning after this stuff started. I think when I install system updates from Discover, it just doesn't log anywhere that I can find or learn about. Now I know; I should probably just use dnf from now on...

Taking a peek at the bug report.

I don't know how big your file system is, but you can try the following commands to find out whether the term "swtpm-0.8.1-7.fc40" is mentioned anywhere. Be aware that this can take a while depending on how much you have stored: it searches the whole content of every file.

I would limit it to /home and /var; I do not see how a related log could end up anywhere else:

grep -Rnw '/home/' -e 'swtpm-0.8.1-7.fc40'
grep -Rnw '/var/' -e 'swtpm-0.8.1-7.fc40'

Feel free to further limit the search paths. As I said, that opens and searches through everything.
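(For example, restricting the search to the usual log locations is much faster; a sketch:)

# check only the typical log directory instead of all of /var
sudo grep -Rnw /var/log -e 'swtpm-0.8.1-7.fc40'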

In any case, you should at least get the one hit from the reinstallation you did after 11:00.