Unable to create a new virt-manager VM with software TPM on Fedora 40

<deleted by author> (the user who had solved the issue by re-installing Fedora and updating has the issue again since rebooting for the first time after installing the virtualization packages; see the bug report)


Additionally, we now have an open bug report that you should follow daily and add relevant data to (especially if people there ask for data), in case today's update still does not solve your issue:

https://bugzilla.redhat.com/show_bug.cgi?id=2278123

The maintainer is, off the cuff, not able to reproduce the issue: they ask for the exact steps and tools the affected users use, from a default Fedora installation to installing tools/packages (including tools like virt-manager, cockpit, and whatever else could be linked to virtualization, and how you install/configure them), up to the point when the issue occurs. Please provide them with the related information: see the comment of 2024-05-01 12:04:56 UTC in the Bugzilla ticket BZ #2278123. You can use your normal Fedora account in Bugzilla: click Login and then Fedora Account System.

I cannot follow along for the rest of the week, so it's up to you to read and provide, and maybe test, what is necessary in collaboration with the maintainer. All the best!

The maintainer has provided an update that should solve the issue; it is now in testing. The F40 update has already passed all automated tests.

If you can, it would be appreciated if you tested whether the package solves your issue and let us know. This way we can respond early.

You can install the update swtpm-0.8.1-7.fc40 with:

sudo dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2024-f53eab6892
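As a sanity check after running that command, you can confirm that the testing build is the one actually installed (expected version taken from the advisory above):

```
# Confirm the updates-testing build is installed
rpm -q swtpm
# expected: swtpm-0.8.1-7.fc40.x86_64
```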

If you still experience the issue after the update, please elaborate on whether and how the problem has changed, and let us know the exact time you provoked the issue, along with a fresh sudo ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today output.

I installed the swtpm update, so cumulatively I am running:
selinux-policy.noarch 40.17-1.fc40 @updates
selinux-policy-targeted.noarch 40.17-1.fc40 @updates
swtpm.x86_64 0.8.1-7.fc40 @updates-testing

Originally I was seeing several AVC denials associated with trying to create a
virt-manager VM with swtpm. All of those errors are gone, but the
swtpm_setup stage still fails if SELinux is enforcing and succeeds if SELinux is disabled.

I’m really stumped as there are no AVC errors in journalctl, ausearch, or the audit logs.

What I am doing is simple:

  1. as a regular user (dave) I run virt-manager (GUI)
    which pops up an authenticator
  2. I click the “New VM” button, point to the Fedora 40 ISO, and accept
    all the defaults until the last screen where I click
    “customize configuration before install”
    and add hardware to add a TPM.
  3. When I click to start the installation I get an error message about
    swtpm_setup failing (attached).
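For the bug report, a rough CLI equivalent of these GUI steps can be expressed with virt-install; this is a sketch only (VM name, memory/disk sizes, ISO path, and the os-variant are assumptions based on values visible in the logs in this thread, and the --tpm suboption syntax can vary with the virt-install version):

```
# Hypothetical CLI reproduction of the GUI steps; names/paths are examples
virt-install \
  --name fedora-test \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom ~/Downloads/Fedora-Everything-netinst-x86_64-40-1.14.iso \
  --os-variant fedora40 \
  --tpm model=tpm-crb,backend.type=emulator,backend.version=2.0
```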


The attached files (virt-manager_error.txt, fedora-test-swtpm.log, journalctl.txt) were rejected, so I'm including them inline.

virt-manager error:

Unable to complete install: 'internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1; Check error log '/var/log/swtpm/libvirt/qemu/fedora-test-swtpm.log' for details.'

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 72, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createvm.py", line 2008, in _do_async_install
    installer.start_install(guest, meter=meter)
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 695, in start_install
    domain = self._create_guest(
             ^^^^^^^^^^^^^^^^^^^
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 637, in _create_guest
    domain = self.conn.createXML(initial_xml or final_xml, 0)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.12/site-packages/libvirt.py", line 4529, in createXML
    raise libvirtError('virDomainCreateXML() failed')
libvirt.libvirtError: internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1; Check error log '/var/log/swtpm/libvirt/qemu/fedora-test-swtpm.log' for details.

fedora-test-swtpm.log

Starting vTPM manufacturing as tss:tss @ Wed 01 May 2024 04:39:36 PM EDT
Bad filedescriptor 5 for UnixIO client control channel
swtpm process terminated unexpectedly.
Could not start the TPM 2.
An error occurred. Authoring the TPM state failed.
Error getting next filename: No such process
Ending vTPM manufacturing @ Wed 01 May 2024 04:39:36 PM EDT

journalctl:

May 01 16:38:27 fedora systemd[2607]: Started app-gnome-org.gnome.Terminal-4078.scope - Application launched by gnome-shell.
May 01 16:38:27 fedora systemd[2607]: Created slice app-org.gnome.Terminal.slice - Slice /app/org.gnome.Terminal.
May 01 16:38:27 fedora systemd[2607]: Starting gnome-terminal-server.service - GNOME Terminal Server...
May 01 16:38:27 fedora systemd[2607]: Started gnome-terminal-server.service - GNOME Terminal Server.
May 01 16:38:27 fedora systemd[2607]: Started vte-spawn-bbbb0ab6-c6e5-486a-b774-b59226cad3a8.scope - VTE child process 4095 launched by gnome-terminal-server process 4087.
May 01 16:38:31 fedora PackageKit[2020]: get-updates transaction /13716_bdecbabc from uid 1000 finished with success after 48ms
May 01 16:38:31 fedora PackageKit[2020]: get-updates transaction /13717_bdeeacec from uid 1000 finished with success after 47ms
May 01 16:38:43 fedora audit: BPF prog-id=90 op=LOAD
May 01 16:38:43 fedora systemd[1]: Starting fprintd.service - Fingerprint Authentication Daemon...
May 01 16:38:43 fedora systemd[1]: Started fprintd.service - Fingerprint Authentication Daemon.
May 01 16:38:43 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=fprintd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 01 16:38:51 fedora audit[4157]: USER_AUTH pid=4157 uid=1000 auid=1000 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:authentication grantors=pam_usertype,pam_localuser,pam_unix acct="dave" exe="/usr/lib/polkit-1/polkit-agent-helper-1" hostname=? addr=? terminal=? res=success'
May 01 16:38:51 fedora audit[4157]: USER_ACCT pid=4157 uid=1000 auid=1000 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="dave" exe="/usr/lib/polkit-1/polkit-agent-helper-1" hostname=? addr=? terminal=? res=success'
May 01 16:38:51 fedora polkitd[1526]: Operator of unix-session:2 successfully authenticated as unix-user:dave to gain TEMPORARY authorization for action org.libvirt.unix.manage for unix-process:4140:9703 [/usr/bin/python3 /usr/bin/virt-manager] (owned by unix-user:dave)
May 01 16:38:51 fedora kernel: Lockdown: systemd-logind: hibernation is restricted; see man kernel_lockdown.7
May 01 16:38:51 fedora kernel: Lockdown: systemd-logind: hibernation is restricted; see man kernel_lockdown.7
May 01 16:38:51 fedora kernel: Lockdown: systemd-logind: hibernation is restricted; see man kernel_lockdown.7
May 01 16:38:51 fedora systemd[1]: Listening on virtstoraged-admin.socket - libvirt storage daemon admin socket.
May 01 16:38:51 fedora systemd[1]: Listening on virtstoraged-ro.socket - libvirt storage daemon read-only socket.
May 01 16:38:51 fedora systemd[1]: Starting virtstoraged.service - libvirt storage daemon...
May 01 16:38:52 fedora systemd[1]: Started virtstoraged.service - libvirt storage daemon.
May 01 16:38:52 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=virtstoraged comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 01 16:38:52 fedora systemd[1]: Listening on virtnetworkd-admin.socket - libvirt network daemon admin socket.
May 01 16:38:52 fedora systemd[1]: Listening on virtnetworkd-ro.socket - libvirt network daemon read-only socket.
May 01 16:38:52 fedora systemd[1]: Starting virtnetworkd.service - libvirt network daemon...
May 01 16:38:52 fedora systemd[1]: Started virtnetworkd.service - libvirt network daemon.
May 01 16:38:52 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=virtnetworkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 01 16:38:52 fedora kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 01 16:38:52 fedora NetworkManager[1642]: <info> [1714595932.0975] manager: (virbr0): new Bridge device (/org/freedesktop/NetworkManager/Devices/3)
May 01 16:38:52 fedora audit[4242]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=4242 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4243]: NETFILTER_CFG table=filter:7 family=2 entries=2 op=nft_register_chain pid=4243 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4244]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=4244 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4245]: NETFILTER_CFG table=filter:9 family=2 entries=2 op=nft_register_chain pid=4245 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4246]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=4246 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4247]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=4247 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4248]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=4248 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4249]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=4249 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4250]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_chain pid=4250 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4251]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=4251 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4252]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=4252 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4253]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=4253 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4254]: NETFILTER_CFG table=mangle:18 family=2 entries=2 op=nft_register_chain pid=4254 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4255]: NETFILTER_CFG table=mangle:19 family=2 entries=2 op=nft_register_chain pid=4255 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4260]: NETFILTER_CFG table=filter:20 family=10 entries=2 op=nft_register_chain pid=4260 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[4261]: NETFILTER_CFG table=filter:21 family=10 entries=2 op=nft_register_chain pid=4261 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[4262]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=4262 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[4263]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=4263 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[4264]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=4264 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[4265]: NETFILTER_CFG table=filter:25 family=10 entries=2 op=nft_register_chain pid=4265 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[4266]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_chain pid=4266 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[4267]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=4267 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[4268]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_chain pid=4268 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[4269]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=4269 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[4270]: NETFILTER_CFG table=nat:30 family=10 entries=2 op=nft_register_chain pid=4270 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[4271]: NETFILTER_CFG table=nat:31 family=10 entries=2 op=nft_register_chain pid=4271 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[4272]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=4272 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[4273]: NETFILTER_CFG table=mangle:33 family=10 entries=2 op=nft_register_chain pid=4273 subj=system_u:system_r:iptables_t:s0 comm="ip6tables"
May 01 16:38:52 fedora audit[1627]: NETFILTER_CFG table=firewalld:34 family=1 entries=121 op=nft_register_chain pid=1627 subj=system_u:system_r:firewalld_t:s0 comm="firewalld"
May 01 16:38:52 fedora audit[4276]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_rule pid=4276 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4278]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=4278 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4279]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=4279 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4280]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=4280 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4281]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=4281 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4282]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=4282 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4283]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=4283 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4284]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=4284 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4285]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=4285 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4287]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=4287 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4288]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=4288 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4289]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=4289 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4290]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=4290 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4292]: NETFILTER_CFG table=nat:48 family=2 entries=1 op=nft_register_rule pid=4292 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4294]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_rule pid=4294 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4295]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_rule pid=4295 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4296]: NETFILTER_CFG table=nat:51 family=2 entries=1 op=nft_register_rule pid=4296 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4297]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_rule pid=4297 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora audit[4298]: NETFILTER_CFG table=mangle:53 family=2 entries=1 op=nft_register_rule pid=4298 subj=system_u:system_r:iptables_t:s0 comm="iptables"
May 01 16:38:52 fedora avahi-daemon[1521]: Joining mDNS multicast group on interface virbr0.IPv4 with address 192.168.124.1.
May 01 16:38:52 fedora avahi-daemon[1521]: New relevant interface virbr0.IPv4 for mDNS.
May 01 16:38:52 fedora avahi-daemon[1521]: Registering new address record for 192.168.124.1 on virbr0.IPv4.
May 01 16:38:52 fedora NetworkManager[1642]: <info> [1714595932.2116] device (virbr0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
May 01 16:38:52 fedora NetworkManager[1642]: <info> [1714595932.2118] device (virbr0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
May 01 16:38:52 fedora NetworkManager[1642]: <info> [1714595932.2121] device (virbr0): Activation: starting connection 'virbr0' (499fe407-84e4-4dd5-8429-b68c27cb13b9)
May 01 16:38:52 fedora NetworkManager[1642]: <info> [1714595932.2128] device (virbr0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
May 01 16:38:52 fedora NetworkManager[1642]: <info> [1714595932.2129] device (virbr0): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
May 01 16:38:52 fedora NetworkManager[1642]: <info> [1714595932.2130] device (virbr0): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
May 01 16:38:52 fedora NetworkManager[1642]: <info> [1714595932.2131] device (virbr0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
May 01 16:38:52 fedora systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service...
May 01 16:38:52 fedora dnsmasq[4316]: started, version 2.90 cachesize 150
May 01 16:38:52 fedora dnsmasq[4316]: compile time options: IPv6 GNU-getopt DBus no-UBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP conntrack ipset no-nftset auth cryptohash DNSSEC loop-detect inotify dumpfile
May 01 16:38:52 fedora dnsmasq-dhcp[4316]: DHCP, IP range 192.168.124.2 -- 192.168.124.254, lease time 1h
May 01 16:38:52 fedora dnsmasq-dhcp[4316]: DHCP, sockets bound exclusively to interface virbr0
May 01 16:38:52 fedora dnsmasq[4316]: reading /etc/resolv.conf
May 01 16:38:52 fedora dnsmasq[4316]: using nameserver 127.0.0.53#53
May 01 16:38:52 fedora dnsmasq[4316]: read /etc/hosts - 8 names
May 01 16:38:52 fedora dnsmasq[4316]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 names
May 01 16:38:52 fedora dnsmasq-dhcp[4316]: read /var/lib/libvirt/dnsmasq/default.hostsfile
May 01 16:38:52 fedora systemd[1]: Listening on virtnodedevd-admin.socket - libvirt nodedev daemon admin socket.
May 01 16:38:52 fedora systemd[1]: Listening on virtnodedevd-ro.socket - libvirt nodedev daemon read-only socket.
May 01 16:38:52 fedora systemd[1]: Starting virtnodedevd.service - libvirt nodedev daemon...
May 01 16:38:52 fedora systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service.
May 01 16:38:52 fedora NetworkManager[1642]: <info> [1714595932.2356] device (virbr0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
May 01 16:38:52 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 01 16:38:52 fedora NetworkManager[1642]: <info> [1714595932.2357] device (virbr0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
May 01 16:38:52 fedora NetworkManager[1642]: <info> [1714595932.2360] device (virbr0): Activation: successful, device activated.
May 01 16:38:52 fedora systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
May 01 16:38:52 fedora systemd[1]: Started virtnodedevd.service - libvirt nodedev daemon.
May 01 16:38:52 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=virtnodedevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 01 16:38:52 fedora virtnodedevd[4322]: libvirt version: 10.1.0, package: 1.fc40 (Fedora Project, 2024-03-01-18:35:13, )
May 01 16:38:52 fedora virtnodedevd[4322]: hostname: fedora
May 01 16:38:52 fedora virtnodedevd[4322]: operation failed: failed to read the PCI VPD data: invalid checksum
May 01 16:38:52 fedora virtnodedevd[4322]: operation failed: failed to read the PCI VPD data
May 01 16:38:52 fedora kernel: r8169 0000:0f:00.0: invalid VPD tag 0x00 (size 0) at offset 0; assume missing optional EEPROM
May 01 16:38:52 fedora virtnodedevd[4322]: operation failed: failed to read the PCI VPD data: invalid checksum
May 01 16:38:52 fedora virtnodedevd[4322]: operation failed: failed to read the PCI VPD data
May 01 16:38:52 fedora virtnodedevd[4322]: operation failed: failed to read the PCI VPD data: invalid checksum
May 01 16:38:52 fedora virtnodedevd[4322]: operation failed: failed to read the PCI VPD data: invalid checksum
May 01 16:38:52 fedora virtnodedevd[4322]: operation failed: failed to read the PCI VPD data
May 01 16:38:52 fedora virtnodedevd[4322]: operation failed: failed to read the PCI VPD data: invalid checksum
May 01 16:38:52 fedora virtnodedevd[4322]: operation failed: failed to read the PCI VPD data: invalid checksum
May 01 16:39:02 fedora systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
May 01 16:39:02 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 01 16:39:13 fedora systemd[1]: fprintd.service: Deactivated successfully.
May 01 16:39:13 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=fprintd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 01 16:39:13 fedora audit: BPF prog-id=90 op=UNLOAD
May 01 16:39:36 fedora audit[1805]: VIRT_MACHINE_ID pid=1805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm vm="fedora-test" uuid=88be6d02-c06f-4b4b-bee8-23e64ed7890c vm-ctx=system_u:system_r:svirt_t:s0:c332,c348 img-ctx=system_u:object_r:svirt_image_t:s0:c332,c348 model=selinux exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 16:39:36 fedora audit[1805]: VIRT_MACHINE_ID pid=1805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm vm="fedora-test" uuid=88be6d02-c06f-4b4b-bee8-23e64ed7890c vm-ctx=+107:+107 img-ctx=+107:+107 model=dac exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 16:39:36 fedora systemd[1]: Listening on virtlogd-admin.socket - libvirt logging daemon admin socket.
May 01 16:39:36 fedora audit: BPF prog-id=91 op=LOAD
May 01 16:39:36 fedora systemd[1]: Starting virtlogd.service - libvirt logging daemon...
May 01 16:39:36 fedora systemd[1]: Started virtlogd.service - libvirt logging daemon.
May 01 16:39:36 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=virtlogd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 01 16:39:36 fedora virtqemud[1805]: libvirt version: 10.1.0, package: 1.fc40 (Fedora Project, 2024-03-01-18:35:13, )
May 01 16:39:36 fedora virtqemud[1805]: hostname: fedora
May 01 16:39:36 fedora virtqemud[1805]: internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1; Check error log '/var/log/swtpm/libvirt/qemu/fedora-test-swtpm.log' for details.
May 01 16:39:36 fedora virtqemud[1805]: cannot resolve symlink /var/lib/libvirt/swtpm/88be6d02-c06f-4b4b-bee8-23e64ed7890c/tpm2: No such file or directory
May 01 16:39:36 fedora virtqemud[1805]: Unable to restore labels on TPM state and/or log file
May 01 16:39:36 fedora audit[1805]: VIRT_RESOURCE pid=1805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=disk reason=start vm="fedora-test" uuid=88be6d02-c06f-4b4b-bee8-23e64ed7890c old-disk="?" new-disk="/var/lib/libvirt/images/fedora-1.qcow2" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 16:39:36 fedora audit[1805]: VIRT_RESOURCE pid=1805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=disk reason=start vm="fedora-test" uuid=88be6d02-c06f-4b4b-bee8-23e64ed7890c old-disk="?" new-disk="/home/dave/Downloads/Fedora-Everything-netinst-x86_64-40-1.14.iso" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 16:39:36 fedora audit[1805]: VIRT_RESOURCE pid=1805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=net reason=start vm="fedora-test" uuid=88be6d02-c06f-4b4b-bee8-23e64ed7890c old-net="?" new-net="52:54:00:f6:bb:75" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 16:39:36 fedora audit[1805]: VIRT_RESOURCE pid=1805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=dev reason=start vm="fedora-test" uuid=88be6d02-c06f-4b4b-bee8-23e64ed7890c bus=usb device=555342207265646972646576 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 16:39:36 fedora audit[1805]: VIRT_RESOURCE pid=1805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=dev reason=start vm="fedora-test" uuid=88be6d02-c06f-4b4b-bee8-23e64ed7890c bus=usb device=555342207265646972646576 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 16:39:36 fedora audit[1805]: VIRT_RESOURCE pid=1805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=chardev reason=start vm="fedora-test" uuid=88be6d02-c06f-4b4b-bee8-23e64ed7890c old-chardev="?" new-chardev="/run/libvirt/qemu/channel/1-fedora-test/org.qemu.guest_agent.0" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 16:39:36 fedora audit[1805]: VIRT_RESOURCE pid=1805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=rng reason=start vm="fedora-test" uuid=88be6d02-c06f-4b4b-bee8-23e64ed7890c old-rng="?" new-rng="/dev/urandom" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 16:39:36 fedora audit[1805]: VIRT_RESOURCE pid=1805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=tpm-emulator reason=start vm="fedora-test" uuid=88be6d02-c06f-4b4b-bee8-23e64ed7890c device="/run/libvirt/qemu/swtpm/1-fedora-test-swtpm.sock" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 16:39:36 fedora audit[1805]: VIRT_RESOURCE pid=1805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=mem reason=start vm="fedora-test" uuid=88be6d02-c06f-4b4b-bee8-23e64ed7890c old-mem=0 new-mem=2097152 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 16:39:36 fedora audit[1805]: VIRT_RESOURCE pid=1805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=vcpu reason=start vm="fedora-test" uuid=88be6d02-c06f-4b4b-bee8-23e64ed7890c old-vcpu=0 new-vcpu=2 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 16:39:36 fedora audit[1805]: VIRT_CONTROL pid=1805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm op=start reason=booted vm="fedora-test" uuid=88be6d02-c06f-4b4b-bee8-23e64ed7890c vm-pid=0 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=failed'
May 01 16:39:48 fedora systemd[2607]: Starting grub-boot-success.service - Mark boot as successful...
May 01 16:39:48 fedora systemd[2607]: Finished grub-boot-success.service - Mark boot as successful.
May 01 16:41:43 fedora gnome-terminal-[4087]: void terminal_screen_shell_preexec(VteTerminal*): assertion '!priv->between_preexec_and_precmd' failed

@safforddr First of all, the above is not a full log. If SELinux makes the difference, and if the recent logs contain no denial, then it is likely that another denial that took place much earlier broke something that directly or indirectly causes the issue. Thus, I cannot help you without a full log. (Does your ausearch output of today really log nothing at all? Is the output empty for the whole time covered by the log?)

Second, just to ensure that the other bug is excluded in your case, please keep the new update that fixes the other bug (swtpm-0.8.1-7.fc40) installed. If you removed it, please install it again.

Then, after that, provide the FULL log of a dedicated boot, as I asked for above. Also, when providing the two outputs, please let me know the exact time you provoked the issue.

@bug2k24 Since you two are the remaining affected users, and since you two are the only ones who did not have rpm-virtqemud denials, I'll address you both together here. In order to move ahead and get some indication, please do the same as David. It would be helpful to have data from two users, especially to compare your two cases, since your denials are not identical either.

We will sort this out until no bug is left, but that needs data.

Another point, just suggested by the maintainer of swtpm: please do an autorelabel BEFORE implementing the above. It's worth a try in these circumstances.

Please ensure SELinux is enabled, then run sudo touch /.autorelabel and reboot. This may take a while!
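A minimal sketch of the full sequence (the relabel itself runs during the next boot):

```
# SELinux must not be disabled, or nothing gets relabeled
getenforce                 # should print Enforcing or Permissive

# Schedule a full filesystem relabel for the next boot
sudo touch /.autorelabel
sudo reboot
```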

@py0xc3 It seems the log file is too long to place inline here, so here's a link: https://pastebin.com/HhAMWDXC

Ausearch output? The exact time the issue was provoked? (See above.)

This can save me a lot of time, especially if I later compare the data of two users.

By the way, if anything has changed in the behavior/issue, please let me know.

Does the issue behave as follows?

→ if SELinux is enabled, everything works except creating swtpm VMs; normal VMs can be created and existing swtpm VMs can be started.
→ if SELinux is disabled, everything works.

Please confirm, or let me know if it is different or if there is other unintended behavior (I want to make sure we distinguish this bug from the comparable/similar bugs we have had in recent days).

(the same question for you @safforddr )

@py0xc3 Here are the last 2 attempts to start a VM with all updates installed, the result has not changed.

type=AVC msg=audit(02/05/24 13:15:44.965:264) : avc:  denied  { write } for  pid=5666 comm=swtpm path=/run/libvirt/qemu/swtpm/1-MyVM-swtpm.pid dev="tmpfs" ino=4095 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=file permissive=0 
----
type=AVC msg=audit(02/05/24 13:15:44.965:265) : avc:  denied  { write } for  pid=5666 comm=swtpm name=swtpm dev="tmpfs" ino=4092 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=dir permissive=0 
----
type=AVC msg=audit(02/05/24 14:15:35.523:292) : avc:  denied  { write } for  pid=9304 comm=swtpm path=/run/libvirt/qemu/swtpm/1-MyVM-swtpm.pid dev="tmpfs" ino=4198 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=file permissive=0 
----
type=AVC msg=audit(02/05/24 14:15:35.524:293) : avc:  denied  { write } for  pid=9304 comm=swtpm name=swtpm dev="tmpfs" ino=4196 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=dir permissive=0

Nothing has changed with any of the updates. I have not been trying to create any new VMs, just start/run existing ones. Disabling SELinux allows all VMs to start; with it enabled, the ones using a TPM will not start.

[edit] I went through various combinations of reverting the packages and starting other VMs earlier today; I have included only the previous two attempts above, as the others may cause confusion and I will not remember which versions were loaded at which timestamp. I have also done sudo touch /.autorelabel with no change in behaviour.
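As an aside, denials like the two above can be turned into a temporary local policy module with audit2allow (on Fedora it ships in policycoreutils-python-utils). This only masks the underlying labeling problem, so treat it as a diagnostic workaround and remove it once the real fix lands:

```
# Build and load a local module from today's swtpm denials (workaround only)
sudo ausearch -m avc -ts today -c swtpm | audit2allow -M local_swtpm
sudo semodule -i local_swtpm.pp

# Remove it again once a proper policy update is available
sudo semodule -r local_swtpm
```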

Just in case you did the autorelabeling before the most recent bug fix (this one) was applied: please do it again. Usually this should not be necessary, but in some cases, depending on how your system has been customized/configured/labeled in the past, an earlier change/manipulation can conflict with an update in a way that is not intuitively aligned. Given that we cannot reproduce your issue at the moment, there is a chance this is it. Please also keep a record of the exact time of the autorelabeling, just in case we need its logs later.

If autorelabeling AFTER the recent update does not work, I will skim your logs tomorrow, and unless I find something else, I will provide you with some guidance on how to create a new bug report and what it should contain.

So please try now. If it does not work, please provide all data that can help (as elaborated above). Any change in behavior and in the logs/denials between starting and creating machines can give a hint.

Thanks. That's indeed a difference from the other reports.

I had done the autorelabeling since installing the updates with no change in behaviour.

I tried to post a comprehensive update with the complete journal, and this list rejected it with the error message “Body limited to 32000 characters”. What is the correct way to post the entire journal (~300,000 characters)?
dave

Don’t

You can select a particular time period related to the error, or use journalctl options to select the relevant portions. We cannot effectively parse thousands of lines to find the relevant entries.

Narrow down the part being posted so we are not tasked with such an exhaustive search.
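For instance, journalctl can be limited to the current boot or to a time window around the failure (the timestamps below are placeholders):

```
# Current boot only, newest entries first
journalctl -b 0 -r

# Or a narrow window around the failure (placeholder timestamps)
journalctl --since "2024-05-01 16:38:00" --until "2024-05-01 16:41:00"
```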

Please be careful when commenting if you haven't read the topic. I explicitly asked the author to provide the full journal of a boot. I talked to the maintainer of the software yesterday, and we are currently not able to reproduce the exact behavior in either of the two cases here. There is a chance that the cause (which breaks something) occurs at some point during boot, or at least before the apparent error in the software occurs. There is an indication that an SELinux denial is the cause, and it could have taken place long before the actual error becomes apparent. We already had some cases during the SELinux confinement tests where many issues went back to SELinux denials that occurred at login but broke software throughout boot time. Alternatively, the boot log could reveal a difference between the user's system and our defaults and thus help us identify how to configure a system to reproduce the error. That's why we need it. Obviously, some desperation has arisen in recent days, given the many SELinux issues we have had :wink: Each time we solve one, we find out it's actually two (and thus another one remains) :smile:


@safforddr The easiest way is to put the file elsewhere and provide a link here, e.g., GitLab or GitHub. There are also many sites that let you simply upload a file and give you a link to the content. Don't forget it's two outputs :slight_smile: ausearch and journal (please use the exact commands I provided above for both), each covering a full boot (the ausearch output makes my life much easier for the initial comparison).
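On Fedora, the fpaste tool is one convenient option for this, assuming it is installed (sudo dnf install fpaste); it uploads stdin to a paste service and prints a shareable link:

```
# Upload the full journal of the current boot and get a link to share
sudo journalctl -b 0 | fpaste

# Same for the ausearch output
sudo ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today | fpaste
```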

While testing, I have been switching back and forth between enforcing
and disabled mode, and have been relabeling every time before turning
enforcing back on. Just to be sure, I touched /.autorelabel again, rebooted,
and watched while the system successfully relabeled.

Trying to use virt-manager to create a new VM with a swtpm still fails
in the swtpm_setup step if enforcing is on, but succeeds if SELinux
is disabled. I have no problem running an existing VM with a swtpm
with SELinux in enforcing mode.

I am running:
selinux-policy.noarch 40.17-1.fc40 @updates
selinux-policy-targeted.noarch 40.17-1.fc40 @updates
swtpm.x86_64 0.8.1-7.fc40 @updates-testing

Originally I was seeing several AVC denials associated with trying to create a
virt-manager VM with swtpm, and all of those errors are gone.

root@fedora:/home/dave/virt# ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today

I'm really stumped, as there are no AVC errors in journalctl, ausearch, or the audit logs, but the problem goes away if SELinux is disabled.

What I am doing is simple:

  1. as a regular user (dave) I run virt-manager (GUI)
    which pops up an authenticator
  2. I click the “New VM” button, point to the Fedora 40 ISO, and accept
    all the defaults until the last screen where I click
    “customize configuration before install”
    and add hardware to add a TPM.
  3. When I click to start the installation I get an error message about
    swtpm_setup failing (attached).

I’m linking to the full journal and logs this time.
Virt-manager is run at 09:38:00 in the log.
The swtpm_setup failure is at 09:38:49.

journalctl.txt

virt-manager_error.txt

swtpm.log

@safforddr Does this still apply to you with the most recent updates installed? That is, the issue is only with creating swtpm VMs with SELinux enabled, while using existing swtpm VMs works fine with SELinux enabled?


@bug2k24 In case your tests prove that, with the most recent updates, you still cannot create swtpm VMs with SELinux enabled (which means both creating and using swtpm VMs fail in your case), a sudo journalctl -r --boot=0 of a creation attempt would be nice, to compare the logs of the creation error with those of the error when using swtpm VMs (based on your remarks, I assume the log you already provided captures the error of using a swtpm VM). If creating swtpm VMs fails, an ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today output would be appreciated as well. If the issue occurs when creating swtpm VMs but without logged denials (as in David's case), or if there are multiple denials at multiple times, then I need to know the exact time of the occurrence so I know where to start looking.

Then we will decide if we make one or two bug reports tomorrow. I will compare a few things in the logs in the meantime.

@safforddr In addition to my last post: when checking your logs, I end up wondering whether your issue is related to swtpm at all. Can you please check whether the issues occur the same way if you create and use VMs without swtpm? I just noticed that this was never explicitly mentioned in your case.

If swtpm does not make a difference in your case, and if the issues around creating and using VMs are the same with and without swtpm, then this would add some clarity to your case (and also to the previous SELinux denial you had, which did not involve swtpm).

Please verify that with the most recent updates installed.

Also, please try sudo dnf reinstall swtpm* (with the * at the end), reboot, and try again (no new logs necessary; just check whether it solves the problem).


(So far, I tend to assume there are two different issues in the logs of you two.)

@bug2k24 In addition to my last two posts: you have a tainted kernel because you have external modules/drivers installed. Please unload all of them once for testing, then verify that all are unloaded with the command cat /proc/sys/kernel/tainted → the output of that command has to be 0 (which means untainted). If it is 0, please try again, see if that solves your issue, and report back (see the sketch after this post).

Of course, you could also try to reinstall swtpm with sudo dnf reinstall swtpm*, as I suggested to David. In your case I don't see a high chance that this helps, since you have no privilege/ownership issues in permanent storage, but it is nevertheless worth a quick try.
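A sketch of the taint check (the module name is hypothetical; note that some taint flags are sticky, so you may need to prevent the modules from loading and reboot before the value returns to 0):

```
# 0 means untainted; any other value encodes one or more taint flags
cat /proc/sys/kernel/tainted

# List loaded modules, then unload an out-of-tree one (example name)
lsmod
sudo modprobe -r some_oot_module   # hypothetical module name

# If the value stays non-zero, blacklist the modules and reboot, then re-check
```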

Creating a VM without a TPM works fine with enforcing on;
it fails only if the swtpm is added.
I can run existing VMs with or without swtpm fine.
I just tried dnf reinstall swtpm* (both with and without updates-testing enabled),
and there was no change.
dave