There is already a topic about the same problem, but my case is different because I get this issue on a freshly installed, fully updated Fedora 40 (I ran dnf update --refresh and rebooted before creating a VM).
Creating a new VM with an emulated TPM using Virtual Machine Manager fails with this error message:
virt-manager error
Unable to complete install: 'internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1; Check error log '/var/log/swtpm/libvirt/qemu/win11-swtpm.log' for details.'
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 72, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/createvm.py", line 2008, in _do_async_install
installer.start_install(guest, meter=meter)
File "/usr/share/virt-manager/virtinst/install/installer.py", line 695, in start_install
domain = self._create_guest(
^^^^^^^^^^^^^^^^^^^
File "/usr/share/virt-manager/virtinst/install/installer.py", line 637, in _create_guest
domain = self.conn.createXML(initial_xml or final_xml, 0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.12/site-packages/libvirt.py", line 4529, in createXML
raise libvirtError('virDomainCreateXML() failed')
libvirt.libvirtError: internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1; Check error log '/var/log/swtpm/libvirt/qemu/win11-swtpm.log' for details.
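In case it helps with reproduction, a roughly equivalent way to trigger the same swtpm_setup code path from the command line should be something like the sketch below; the VM name, resources, ISO path, and the --osinfo value are placeholders rather than my exact setup:

# hypothetical CLI counterpart of the virt-manager wizard; adjust values to your own setup
sudo virt-install \
  --name win11-tpm-test \
  --memory 8192 --vcpus 4 \
  --disk size=64 \
  --cdrom /path/to/win11.iso \
  --osinfo win11 \
  --boot uefi \
  --tpm emulator
# --tpm emulator requests an emulated TPM, which makes libvirt invoke /usr/bin/swtpm_setup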
Since the original author of the other topic is back, I would give them some time to provide the requested data. I am curious how the logs from the two of you compare over a period of several seconds around the issue.
Just to make sure I understand you correctly: you can use swtpm VMs while SELinux is enabled, just like the other user, but you cannot create them while SELinux is enabled. However, when SELinux is disabled, creating swtpm VMs works fine, too. Further, creating a normal VM without swtpm works fine even when SELinux is enabled. Right?
Yes, everything is correct: I can create a VM without a TPM just fine with SELinux enforcing, but creating a VM with an attached TPM works only when SELinux is off.
I would wait a little for the other user and compare, but I expect this issue will result in another bug report. I wonder whether the user in the other topic has only the one swtpm denial, or whether there are more that are logged a moment later, or maybe even earlier. In the confined-user SIG tests I have seen errors whose cause was a denial that occurred much earlier, during the start of the GUI. I don't think that is the case here, but looking at the entries over a span of several seconds can be indicative. So let's wait a day or two, but my guess at this point is that we have discovered another policy issue, albeit one related to the previous one.
Another question in advance: can you report any change in logs or behavior after today’s update?
Btw, it would be appreciated if you could provide the same data (from a dedicated boot) for comparison that I asked the other user for here.
As for changes in behavior before and after the update, I can't really tell: I reinstalled the system just today and updated it right away, before even installing the virtualization tools. After I installed them and encountered the problem with SELinux, the next run of sudo dnf update --refresh said that the system was already up to date (but I rebooted it anyway).
I tend to wait for the other user's data, but after a short skim, my guess is that the issue is linked to swtpm-selinux, i.e., the SELinux policy related to swtpm. Maybe it is comparable to the other issue and can be solved in a comparable way. Unless other indications come up before then, I will write you some points by Thursday evening to consolidate into a bug report.
General info: there are currently three open topics that likely cover the same problem origin, although the denials are not identical but overlapping (the differences in the denials can be explained by, e.g., differences in the configuration of hosts or guests):
I am not merging them at this time since we do not know 100% that they are identical. SELinux policy issues are complex (but I am confident at this point that a policy fix will solve all three).
Everyone who is affected by the issue is encouraged to provide the full output of ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today after provoking the issue, so that we can compare the denials, and to add a short description of their specific case. Please compare your ausearch output with the denials in the respective topics and post it in the relevant one! Unless other information comes up, we will open a bug ticket later today or tomorrow in this one (no worries, we will link the ticket in all topics).
I expect the issue is a policy problem in swtpm-selinux. That can be solved with an update once the maintainers have untangled and fixed it.
@kparal it is up to you whether you already consider this a common issue. I don't know how widespread swtpm actually is on Fedora.
Today I made yet another fresh install of F40, and now it works out of the box!
There are a couple of things to note:
This time I used RAID1 BTRFS vs. a single-device volume yesterday.
The previous install had only two BTRFS subvolumes: / and /home. Today I also created subvolumes for /var/lib/libvirt/images and /srv.
I think the number of updated packages right after the installation was higher: 707 vs. 704 yesterday (give or take a few). Could it be that the mirror automatically selected for my system simply lacked the hotfix yesterday?
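If it helps narrow this down, one way to compare what the two installs actually ended up with might be to look at the dnf transaction history and the resulting package versions (the transaction ID below is a placeholder):

sudo dnf history list                        # find the ID of the big post-install update transaction
sudo dnf history info <transaction-id>       # lists the exact package versions that transaction brought in
rpm -q swtpm swtpm-selinux selinux-policy    # versions currently installed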
I added to the report that another installation solved your issue. Feel free to add further data there if I missed something.
Still, I would ask you to verify all “cases” and check whether everything really works permanently and reliably now. If so, then yes, you can consider the topic solved and mark your solution. Thanks btw for providing so much information: that also helps the other users who have this issue.
I will ask the users in the other topics to also update with --refresh and then see whether an update made the difference, which I consider the most likely explanation given the differences you describe.
However, when you test whether it now really works fine, can you then capture another ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today after these tests, and additionally the output of sudo journalctl -r covering the time period of your tests plus 15 seconds before and after? That would be interesting even if it works now (which I really hope).
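For example, something like this (the timestamps are placeholders; substitute your own test window plus roughly 15 seconds on each side):

sudo ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today
sudo journalctl -r --since "2024-05-01 12:07:30" --until "2024-05-01 12:09:15"   # placeholder times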
All users who experience this issue: please run sudo dnf update --refresh once again, install all updates, and then reboot. Then check whether the problem still occurs. There is a chance that an update pushed to stable yesterday or today has solved the issue.
Feel free to mention in the bug report if another update does not solve the issue for you.
May 01 12:08:56 lab systemd[2423]: Started app-org.kde.konsole-dbf8cc124dc645d6ba8892d16154b572.scope - Konsole - Terminal.
May 01 12:08:01 lab kwin_x11[2943]: kwin_core: XCB error: 3 (BadWindow), sequence: 23641, resource id: 98566175, major code: 129 (SHAPE), minor code: 6 (Input)
May 01 12:08:01 lab kwin_x11[2943]: kwin_core: XCB error: 152 (BadDamage), sequence: 23640, resource id: 20972857, major code: 143 (DAMAGE), minor code: 2 (Destroy)
May 01 12:07:54 lab (pcscd)[4263]: pcscd.service: Referenced but unset environment variable evaluates to an empty string: PCSCD_ARGS
May 01 12:07:54 lab audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pcscd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 01 12:07:54 lab systemd[1]: Started pcscd.service - PC/SC Smart Card Daemon.
May 01 12:07:54 lab plasmashell[4091]: DRM kernel driver 'nvidia-drm' in use. NVK requires nouveau.
May 01 12:07:54 lab plasmashell[4091]: [4092:4092:0501/120754.187360:ERROR:chrome_browser_cloud_management_controller.cc(161)] Cloud management controller initialization aborted as CBCM is not enabled. Please use the `--enable-chrome-browser-cloud-management` command line flag to enable it if you are not using the official Google Chrome build.
May 01 12:07:54 lab systemd[1]: bluetooth.service - Bluetooth service was skipped because of an unmet condition check (ConditionPathIsDirectory=/sys/class/bluetooth).
May 01 12:07:53 lab systemd[2423]: Started app-brave\x2dbrowser-7f50e172cb5e43da82ed959700d86991.scope - Brave Web Browser - Web Browser.
May 01 12:07:47 lab audit[2205]: VIRT_CONTROL pid=2205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm op=start reason=booted vm="win11" uuid=c89e68cc-54b4-427d-8027-6158d2919a3b vm-pid=0 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=failed'
May 01 12:07:47 lab audit[2205]: VIRT_RESOURCE pid=2205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=vcpu reason=start vm="win11" uuid=c89e68cc-54b4-427d-8027-6158d2919a3b old-vcpu=0 new-vcpu=4 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 12:07:47 lab audit[2205]: VIRT_RESOURCE pid=2205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=mem reason=start vm="win11" uuid=c89e68cc-54b4-427d-8027-6158d2919a3b old-mem=0 new-mem=8388608 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 12:07:47 lab audit[2205]: VIRT_RESOURCE pid=2205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=tpm-emulator reason=start vm="win11" uuid=c89e68cc-54b4-427d-8027-6158d2919a3b device="/run/libvirt/qemu/swtpm/1-win11-swtpm.sock" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 12:07:47 lab audit[2205]: VIRT_RESOURCE pid=2205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=dev reason=start vm="win11" uuid=c89e68cc-54b4-427d-8027-6158d2919a3b bus=usb device=555342207265646972646576 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 12:07:47 lab audit[2205]: VIRT_RESOURCE pid=2205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=dev reason=start vm="win11" uuid=c89e68cc-54b4-427d-8027-6158d2919a3b bus=usb device=555342207265646972646576 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 12:07:47 lab audit[2205]: VIRT_RESOURCE pid=2205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=net reason=start vm="win11" uuid=c89e68cc-54b4-427d-8027-6158d2919a3b old-net="?" new-net="52:54:00:45:16:db" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 12:07:47 lab audit[2205]: VIRT_RESOURCE pid=2205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=disk reason=start vm="win11" uuid=c89e68cc-54b4-427d-8027-6158d2919a3b old-disk="?" new-disk="/home/user/Downloads/en-us_windows_11_business_editions_version_23h2_updated_dec_2023_x64_dvd_d9da936d.iso" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 12:07:47 lab audit[2205]: VIRT_RESOURCE pid=2205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=disk reason=start vm="win11" uuid=c89e68cc-54b4-427d-8027-6158d2919a3b old-disk="?" new-disk="/var/lib/libvirt/images/win11.qcow2" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 12:07:47 lab virtqemud[2205]: Unable to restore labels on TPM state and/or log file
May 01 12:07:47 lab virtqemud[2205]: cannot resolve symlink /var/lib/libvirt/swtpm/c89e68cc-54b4-427d-8027-6158d2919a3b/tpm2: No such file or directory
May 01 12:07:47 lab virtqemud[2205]: internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1; Check error log '/var/log/swtpm/libvirt/qemu/win11-swtpm.log' for details.
May 01 12:07:47 lab virtqemud[2205]: hostname: lab
May 01 12:07:47 lab virtqemud[2205]: libvirt version: 10.1.0, package: 1.fc40 (Fedora Project, 2024-03-01-18:35:13, )
May 01 12:07:47 lab audit[4080]: AVC avc: denied { open } for pid=4080 comm="swtpm" path="/var/log/swtpm/libvirt/qemu/win11-swtpm.log" dev="dm-1" ino=406729 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:virt_log_t:s0 tclass=file permissive=0
May 01 12:07:47 lab audit[2205]: AVC avc: denied { getattr } for pid=2205 comm="rpc-virtqemud" name="/" dev="dm-1" ino=256 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:fs_t:s0 tclass=filesystem permissive=1
May 01 12:07:47 lab audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=virtlogd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 01 12:07:47 lab systemd[1]: Started virtlogd.service - libvirt logging daemon.
May 01 12:07:47 lab audit[2205]: AVC avc: denied { setattr } for pid=2205 comm="rpc-virtqemud" name="win11-swtpm.log" dev="dm-1" ino=406729 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:virt_log_t:s0 tclass=file permissive=1
May 01 12:07:47 lab systemd[1]: Starting virtlogd.service - libvirt logging daemon...
May 01 12:07:47 lab audit: BPF prog-id=89 op=LOAD
May 01 12:07:47 lab audit[2205]: VIRT_MACHINE_ID pid=2205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm vm="win11" uuid=c89e68cc-54b4-427d-8027-6158d2919a3b vm-ctx=+107:+107 img-ctx=+107:+107 model=dac exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 12:07:47 lab audit[2205]: VIRT_MACHINE_ID pid=2205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm vm="win11" uuid=c89e68cc-54b4-427d-8027-6158d2919a3b vm-ctx=system_u:system_r:svirt_t:s0:c81,c521 img-ctx=system_u:object_r:svirt_image_t:s0:c81,c521 model=selinux exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'
May 01 12:07:47 lab audit[4068]: AVC avc: denied { read write } for pid=4068 comm="qemu-img" path="anon_inode:[io_uring]" dev="anon_inodefs" ino=33889 scontext=system_u:system_r:virtstoraged_t:s0 tcontext=system_u:object_r:io_uring_t:s0 tclass=anon_inode permissive=1
May 01 12:07:47 lab audit[4068]: AVC avc: denied { map } for pid=4068 comm="qemu-img" path="anon_inode:[io_uring]" dev="anon_inodefs" ino=33889 scontext=system_u:system_r:virtstoraged_t:s0 tcontext=system_u:object_r:io_uring_t:s0 tclass=anon_inode permissive=1
May 01 12:07:47 lab audit[4068]: AVC avc: denied { create } for pid=4068 comm="qemu-img" anonclass=[io_uring] scontext=system_u:system_r:virtstoraged_t:s0 tcontext=system_u:object_r:io_uring_t:s0 tclass=anon_inode permissive=1
May 01 12:07:38 lab drkonqi-coredump-launcher[4057]: Unable to find file for pid 6977 expected at "kcrash-metadata/kwin_x11.5645111173a34e109e16a2db5958d6a4.6977.ini"
May 01 12:07:38 lab systemd[2423]: Started drkonqi-coredump-launcher@1-3007-1000.service - Launch DrKonqi for a systemd-coredump crash (PID 3007/UID 1000).
May 01 12:07:38 lab drkonqi-coredump-launcher[4056]: Unable to find file for pid 10236 expected at "kcrash-metadata/kwin_x11.5645111173a34e109e16a2db5958d6a4.10236.ini"
May 01 12:07:38 lab systemd[2423]: Started drkonqi-coredump-launcher@0-3007-1000.service - Launch DrKonqi for a systemd-coredump crash (PID 3007/UID 1000).
May 01 12:07:36 lab audit: BPF prog-id=88 op=UNLOAD
May 01 12:07:36 lab audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=fprintd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 01 12:07:36 lab systemd[1]: fprintd.service: Deactivated successfully.
May 01 12:07:25 lab audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=snapperd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 01 12:07:25 lab systemd[1]: snapperd.service: Deactivated successfully.
May 01 12:07:19 lab audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 01 12:07:19 lab systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
It seems that creating a VM with a TPM works only right after installing the virtualization tools (I used the command sudo dnf group install --with-optional virtualization), before rebooting.
I was wondering about that; the fact that swtpm works in the time between installation and reboot might already be a noteworthy indication. I have added it to the bug report.
If you identify further indications, feel free to add them to the report, and I suggest following the bug report daily: if the maintainer(s) ask for information, you should provide it. You can log in to Bugzilla with your FAS account and then add yourself to the CC list of the bug report. This way you get an email for each new post there. Let's give them some time to sort the issue out. I am optimistic.
The maintainer is not able to reproduce the issue off the cuff: they ask for the exact steps and tools the affected users use, from the default Fedora installation to installing tools/packages (including tools like virt-manager, cockpit, and whatever else could be linked to virtualization, and how you install/configure them), up to the point when the issue occurs. Please provide them with the related information: see the comment of 2024-05-01 12:04:56 UTC in the Bugzilla ticket BZ #2278123. You can use your normal Fedora account in Bugzilla by clicking "Login" and then "Fedora Account System".
I cannot follow along for the rest of the week, so it's up to you to read, provide, and maybe test what is necessary in collaboration with the maintainer. All the best!
Thanks. That's relevant information. But please put the data in the bug report, since further elaboration should take place there. I posted it there this time (I was still logged in anyway), but I suggest you follow up and answer there.
The maintainer has provided an update that should solve the issue: it is now in testing. The F40 update has already passed all automated tests.
If you want, it would be appreciated if you could test whether the package solves your issue and let us know. This way we can respond early.
You can install the update swtpm-0.8.1-7.fc40 with sudo dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2024-f53eab6892
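After installing it, a quick sanity check could look like this (assuming the usual Fedora package split and that SELinux is set back to enforcing for the test):

rpm -q swtpm swtpm-selinux   # should show 0.8.1-7.fc40
getenforce                   # should print Enforcing so the fix is actually exercised
# then try creating a VM with an emulated TPM again, and re-run the ausearch command if it still fails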
If you still experience the issue after the update, please describe if and how the problem has changed, and let us know the time at which you provoked the issue, along with a new sudo ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today output.