Fedora 39: Update/boot fails on advanced LVM setup

Hello,

I’ve stumbled across a specific issue with lvm2 in Fedora 39, and I’d like some help figuring out where and how to report it.

I have /home on an lvm2 RAID5 volume that is also backed by a (mirrored) cache volume. Both the RAID5 volume and the mirrored cache use raid integrity. This configuration has worked without a glitch since Fedora 37 (possibly even earlier), but I am unable to boot Fedora 39 with it due to errors.

I can finish booting if I disable integrity on the mirrored cache volume only, so this appears to be purely a boot-time problem: with raidintegrity disabled I can boot normally, and once booted I can edit this volume and even re-enable raidintegrity (but the next boot will fail again).
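For reference, this is roughly the work-around I use; toggling integrity on an existing raid LV is done with lvconvert (the VG/LV name below matches my setup, adjust as needed):

```shell
# Disable dm-integrity on the mirrored cache LV so the system can boot
# (run from a live/rescue environment if the normal boot hangs).
lvconvert --raidintegrity n nvme_storage/cache_storage

# After booting normally, integrity can be re-enabled; this triggers
# an initial sync of the checksums. The next boot fails again, though.
lvconvert --raidintegrity y nvme_storage/cache_storage
```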

How to reproduce (on my system) from a Fedora 39 iso:

liveuser@localhost-live:~$ sudo -s
root@localhost-live:/home/liveuser# nano /etc/lvm/lvm.conf
root@localhost-live:/home/liveuser# pvscan
PV /dev/magnetic_storage/home_storage VG all_storage lvm2 [43.13 TiB / 0 free]
PV /dev/nvme_storage/cache_storage VG all_storage lvm2 [<750.00 GiB / 0 free]
PV /dev/sda VG magnetic_storage lvm2 [10.91 TiB / 45.70 GiB free]
PV /dev/sdc VG magnetic_storage lvm2 [10.91 TiB / 45.70 GiB free]
PV /dev/sdd VG magnetic_storage lvm2 [10.91 TiB / 45.70 GiB free]
PV /dev/sde VG magnetic_storage lvm2 [10.91 TiB / 45.70 GiB free]
PV /dev/sdf VG magnetic_storage lvm2 [10.91 TiB / 45.70 GiB free]
PV /dev/nvme0n1 VG nvme_storage lvm2 [931.51 GiB / 175.44 GiB free]
PV /dev/nvme1n1 VG nvme_storage lvm2 [931.51 GiB / 175.44 GiB free]
Total: 9 [100.25 TiB] / in use: 9 [100.25 TiB] / in no VG: 0 [0 ]
root@localhost-live:/home/liveuser# vgscan
Found volume group "all_storage" using metadata type lvm2
Found volume group "magnetic_storage" using metadata type lvm2
Found volume group "nvme_storage" using metadata type lvm2
root@localhost-live:/home/liveuser# lvscan
inactive '/dev/all_storage/home_lv' [43.13 TiB] inherit
ACTIVE '/dev/magnetic_storage/home_storage' [43.13 TiB] inherit
ACTIVE '/dev/nvme_storage/cache_storage' [750.00 GiB] inherit
root@localhost-live:/home/liveuser# vgchange --refresh
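The refresh is what kicks off the event-based autoactivation of the stacked VG; the same activation can also be invoked by hand, using the command from the lvm-activate-all_storage unit that shows up in the journal:

```shell
# Trigger the same autoactivation that the systemd-generated
# lvm-activate-all_storage.service runs on PV-online events.
vgchange -aay --autoactivation event all_storage
```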

In journalctl:

Jan 02 12:39:47 localhost-live dmeventd[1913]: No longer monitoring RAID device magnetic_storage-home_storage for events.
Jan 02 12:39:51 localhost-live dmeventd[1913]: Monitoring RAID device magnetic_storage-home_storage for events.
Jan 02 12:39:51 localhost-live lvm[3770]: PV /dev/dm-31 online, VG all_storage incomplete (need 1).
Jan 02 12:39:51 localhost-live dmeventd[1913]: No longer monitoring RAID device nvme_storage-cache_storage for events.
Jan 02 12:39:51 localhost-live dmeventd[1913]: Monitoring RAID device nvme_storage-cache_storage for events.
Jan 02 12:39:51 localhost-live lvm[3799]: PV /dev/dm-18 online, VG all_storage is complete.
Jan 02 12:39:51 localhost-live systemd[1]: Started lvm-activate-all_storage.service - /usr/sbin/lvm vgchange -aay --autoactivation event all_storage.
Jan 02 12:39:51 localhost-live audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm-activate-all_storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
Jan 02 12:39:51 localhost-live kernel: device-mapper: integrity: dm-8: Checksum failed at sector 0x1503f
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: dm-8: rescheduling sector 86024
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: redirecting sector 86024 to other mirror: dm-8
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
Jan 02 12:39:51 localhost-live kernel: device-mapper: integrity: dm-8: Checksum failed at sector 0x1503f
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: dm-8: rescheduling sector 86024
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: redirecting sector 86024 to other mirror: dm-8
Jan 02 12:39:51 localhost-live kernel: device-mapper: integrity: dm-8: Checksum failed at sector 0x1503f
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: dm-8: rescheduling sector 86024
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: redirecting sector 86024 to other mirror: dm-8
Jan 02 12:39:51 localhost-live kernel: device-mapper: integrity: dm-8: Checksum failed at sector 0x1503f
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: dm-8: rescheduling sector 86024
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: redirecting sector 86024 to other mirror: dm-8
Jan 02 12:39:51 localhost-live kernel: device-mapper: integrity: dm-8: Checksum failed at sector 0x1503f
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: dm-8: rescheduling sector 86024
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: redirecting sector 86024 to other mirror: dm-8
Jan 02 12:39:51 localhost-live kernel: device-mapper: integrity: dm-8: Checksum failed at sector 0x1503f
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: dm-8: rescheduling sector 86024
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: redirecting sector 86024 to other mirror: dm-8
Jan 02 12:39:51 localhost-live kernel: device-mapper: integrity: dm-8: Checksum failed at sector 0x1503f
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: dm-8: rescheduling sector 86024
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: redirecting sector 86024 to other mirror: dm-8
Jan 02 12:39:51 localhost-live kernel: device-mapper: integrity: dm-8: Checksum failed at sector 0x1503f
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: dm-8: rescheduling sector 86024
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: redirecting sector 86024 to other mirror: dm-8
Jan 02 12:39:51 localhost-live kernel: device-mapper: integrity: dm-8: Checksum failed at sector 0x1503f
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: dm-8: rescheduling sector 86024
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: redirecting sector 86024 to other mirror: dm-8
Jan 02 12:39:51 localhost-live kernel: device-mapper: integrity: dm-8: Checksum failed at sector 0x1503f
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: dm-8: rescheduling sector 86024
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
Jan 02 12:39:51 localhost-live kernel: md/raid1:mdX: redirecting sector 86024 to other mirror: dm-8
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86071 res=0
Jan 02 12:39:51 localhost-live audit: DM_EVENT module=integrity op=integrity-checksum dev=253:8 sector=86079 res=0
(the line above repeats another 15 times)
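After booting with integrity disabled and re-enabling it, I can inspect the per-LV integrity mismatch counters (field names per lvmraid(7)) to check whether the checksum errors reflect real corruption; a sketch, with the VG names from my setup:

```shell
# Report sync state and accumulated dm-integrity mismatches per LV.
# A persistently zero integritymismatches count while the system is
# running suggests the boot-time checksum failures are spurious.
lvs -a -o name,segtype,sync_percent,integritymismatches \
    all_storage nvme_storage
```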

Hello @bagelcornbreadcarrot, if you search on ask.fp.o there were at least two questions about not being able to boot after upgrading to F39. In at least one of them the OP was using a RAID array that was no longer recognized at boot because of how the device IDs were being interpreted. Sorry I cannot be of more help; I had only read the posts at the time.
[Edit] Found it … System fails to boot after dnf system upgrade due to missing MD (RAID) devices - #32 by gui1ty

No, that’s an unrelated issue as far as I can tell; blkid shouldn’t be involved here. In the meantime I filed a bug report on Bugzilla. I have a work-around and a testable case, so the initial scare of seeing those errors pop up has passed :stuck_out_tongue: