Unable to mount an LVM volume in Fedora 39

Hello all.

I upgraded to Fedora 39 Kinoite.
On the previous F38 Kinoite system, the LVM volumes activated normally:

sudo lvscan
  ACTIVE            '/dev/fedora/pool00' [1,75 TiB] inherit
  ACTIVE            '/dev/fedora/root' [315,21 GiB] inherit
  ACTIVE            '/dev/fedora/hub' [1,44 TiB] inherit
  ACTIVE            '/dev/fedora/swap' [8,00 GiB] inherit

After the upgrade, the volumes stopped activating normally:

sudo lvscan
  inactive          '/dev/fedora/pool00' [1,75 TiB] inherit
  inactive          '/dev/fedora/root' [315,21 GiB] inherit
  inactive          '/dev/fedora/hub' [1,44 TiB] inherit
  ACTIVE            '/dev/fedora/swap' [8,00 GiB] inherit

For some reason, only the swap volume gets activated.
When I try to activate the others manually, I get this error:

sudo lvchange -ay /dev/fedora/pool00 -v
  Activating logical volume fedora/pool00.
  activation/volume_list configuration setting not defined: Checking only host tags for fedora/pool00.
  Creating fedora-pool00_tmeta
  Loading table for fedora-pool00_tmeta (253:2).
  Resuming fedora-pool00_tmeta (253:2).
  Creating fedora-pool00_tdata
  Loading table for fedora-pool00_tdata (253:3).
  Resuming fedora-pool00_tdata (253:3).
  Executing: /usr/sbin/thin_check -q --clear-needs-check-flag /dev/mapper/fedora-pool00_tmeta
  /usr/sbin/thin_check failed: 64
  Check of pool fedora/pool00 failed (status:64). Manual repair required!
  Removing fedora-pool00_tmeta (253:2)
  Removing fedora-pool00_tdata (253:3)

Since this is Kinoite, I can roll back to the F38 deployment without any problems, and the volumes mount there just fine.

I found a somewhat similar problem: 2238099 – LVM thinp installs fail due to incorrect ioctl call
But I already have device-mapper-persistent-data-1.0.6-2.fc39 installed (on F39), so the cause is probably something else.
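For anyone comparing against that bug report, this is how I confirmed which build of the tools is actually installed (a generic sketch, nothing specific to my setup):

```shell
# Check the installed package and the thin_check binary it ships
rpm -q device-mapper-persistent-data
thin_check --version
rpm -qf /usr/sbin/thin_check   # confirm the binary comes from that package
```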

This LVM volume group was created on another machine; on the current machine I only use it to access some files. The system does not boot from it (it boots from a Btrfs partition on another disk).

Does anyone have any idea what the reason might be, or how it can be investigated further?

Downgrading device-mapper-persistent-data to 0.9.0-10.fc38 made the problem disappear; everything works fine, just like on F38.
With 1.0.4-1.fc39 the problem reproduces, and with the latest 1.0.6-2.fc39 as well.
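In case it helps others on Kinoite: the downgrade can be pinned with an rpm-ostree override. This is a sketch under assumptions — the local filename (including the x86_64 arch) is illustrative, and you would need to download that build yourself first (e.g. from Koji):

```shell
# Replace the package with the older F38 build (downloaded beforehand)
sudo rpm-ostree override replace ./device-mapper-persistent-data-0.9.0-10.fc38.x86_64.rpm
sudo systemctl reboot

# Later, to drop the pin and return to the version from the ostree base:
sudo rpm-ostree override reset device-mapper-persistent-data
```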

But I don’t want to stay on the old version…

The problem is solved.
I found this discussion: Version 1.0.0 does not activate my thinpool. · Issue #244 · jthornber/thin-provisioning-tools · GitHub
Running a check on the pool turned up some problems:

sudo thin_check /dev/mapper/fedora-pool00_tmeta -vv > thin_check.log 2>&1
...
data space map
12 data blocks have leaked.
checking data space map: 275.787725ms
metadata space map
checking metadata space map: 6.645753ms
data space map contains leaks

Apparently, versions of the package before 1.0.0 treated these leaks as unimportant and ignored them, while versions from 1.0.0 onward fail the check on them. Fixing them automatically with the --auto-repair flag solved the problem.
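For completeness, the repair step looked roughly like this. A sketch, assuming the pool is not in use and the _tmeta device from the earlier output is accessible (the thread doesn't show exactly how it was exposed; `lvconvert --repair fedora/pool00` is LVM's own wrapper for the same kind of metadata repair and may be the safer route):

```shell
# Repair the leaked blocks in the thin-pool metadata, then retry activation
sudo thin_check --auto-repair /dev/mapper/fedora-pool00_tmeta
sudo lvchange -ay fedora/pool00   # the activation-time check should now pass
```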
