LVM: Metadata has wrong VG name


Issue: How do I repair this situation?

sudo pvscan
Metadata on /dev/sdd2 at 12800 has wrong VG name "fedora32 {
id = "gJgZM9-n2Rd-V7us-RWae-cpT6-H84E-g7dAsk"
seqno = 9
format = "lvm2"
status = ["RESIZEABLE", "READ", "WRITE"]
flag" expected fedora.
WARNING: Reading VG fedora on /dev/sdd2 failed.
WARNING: PV /dev/sdd2 is marked in use but no VG was found using it.
WARNING: PV /dev/sdd2 might need repairing.

I am a simple long-time developer, attempting to upgrade Fedora; this is my first exposure to LVM.
I cloned my old drive to a new one of the same size. It's pretty much an out-of-the-box installation - it contains the LVs root, home and swap.
Upon completion, I renamed the Volume Group on the old drive with the intent to mount both drives concurrently:
> sudo lvm vgrename GGUoJ7-Cj8n-yYnW-DjO7-HHJQ-mMey-jOlBXx fedora32
A PVID conflict arose, which I solved by changing the PV UUID on the new drive:
> sudo lvm pvchange -u /dev/sda2
Now I cannot access the logical volumes on the old drive; I get the error above.

So how do I either 1) correctly propagate or 2) roll back the Volume Group name change?

I’ve found some references to “repair” and to recreating the LVM metadata, but am hesitant, as I would not like to lose the contents of the old drive. I got into this situation through RTFM - I didn’t realise RTWholeFM was required. I'm still looking for procedures: where I went wrong, and how to fix it.

Is there a simple reference to a procedure that can help me recover this?




If you look in /etc/lvm, you should find a backup folder there. Check it and look for the file that matches the VG you want to restore. If you find it, there is a way to recover your LVM metadata from it.

The best article I can find on the internet is this one, because it includes a case study. There is also a tutorial from Red Hat here.

Please read it all carefully.
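For reference, restoring from one of those backup files generally looks like the sketch below. This is hedged: the VG name `fedora` is taken from the error output above, the backup path is the default location, and you should image the drive before trying anything destructive.

```shell
# Run as root. Sketch only: VG name and backup path are assumptions.
# List the metadata backups LVM knows about for this VG:
vgcfgrestore --list fedora

# Dry-run the restore against a chosen backup file first:
vgcfgrestore --test --file /etc/lvm/backup/fedora fedora

# If the test run looks sane, perform the actual restore
# and reactivate the logical volumes:
vgcfgrestore --file /etc/lvm/backup/fedora fedora
vgchange -ay fedora
```

The `--test` pass is worthwhile here precisely because the on-disk metadata and the backup may disagree about the VG name.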


Thanks. I did stumble across said article, however it didn’t seem quite applicable - I’ll review it again in detail. The backup folder won’t help much, as I’m running from a Fedora Live USB and rebooted before the issue became apparent. (There is a backup present from when I executed pvchange on the modified clone, but it dates from after vgrename was run on the original.) Anyway, I’ve now created a drive image of the original physical volume, so at least I can try a thing or two out, if I figure out what to try.

I'm not sure if this will give a clue or not. Yesterday I installed another Linux OS on my laptop (multi-boot) with an LVM partition layout. When I booted back into Fedora Workstation 35, it automatically created a file inside /etc/lvm/backup with metadata matching the newly installed Linux OS, without any setup on my part.


I tested it further. I completely deleted everything inside /etc/lvm/backup and /etc/lvm/archive, then rebooted the OS. After login, the LVM data came back in both of those folders.
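That matches how LVM maintains these files: commands that change metadata rewrite /etc/lvm/backup, and the same regeneration can be triggered by hand. A minimal sketch, run as root (the single-VG file path and name `fedora` are illustrative assumptions):

```shell
# Regenerate /etc/lvm/backup/<vgname> for every VG the host can see:
vgcfgbackup

# Or back up one VG's metadata to a file of your choosing:
vgcfgbackup --file /tmp/fedora.vgmeta fedora
```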

Thanks. Due to hardware limitations I’ve been doing all my work on a Fedora Live USB, rebooting occasionally. I’ll check; however, I would assume the /etc/lvm directories start off empty after a boot from USB.
In the interim this issue has become academic for me: while cloning the drive with the unsuccessfully renamed Volume Group for experimentation purposes, I discovered that the renamed Volume Group would now mount successfully, but only when the (first) clone was not present.
That is:
Original: Volume Group A + Clone: Volume Group A => naming conflict
Original: Volume Group A’ + Clone: Volume Group A => error message above
Original: Volume Group A’ alone => no apparent issue
So there appears to still be some unresolved LVM conflict between the original drive and the clone. As my desire was merely to pick some data off the original, I have done this through an intermediary.

I did discover this post, “Renaming the Volume Group the right way”, which might have been a procedure worth trying on the clone of the damaged original.

I tried to rename my LVM volume group, then deactivate it and regenerate the volume group UUID. After activating it again, I found that regenerating the volume group UUID didn’t regenerate the UUIDs of its member PVs. Maybe that is the cause, and we also need to regenerate the members’ UUIDs.
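For what it's worth, the member PV UUIDs do have to be regenerated separately from the VG UUID. A hedged sketch of what that might look like (VG name `fedora32` and device `/dev/sdd2` are assumed from earlier in the thread; LVM requires the LVs to be inactive before changing UUIDs):

```shell
# Run as root, with the VG's logical volumes inactive. Names are assumptions.
vgchange -an fedora32    # deactivate the VG's LVs first
vgchange -u  fedora32    # generate a new UUID for the VG itself
pvchange -u  /dev/sdd2   # generate a new UUID for each member PV, one by one
vgchange -ay fedora32    # reactivate
```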

I also posted my difficulty on the linux-lvm newsgroup and got a reply from an apparent developer, who indicated that the metadata on the disk is correct, but that the reporting function might be getting confused by the almost identical metadata of two different Logical Volumes, and gave me a work-around.
From the pvs -vvvvv report, it does look like there is an issue with perceived PV UUIDs.


It appears that this situation was caused by user error. An update for more helpful feedback has been submitted: LVM2: handle duplicate vgids

It’s possible to create this condition without too much difficulty by cloning PVs, followed by an incomplete attempt at making the two VGs unique (vgrename and pvchange -u, but missing vgchange -u).
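Putting the above together, the complete sequence for making a cloned VG unique would be roughly as follows. This is a sketch under assumptions: only one copy of the VG is attached at a time, the clone's PV is visible as /dev/sdd2, and its LVs are inactive.

```shell
# Run as root, with ONLY the clone attached and its LVs inactive.
vgrename fedora fedora32   # rename the clone's VG (or address it by UUID)
vgchange -an fedora32      # make sure the LVs are deactivated
pvchange -u /dev/sdd2      # new PV UUID; repeat for every member PV
vgchange -u  fedora32      # the step that was missing: new VG UUID
vgchange -ay fedora32      # reactivate and verify with pvs/vgs/lvs
```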