F40 VM Hangs During Boot: Job dev-disk-by\x2duuid

I’m trying to boot a Fedora 40 VM (UEFI) using Virtual Machine Manager, but the boot process hangs at this point:

[  OK  ] Finished systemd-vconsole-setup.service - Virtual Console Setup.
[  *** ] Job dev-disk-by\x2duuid-95ea526a ... (<time> / no limit)

The UUID shown (95ea526a…) matches the UUID of the VM’s root partition in the qcow2 image.
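If I understand the naming, that job is just the systemd device unit for /dev/disk/by-uuid/&lt;uuid&gt;, with the path escaped per systemd’s usual rules (each ‘-’ becomes \x2d, then ‘/’ separators become ‘-’). A rough shell illustration of the escaping (sed stands in for systemd-escape here):

```shell
# Reproduce systemd's unit-name escaping for /dev/disk/by-uuid/<uuid>:
# escape '-' first (as \x2d), then turn '/' separators into '-'.
path="disk/by-uuid/95ea526a-5df4-4476-8f0e-e08b7a934e87"
unit="dev-$(printf '%s' "$path" | sed 's/-/\\x2d/g; s/\//-/g').device"
echo "$unit"
# → dev-disk-by\x2duuid-95ea526a\x2d5df4\x2d4476\x2d8f0e\x2de08b7a934e87.device
```

So the hung job is systemd waiting (with no timeout) for that device to show up.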

Back story: I’m attempting to create a copy of an existing real (hd) F40 system within a VM (qcow2 image). I’m using Virtual Machine Manager (QEMU/KVM) to manage the VM.

I know the UUID of the VM’s root partition needs to be fixed. Before booting the VM, I mounted the VM’s root partition from the qcow2 image (using nbd) and corrected the root partition UUID in /etc/fstab and /etc/kernel/cmdline:

$ virt-filesystems --add testf40.qcow2 --uuid --long
Name      Type       VFS  Label    Size         Parent UUID
/dev/sda1 filesystem vfat SYSTEM   313929728    -      31EA-6894
/dev/sda2 filesystem ext4 TEST-F40 104776802304 -      95ea526a-5df4-4476-8f0e-e08b7a934e87

$ virt-cat --add testf40.qcow2 /etc/fstab | grep -vE '^(#| *$)'
UUID=95ea526a-5df4-4476-8f0e-e08b7a934e87    /     ext4    defaults        1 1
UUID=31EA-6894          /boot/efi       vfat    umask=0077,shortname=winnt 0 2

$ virt-cat --add testf40.qcow2 /etc/kernel/cmdline
root=UUID=95ea526a-5df4-4476-8f0e-e08b7a934e87 ro selinux=0

I also started a chroot shell and ran grub2-mkconfig to update grub.cfg and dracut to rebuild the initramfs. I’m not sure both of those are necessary, but I assumed they wouldn’t hurt.

# mount /dev/nbd0p2 /mnt
# mount --bind /dev /mnt/dev
# mount --bind /sys /mnt/sys
# mount --bind /proc /mnt/proc
# chroot /mnt
    # grub2-mkconfig -o /boot/grub2/grub.cfg
    # dracut --kver 6.10.12-200.fc40.x86_64 --force
    # exit
# umount /mnt/proc
# umount /mnt/sys
# umount /mnt/dev
# umount /mnt
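For context, the image was attached over nbd before the mount, roughly like this (a sketch; it assumes the nbd kernel module is available and /dev/nbd0 is free, and must be run as root):

```shell
# Attach the qcow2 image as a block device.
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 testf40.qcow2
# ... mount, chroot, etc. as shown above ...
# Detach when done:
qemu-nbd --disconnect /dev/nbd0
```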

Is there some other place I need to change the UUID?

What exactly is the boot message that hangs doing? Is it attempting to create the symlink for /dev/disk/by-uuid/95ea526a-5df4-4476-8f0e-e08b7a934e87? Or has that already been created at this point?

Is it attempting to access that symlink (assuming it’s already been created) and failing because the symlink doesn’t point to the correct device? Based on another F40 VM I created that works (that one built from scratch, not a copy of an existing partition), I assume the symlink should point to ../../vda2.

Do I need to do something so the /dev/vda* devices are created?
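I suppose one way to find out would be to drop into the dracut emergency shell (e.g. add rd.shell to the kernel command line, or wait for the job to time out) and poke around. A sketch of what I’d look at (device names assumed from the working VM):

```shell
# Inside the dracut emergency shell:
cat /proc/partitions          # is vda/vda2 listed at all?
ls -l /dev/vda* 2>/dev/null   # did udev create the device nodes?
ls -l /dev/disk/by-uuid/      # does the by-uuid symlink exist, and where does it point?
lsmod | grep virtio           # is the virtio block driver even loaded?
```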

Any help appreciated.

There’s also the EFI stub config:

/boot/efi/EFI/fedora/grub.cfg

BTW, here’s the relevant guide:
Restoring the bootloader using the Live disk

You’re correct. I had modified that file as well and fixed the UUID; I should have mentioned that.

I didn’t consider it when I originally posted because I don’t think it’s related to the problem. Unless I’m mistaken (entirely possible), /boot/efi/EFI/fedora/grub.cfg is only used to find /boot/grub2/grub.cfg. And /boot/grub2/grub.cfg is clearly being found, since the grub menu is displayed and I’m able to at least attempt to boot the OS. In other words, as far as I can tell, the failure happens well after /boot/efi/EFI/fedora/grub.cfg is accessed.

Likewise… since I can get to the grub menu itself, it seems like I’m past the point where a bootloader problem would show up. But I will take a closer look at that link, thanks.

I decided to try a different experiment, and the results lead me to believe this is at least related to the fact that I’m trying to use the copy within a VM. The experiment was basically the same thing, but instead of a VM, I created a separate partition on the same physical disk as my main F40 system. I copied the root partition to the new partition, edited the UUID in /etc/fstab and /etc/kernel/cmdline, and regenerated /boot/grub2/grub.cfg using the same method previously mentioned (chroot, grub2-mkconfig).

I also copied /boot/efi/EFI/fedora to /boot/efi/EFI/TESTF40 and edited the UUID in /boot/efi/EFI/TESTF40/grub.cfg, then used efibootmgr to add a new entry.

When I boot to the new physical partition containing the F40 copy (TESTF40) it works just fine.

So, I’m still thinking this has something to do with how the VM is seeing the root partition… and maybe the creation of the /dev/vda* devices. Unfortunately, I’m at a loss as to how to even determine whether that’s the case. I don’t know how to check for those devices since they’re dynamically created via udev; at least, I’m pretty sure that’s how they’re created. I know very little about the udev system.

Check if your output is similar:

> virsh dumpxml fedora --xpath "//disk[@device='disk']/target"
<target dev="vda" bus="virtio"/>

Also consider regenerating the initramfs for all installed kernels:

dracut -f --regenerate-all

Same output:

virsh dumpxml TEST-F40 --xpath "//disk[@device='disk']/target"
<target dev="vda" bus="virtio"/>

Tried, but booting still fails at the same point.

Well… strangely, I fixed it. But it seems like there should be a better way…

I actually have three F40 systems:

  1. My current main non-VM F40 system. This is what I want a VM copy of.
  2. A VM of F40 I created around Apr 2024, which was a fresh install from the live ISO.
  3. A VM of F40 which is the copy I made of the non-VM F40 that wasn’t booting.

As previously mentioned, I used my current main non-VM F40 system to run grub2-mkconfig and dracut via chroot into the nbd-mounted qcow2 image. That didn’t boot. So I tried the same thing, but booted into the old working F40 VM instead and ran grub2-mkconfig and dracut via chroot into the nbd-mounted qcow2 image. Then I shut down the old working F40 VM, started the new copy F40 VM, and it works.

Seems like an odd fix to me.


Looks like /dev/vda was the key. This seems to fix it:

dracut --force --kver 6.10.12-200.fc40.x86_64 --add-drivers "virtio_blk"

Or rebuild the initramfs for all installed kernels with --regenerate-all; either way, the key is --add-drivers "virtio_blk". (Presumably dracut builds a host-only initramfs by default, so running it via chroot from the physical host omitted the virtio drivers the VM needs, while running it from inside the old working VM included them.)
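To confirm the driver actually made it into the image, something like this should work (the initramfs path is assumed from the kernel version above):

```shell
# List the contents of the rebuilt initramfs and look for the virtio_blk module.
lsinitrd /boot/initramfs-6.10.12-200.fc40.x86_64.img | grep virtio_blk
```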

Some important places where a UUID is used/stored:

  1. /boot/ (system rescue files)
  2. /boot/efi/EFI/fedora/grub.cfg
  3. /boot/grub2/grub.cfg
  4. /boot/grub2/grubenv
  5. /boot/loader/entries/
  6. /etc/fstab
  7. /etc/kernel/cmdline

If you wish to migrate a GPT/UEFI-bootable copy of Fedora, the UUID in ALL of these places must be updated to avoid a duplicate-UUID error when booting.
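A quick way to catch any file still carrying the old UUID is to grep the mounted copy. A self-contained sketch (a throwaway directory stands in for the mounted image here; on the real system you would point grep at /mnt/etc and /mnt/boot):

```shell
# Scan a tree for files that still reference a stale UUID.
old=95ea526a-5df4-4476-8f0e-e08b7a934e87
root=$(mktemp -d)                      # stand-in for the mounted image
mkdir -p "$root/etc"
printf 'UUID=%s / ext4 defaults 1 1\n' "$old" > "$root/etc/fstab"
grep -rIl "$old" "$root"               # prints every file containing the UUID
rm -rf "$root"
```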

Note: manually editing /boot/grub2/grubenv can change its size from the required 1024 bytes, which will break the boot process. Vim, used with care, will NOT break the boot, but the Fedora documentation stresses that a “safer” method be used.

If a workstation contains two M.2 NVMe SSDs, each with at least one installation of Fedora, you MUST be mindful of the fact that NVMe enumeration will NOT be consistent. Whichever device responds to the BIOS first during power-up will be enumerated before the other. When grub2-mkconfig -o /boot/grub2/grub.cfg is executed, it uses the current NVMe enumeration.

If a Fedora instance other than the one used to create /boot/grub2/grub.cfg is selected during boot, your workstation may hang while displaying a partial screen of data, because the attempt to boot from the wrong NVMe SSD failed due to an enumeration change.

Do not manually change /boot/grub2/grubenv unless you are a glutton for punishment: even a single byte of size difference or a syntax error will break it.

The program grub2-editenv is designed to edit that file and ensure its contents stay valid. The man page for grub2-editenv shows how to use it.
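For example (paths per Fedora’s layout; list and set are the subcommands I’d expect to use):

```shell
# Show the current contents of the environment block.
grub2-editenv /boot/grub2/grubenv list
# Update a variable without disturbing the block size.
grub2-editenv /boot/grub2/grubenv set saved_entry=0
```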

If the system is installed normally and the UUIDs have not been changed, the system will always mount the proper partition at /boot, and NVMe enumeration is not a factor. The UUID for /boot is written to /etc/fstab during installation, so the /dev enumeration of a drive is never a factor under normal circumstances.

Never an issue with a normal installation since UUIDs are used for mounting and booting.