Fedora disappeared on boot after Windows reinstall

I am in recovery
sh-5.1# is on the screen now

So at this point we need to know which kernel is loaded:
uname -a
and the status of the LVs:
lvscan
Also the mounted file systems:
mount
Add ls /boot if possible.

sh-5.1# uname -a
Linux fedora 5.19.4-201.fsync.fc36.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Aug 27 21:47:57 UTC 2022 x86_64 x86_64 GNU/Linux
sh-5.1# lvscan
sh: lvscan: command not found
sh-5.1# mount
rootfs on / type rootfs (rw)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=4096k,nr_inodes=1048576,mode=755,inode64)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=3258616k,nr_inodes=819200,mode=755,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-sysusers.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
sh-5.1# ls/ boot
sh: ls/: No such file or directory

I made a small mistake on the last one, but the corrected command didn’t get any proper output either:

sh-5.1# ls /boot
ls: cannot access '/boot': No such file or directory

In this environment try lvm lvscan. You can do that for all of the volume group commands.
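For example (a sketch; which standalone tool names exist depends on what the initramfs shipped, but the lvm multiplexer bundles them all as subcommands):

lvm lvscan          # list logical volumes
lvm vgscan          # scan for volume groups
lvm vgchange -a y   # activate every volume group found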

thank you!

sh-5.1# lvm lvscan
  inactive     '/dev/nobaraVG/home' [45.88 GiB] inherit
  inactive     '/dev/nobaraVG/root' [70.00 GiB] inherit

How do I make ls /boot work, though?

I was thinking about changing the VG back to the default; then I think I would have to change the value in /etc/fstab. But after changing the VG back to the default I had issues running mount -t ext4 /dev/mapper/VGNAME-root /mnt and getting into the chroot environment. I could, however, change the VG name in /etc/fstab first and then actually change the VG name. I personally have no idea what most of the above does, but I get some kind of idea about it, so I might be wrong.
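Something like this is the order I mean (a sketch; I believe vgrename is the command for the rename itself, and the names here are just this machine's):

vgrename nobaraVG nobara_localhost-live   # rename the VG back to the installer default
# then update /etc/fstab and the kernel command line to match the new name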

You can’t. Not until you find out how to mount the root file system and then the /boot and /boot/efi file systems. Doing so is not much different from how you do it from a LiveUSB.
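For reference, a minimal sketch of that sequence, using the LV name from this thread; the /boot and EFI partition numbers are placeholders that would first need to be identified with lsblk -f:

vgchange -a y nobaraVG                 # activate the volume group
mount /dev/mapper/nobaraVG-root /mnt   # mount the root LV
mount /dev/nvme0n1pX /mnt/boot         # replace X with the actual /boot partition
mount /dev/nvme0n1pY /mnt/boot/efi     # replace Y with the EFI system partition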

Did you at some point change the names of the volume groups? I suspect that the system is still trying to activate the LVM under the original name. Also, the /etc/fstab file specifies the LVM name, which is used to locate the root file system. I never used LVM myself, so I don’t know where that stuff is stored. Probably somewhere in the /etc directory.
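A quick way to check both, from the LiveUSB with the root LV mounted at /mnt (a sketch; /etc/lvm/backup is where LVM keeps its VG metadata backups):

grep -v '^#' /mnt/etc/fstab   # which VG name fstab expects
ls /mnt/etc/lvm/backup        # one metadata backup file per VG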

Yes, I had to change the VGs. I can’t exactly explain what was happening, but they were inactive and didn’t let me mount and get into chroot to reinstall/update grub.

This was also updated with the new VG names.

The changed VG name worked and did not need to be changed back; as long as it now activates, that is not an issue.

Personally I would recommend leaving it with the new name and not reverting to the original name.

After doing the install of the new kernel you may need to boot into the chroot environment again to be able to mount the /boot and /boot/efi file systems. Then we should be able to look at the content of /boot.
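Inside the chroot, assuming the /etc/fstab entries now carry the right VG name, the short mount form should be enough (a sketch):

mount /boot       # device is looked up in /etc/fstab
mount /boot/efi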

It may also be necessary to do vgexport nobara_localhost-live -v to clear out the old VG data, then to do vgimport nobaraVG -v so the system can actually see the new VG name and activate it.

Having the system see two different VG names led me to this possible solution.

Let’s do that and then see if it will boot, though it may be necessary to once again run dracut --force once the VG has been imported, before you reboot.
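One caution on that dracut run: inside a chroot, uname -r reports the LiveUSB kernel, so it is safer to name the installed kernel explicitly (a sketch; substitute the actual installed version):

dracut --force /boot/initramfs-5.19.6-200.fc36.x86_64+debug.img 5.19.6-200.fc36.x86_64+debug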

# ls /boot
config-5.19.6-200.fc36.x86_64+debug                      loader
efi                                                      symvers-5.19.6-200.fc36.x86_64+debug.gz
grub2                                                    System.map-5.19.6-200.fc36.x86_64+debug
initramfs-0-rescue-8ae5229828014ea2a89e0a795b4b9270.img  vmlinuz-0-rescue-8ae5229828014ea2a89e0a795b4b9270
initramfs-5.19.4-201.fsync.fc36.x86_64.img               vmlinuz-5.19.6-200.fc36.x86_64+debug
initramfs-5.19.6-200.fc36.x86_64+debug.img
# vgexport nobara_localhost-live -v
  VG name on command line not found in list of VGs: nobara_localhost-live
  Volume group "nobara_localhost-live" not found
  Cannot process volume group nobara_localhost-live
# vgimport nobaraVG -v
  Volume group "nobaraVG" is not exported

I think it didn’t end up running properly.

That was expected, since it was never explicitly exported. In fact, you would have to unmount the LVs and deactivate them before the VG could be exported.
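For reference, the usual sequence before a vgexport would look something like this (a sketch; not needed here, since the VG was never exported):

umount /mnt/boot/efi /mnt/boot /mnt   # unmount everything on the LVs
vgchange -a n nobaraVG                # deactivate the volume group
vgexport nobaraVG                     # only now is the export allowed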

What is the result of vgdisplay nobaraVG -v?

$ sudo vgchange -a y nobaraVG
  2 logical volume(s) in volume group "nobaraVG" now active
[root@localhost-live liveuser]# for PART in /sys /sys/firmware/efi/efivars /proc /run /dev ; do
    mount -o bind $PART /mnt/$PART
done
[root@localhost-live liveuser]# chroot /mnt
[root@localhost-live /]# vgdisplay nobaraVG -v

Just showing what I did in case something went wrong.

Results for vgdisplay nobaraVG -v:

# vgdisplay nobaraVG -v
  --- Volume group ---
  VG Name               nobaraVG
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               115.88 GiB
  PE Size               4.00 MiB
  Total PE              29666
  Alloc PE / Size       29666 / 115.88 GiB
  Free  PE / Size       0 / 0   
  VG UUID               E2u0n4-DQk9-xT1T-CQfG-wL9F-HwZv-HBo6dy
   
  --- Logical volume ---
  LV Path                /dev/nobaraVG/home
  LV Name                home
  VG Name                nobaraVG
  LV UUID                VmyTpL-eGSO-g081-6MfN-stdf-wDaI-qytwxk
  LV Write Access        read/write
  LV Creation host, time localhost-live, 2022-08-10 18:47:32 +0600
  LV Status              available
  # open                 0
  LV Size                45.88 GiB
  Current LE             11746
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/nobaraVG/root
  LV Name                root
  VG Name                nobaraVG
  LV UUID                RhAM91-98E8-2CBW-Fr3N-tser-rmCi-pM1PK0
  LV Write Access        read/write
  LV Creation host, time localhost-live, 2022-08-10 18:47:33 +0600
  LV Status              available
  # open                 1
  LV Size                70.00 GiB
  Current LE             17920
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:1
   
  --- Physical volumes ---
  PV Name               /dev/nvme0n1p6     
  PV UUID               SAX4bc-2pXh-qcTf-Pplt-igbp-4xZU-f6N6tN
  PV Status             allocatable
  Total PE / Free PE    29666 / 0
   
  Archiving volume group "nobaraVG" metadata (seqno 6).
  Creating volume group backup "/etc/lvm/backup/nobaraVG" (seqno 6).

The next time you attempt to boot, select the 5.19.6 kernel, then press e to edit.

On the line that begins with linux, look for a segment similar to this:
rd.lvm.lv=nobaraVG/root. If that is not there, then please add it to that line and continue booting with Ctrl-X.

I suspect it should activate the LV and finish booting. If so then we can make that change permanent.

EDIT:
There should also be root=/dev/mapper/nobaraVG-root ro at the beginning of that line.
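Put together, the linux line should start roughly like this (a sketch; trailing options such as rhgb quiet are typical Fedora defaults, not confirmed from this system):

linux ($root)/vmlinuz-5.19.6-200.fc36.x86_64+debug root=/dev/mapper/nobaraVG-root ro rd.lvm.lv=nobaraVG/root rhgb quiet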

I found ro rd.lvm.lv=nobara_localhost-live/root
Should I edit that?

Yes, edit both parts if needed, replacing nobara_localhost-live with nobaraVG.
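A before/after sketch of the two segments (note that device-mapper doubles the hyphens inside a hyphenated VG name):

before: root=/dev/mapper/nobara_localhost--live-root ro rd.lvm.lv=nobara_localhost-live/root
after:  root=/dev/mapper/nobaraVG-root ro rd.lvm.lv=nobaraVG/root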

I GOT IN! Thank you so much.
Is there anything I need to do now?

Yes, now we need to fix the problem that caused that kernel command line to be invalid.

cat /etc/default/grub so we can see the command line there.
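A sketch of the check and the likely follow-up (the grub2-mkconfig output path is the usual Fedora location; confirm the stale name actually appears before editing):

grep GRUB_CMDLINE_LINUX /etc/default/grub
# replace any nobara_localhost-live references with nobaraVG, then regenerate:
grub2-mkconfig -o /boot/grub2/grub.cfg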
