Fedora disappeared on boot after Windows reinstall

I was running a Fedora 36 + Windows 11 dual boot on the same drive. I had to reinstall Windows 11 because I couldn't run Adobe programs, and then I ran into this issue.

That is typical, since the Windows boot loader does not support other OSes. The Windows reinstall likely wiped out the data in the EFI directory that allows grub to control booting.
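
You can verify this from a live session. A minimal check, assuming a UEFI system (efibootmgr should be available on the live image):

sudo efibootmgr -v
# If the reinstall removed the Fedora entry, the list will show
# "Windows Boot Manager" but no "Fedora" entry pointing at shim.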

I suggest that you boot a live USB into Fedora 36, then mount and chroot into your installed F36 system. Once that is done, a reinstall of the grub packages should recover it for you.

  1. Boot to the live USB.
  2. Identify the UUID for / (probably a btrfs subvolume), which can usually be found by running lsblk -a -o PATH,FSTYPE,UUID and should give output similar to this:
# lsblk -a -o PATH,FSTYPE,UUID
PATH       FSTYPE UUID
/dev/sr0
/dev/zram0
/dev/vda
/dev/vda1  vfat   3919-EFE0
/dev/vda2  ext4   7a909bd7-d497-4493-84df-29209c11f183
/dev/vda3  btrfs  abd08baf-0c3d-422f-a536-e97f9005c026
  3. Then mount the main / file system (still assuming that you are using btrfs as shown in step 2):
    su
    mount -t btrfs -o subvol=root,compress=zstd:1 UUID=<your uuid found above> /mnt
  4. Once that is mounted, mount the other file systems needed to support a chroot:
for PART in /sys /proc /run /dev ; do
    mount -o bind $PART /mnt/$PART
done
  5. Now chroot into the new root file system:
    chroot /mnt
  6. Mount the remaining necessary file systems.
    Verify that the EFI partition still has the UUID expected by /etc/fstab, comparing the one seen with the earlier lsblk command to the content of the fstab file (see the sketch after this list). If it does not match, fstab will need to be edited to put the proper UUID in place.
    If it matches, a simple mount -a should properly mount both /boot and /boot/efi.
  7. Finally, a reinstall of grub should restore the ability to boot Fedora:
    dnf reinstall grub*
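
For step 6, a minimal sketch of the UUID comparison (the fstab paths are the usual Fedora defaults; substitute your own devices):

lsblk -o PATH,FSTYPE,UUID | grep vfat   # UUID of the EFI system partition
grep -E '/boot(/efi)?' /etc/fstab       # UUIDs fstab expects for /boot and /boot/efi
mount -a                                # mounts /boot and /boot/efi once they agree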

Exit out of the chroot with exit; a reboot at this point should bring up the grub menu for booting either Fedora or Windows.

Hey! This is what I get when I use lsblk -a -o PATH,FSTYPE,UUID:

$ lsblk -a -o PATH,FSTYPE,UUID
PATH                  FSTYPE      UUID
/dev/loop0            squashfs    
/dev/loop1            ext4        61bf0b12-d95b-498b-8b50-182c4994b883
/dev/loop2                        
/dev/sda                          
/dev/sdb              iso9660     2022-08-28-23-36-23-00
/dev/sdb1             iso9660     2022-08-28-23-36-23-00
/dev/sdb2             vfat        BF48-13FB
/dev/sdb3             hfsplus     fa23bcff-e8ac-3ca6-a5a8-272ab8ddab82
/dev/zram0                        
/dev/mapper/live-rw   ext4        61bf0b12-d95b-498b-8b50-182c4994b883
/dev/mapper/live-rw   ext4        61bf0b12-d95b-498b-8b50-182c4994b883
/dev/mapper/live-base ext4        61bf0b12-d95b-498b-8b50-182c4994b883
/dev/nvme0n1                      
/dev/nvme0n1p1        vfat        9839-34AF
/dev/nvme0n1p2                    
/dev/nvme0n1p3        ntfs        1E1C3A571C3A2A63
/dev/nvme0n1p4        ntfs        DE96364096361A0B
/dev/nvme0n1p5        ext4        ae022667-835d-4df9-8e5e-8028d1e40488
/dev/nvme0n1p6        LVM2_member SAX4bc-2pXh-qcTf-Pplt-igbp-4xZU-f6N6tN

As you suggested, before this I installed Fedora letting it partition automatically during installation, so I'm not sure why it's not btrfs. I am pretty sure /dev/nvme0n1p5 and /dev/nvme0n1p6 are my Linux partitions.

It appears that
nvme0n1p1 is your EFI partition,
nvme0n1p5 is the /boot partition,
and because you used LVM, the root partition will be seen by looking in /dev/mapper for the device shown as VGNAME-root (where VGNAME is whatever your VG is named). ls /dev/mapper should give you that (and it should not be ‘live-rw’ or ‘live-base’). I have 2 different VGs with LVs, as you can see here:

$ ls /dev/mapper
control  fedora_raid1-home  fedora-root

Thus the root file system is not btrfs but likely ext4, so you can mount the root partition with mount -t ext4 /dev/mapper/VGNAME-root /mnt. The system should automatically recognize the ext4 file system, so it may not be necessary to use the -t option there.
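
If the name in /dev/mapper is not obvious, the standard LVM reporting commands also show the VG and LV names directly:

sudo vgs    # lists volume groups; the VG column is your VGNAME
sudo lvs    # lists logical volumes (e.g. root, home) and the VG they belong to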

Everything else I gave should still work.

$ ls /dev/mapper
control  live-base  live-rw

This is what I get; it doesn't seem to show up here either.

Try pvscan, vgscan, and lvscan

The fact that /dev/nvme0n1p6 shows as LVM2_member indicates it is LVM, and there should be a VG and logical volumes (LVs) associated with it.

It is possible the LVs that show as live-rw and live-base are the ones involved, but that is not normally what they are named for an installation.

Which spin did you install? Workstation or something else?

I installed Nobara Project since I was having issues with installing NVIDIA drivers.

I am rebooting to find out now.

There's a tool called Super GRUB2 Disk. There's a video of it here if you aren't aware of it. Do you think this could help?

Here's what I get for pvscan:

$ sudo pvscan
  PV /dev/nvme0n1p6   VG nobara_localhost-live   lvm2 [115.88 GiB / 0    free]
  Total: 1 [115.88 GiB] / in use: 1 [115.88 GiB] / in no VG: 0 [0   ]

Here's what I get for vgscan:

$ sudo  vgscan
  Found volume group "nobara_localhost-live" using metadata type lvm2

Here's what I get for lvscan:

$ sudo lvscan
  inactive          '/dev/nobara_localhost-live/home' [45.88 GiB] inherit
  inactive          '/dev/nobara_localhost-live/root' [70.00 GiB] inherit

From here I found how to activate a VG. Web searches can often give you a quick answer.

You should be able to run sudo vgchange -a y nobara_localhost-live and activate your installed LVM volumes, then follow the steps I gave above to mount and recover the system.

With that info your root file system should reside on /dev/mapper/nobara_localhost-live-root for mounting it at /mnt.
mount -t ext4 /dev/mapper/nobara_localhost-live-root /mnt should then work.

I think I ran into another issue here

$ sudo vgchange -a y nobara_localhost-live
  2 logical volume(s) in volume group "nobara_localhost-live" now active
[liveuser@localhost-live ~]$ su
[root@localhost-live liveuser]# mount -t ext4 /dev/mapper/nobara_localhost-live-root /mnt
mount: /mnt: special device /dev/mapper/nobara_localhost-live-root does not exist.
       dmesg(1) may have more information after failed mount system call.

Also, I want to thank you for taking your time and helping me out! Really appreciate it!

I did do a quick search, but wasn't able to understand how to fix it.

Try ls /dev and ls /dev/mapper to see what the LVs are shown as. I suspect the error is related to the hyphen (-) in the name of the VG.
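
As a hedged aside on why the mount may have failed: device-mapper escapes a hyphen that is part of a VG or LV name by doubling it, so the node under /dev/mapper would not match the single-hyphen path used above. Something like this (expected, not verified output):

ls /dev/mapper
# control  live-base  live-rw  nobara_localhost--live-home  nobara_localhost--live-root

# The /dev/VG/LV symlinks use the literal names, so this form avoids the escaping:
mount /dev/nobara_localhost-live/root /mnt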

It can be renamed if that is the problem. From the man page for vgrename I see:

USAGE
       Rename a VG.

       vgrename VG VG_new

You might want to rename it anyway for simplicity.

I have to ask for help again, since this didn't make a lot of sense to me.

$ ls /dev
autofs           gpiochip0     lp1        ptmx      tty    tty27  tty46  tty8    ttyS26   usbmon3  vcsu1
block            gpiochip1     lp2        pts       tty0   tty28  tty47  tty9    ttyS27   usbmon4  vcsu2
bsg              hidraw0       lp3        random    tty1   tty29  tty48  ttyS0   ttyS28   usbmon5  vcsu3
btrfs-control    hidraw1       mapper     rfkill    tty10  tty3   tty49  ttyS1   ttyS29   usbmon6  vcsu4
bus              hidraw2       mcelog     rtc       tty11  tty30  tty5   ttyS10  ttyS3    usbmon7  vcsu5
char             hidraw3       mem        rtc0      tty12  tty31  tty50  ttyS11  ttyS30   usbmon8  vcsu6
console          hidraw4       mqueue     sda       tty13  tty32  tty51  ttyS12  ttyS31   vcs      vfio
core             hpet          net        sdb       tty14  tty33  tty52  ttyS13  ttyS4    vcs1     vga_arbiter
cpu              hugepages     ng0n1      sdb1      tty15  tty34  tty53  ttyS14  ttyS5    vcs2     vhci
cpu_dma_latency  hwrng         null       sdb2      tty16  tty35  tty54  ttyS15  ttyS6    vcs3     vhost-net
cuse             initctl       nvme0      sdb3      tty17  tty36  tty55  ttyS16  ttyS7    vcs4     vhost-vsock
disk             input         nvme0n1    sg0       tty18  tty37  tty56  ttyS17  ttyS8    vcs5     watchdog
dm-2             kmsg          nvme0n1p1  sg1       tty19  tty38  tty57  ttyS18  ttyS9    vcs6     watchdog0
dm-3             kvm           nvme0n1p2  shm       tty2   tty39  tty58  ttyS19  udmabuf  vcsa     zero
dma_heap         live-base     nvme0n1p3  snapshot  tty20  tty4   tty59  ttyS2   uhid     vcsa1    zram0
dri              log           nvme0n1p4  snd       tty21  tty40  tty6   ttyS20  uinput   vcsa2
drm_dp_aux0      loop0         nvme0n1p5  stderr    tty22  tty41  tty60  ttyS21  urandom  vcsa3
fb0              loop1         nvme0n1p6  stdin     tty23  tty42  tty61  ttyS22  usb      vcsa4
fd               loop2         nvram      stdout    tty24  tty43  tty62  ttyS23  usbmon0  vcsa5
full             loop-control  port       tpm0      tty25  tty44  tty63  ttyS24  usbmon1  vcsa6
fuse             lp0           ppp        tpmrm0    tty26  tty45  tty7   ttyS25  usbmon2  vcsu
[liveuser@localhost-live ~]$ ls /dev/mapper
control  live-base  live-rw

That makes sense if the VG of interest is not active. ls /dev/mapper shows only the active LVs.

Try the rename as suggested, then activate the new VG and try the ls /dev/mapper again.
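
Putting that together, a minimal sketch of the whole sequence (the new name nobaraVG is only an example):

sudo vgrename nobara_localhost-live nobaraVG   # drop the troublesome hyphenated name
sudo vgchange -a y nobaraVG                    # activation is still needed after the rename
ls /dev/mapper                                 # should now show nobaraVG-root and nobaraVG-home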

I just thought of something else.
It is clear that /dev/nvme0n1p6 is LVM.
It is also clear that you are seeing 2 active LVs (live-base & live-rw).

IME booting from the live USB does not give any LVs mounted by default.
Please also show us the output of mount as well as sudo fdisk -l.

If possible, do this before trying the vgrename command.

For me, an easy solution for such an issue was:
search the web for Super Grub Disk, write it to a bootable device, and let "Super Grub" search for other OSes and start the missing OS.

Follow the Fedora Quick Docs Chapter “Grub…” https://docs.fedoraproject.org/en-US/quick-docs/bootloading-with-grub2/
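
For the chroot route described earlier in this thread, that chapter boils down to roughly the following on a UEFI system (a sketch; the package globs are assumptions, so check the docs for your exact case):

# Inside the chroot:
dnf reinstall shim-\* grub2-efi-\*        # restore the EFI boot files
grub2-mkconfig -o /boot/grub2/grub.cfg    # regenerate the grub configuration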

If your missing OS has already been modified, you may have to reinstall it. If not, you are lucky and can repair it.

Please remember: always install Windows first, and your preferred Linux OS afterwards.

Good luck
:grinning:

This is the output for mount

$ mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=4096k,nr_inodes=1048576,mode=755,inode64)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,size=3258616k,nr_inodes=819200,mode=755,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime,seclabel)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
/dev/sdb1 on /run/initramfs/live type iso9660 (ro,relatime,nojoliet,check=s,map=n,blocksize=2048,iocharset=utf8)
/dev/mapper/live-rw on / type ext4 (rw,relatime,seclabel)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,nosuid,noexec,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=34,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21003)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime,seclabel)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime,seclabel)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime,seclabel)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel,pagesize=2M)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,seclabel,size=8146540k,nr_inodes=1048576,inode64)
vartmp on /var/tmp type tmpfs (rw,relatime,seclabel,inode64)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=1629304k,nr_inodes=407326,mode=700,uid=1000,gid=1000,inode64)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
portal on /run/user/1000/doc type fuse.portal (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)

and this is the output for sudo fdisk -l

 sudo fdisk -l
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10EZEX-00M
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EC2F79D8-28A3-4FF8-88BF-B21FB77D7658


Disk /dev/nvme0n1: 232.89 GiB, 250059350016 bytes, 488397168 sectors
Disk model: Samsung SSD 970 EVO 250GB               
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A720A107-E317-4E63-A89B-BCA5A3318494

Device             Start       End   Sectors   Size Type
/dev/nvme0n1p1      2048    206847    204800   100M EFI System
/dev/nvme0n1p2    206848    239615     32768    16M Microsoft reserved
/dev/nvme0n1p3    239616 242030591 241790976 115.3G Microsoft basic data
/dev/nvme0n1p4 242030592 243267583   1236992   604M Windows recovery environment
/dev/nvme0n1p5 243269632 245366783   2097152     1G Linux filesystem
/dev/nvme0n1p6 245366784 488396799 243030016 115.9G Linux LVM


Disk /dev/sdb: 58.98 GiB, 63333990400 bytes, 123699200 sectors
Disk model: SG Flash        
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x3caf6577

Device     Boot Start      End  Sectors  Size Id Type
/dev/sdb1  *        0 10150559 10150560  4.8G  0 Empty
/dev/sdb2         172    20519    20348  9.9M ef EFI (FAT-12/16/32)
/dev/sdb3       20520    63319    42800 20.9M  0 Empty


Disk /dev/loop0: 4.72 GiB, 5073113088 bytes, 9908424 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 25 GiB, 26845642752 bytes, 52432896 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/live-rw: 25 GiB, 26845642752 bytes, 52432896 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/live-base: 25 GiB, 26845642752 bytes, 52432896 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/zram0: 8 GiB, 8589934592 bytes, 2097152 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

I haven’t done that yet

I was thinking of using Super GRUB2 but I wasn't exactly sure how to reinstall grub. During my last install I messed up pretty badly.

I would like to avoid losing my files to be honest.

Yes, I just learned that first hand. I already had it set up that way, but I had installed Windows 11 N and modified the ISO to remove bloatware and components unnecessary to me, and I may have removed more than necessary, hence the reinstall.

The only thing I see unusual in the output of fdisk -l is that /dev/sda has a gpt partition table, but no partitions have been created. Everything else seems exactly as expected from earlier data you posted.

Now I would suggest that you use the output given earlier from the vgscan & lvscan commands:

Here's what I get for vgscan:

$ sudo  vgscan
  Found volume group "nobara_localhost-live" using metadata type lvm2
Here's what I get for lvscan:

$ sudo lvscan
  inactive          '/dev/nobara_localhost-live/home' [45.88 GiB] inherit
  inactive          '/dev/nobara_localhost-live/root' [70.00 GiB] inherit

Try to rename that VG to something else, such as
sudo vgrename 'nobara_localhost-live' nobaraVG, or something similar to simplify the name.
Then reboot into your live media and repeat the vgscan & lvscan commands to verify the VG name has changed; hopefully the VG will then be active and you can work with those LVs.

It still seems to be inactive after the rename:

$ sudo  vgscan
  Found volume group "nobaraVG" using metadata type lvm2
$ sudo lvscan
  inactive          '/dev/nobaraVG/home' [45.88 GiB] inherit
  inactive          '/dev/nobaraVG/root' [70.00 GiB] inherit