Missing GRUB entry for new install of F38

efibootmgr -B -b 000A && efibootmgr -B -b 000B

are you saying that they are not going to be generated anew?
m.

Let’s break down the command; a short verification sketch follows the list:

  • -B: This option (--delete-bootnum) tells efibootmgr to delete the boot entry selected with -b, removing it from the firmware’s boot entry list.

  • -b 000B: This option specifies the boot entry you want to delete. Here, 000B refers to
    Boot000B* debian HD(1,GPT,0551b26a-20df-4d7c-9049-727f944c4154)

  • &&: This is the “AND” operator, which executes the second command only if the first command is successful.
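A minimal sketch of how this would typically be run, assuming 000A and 000B are still the numbers shown on your system (efibootmgr generally needs root to modify entries):

sudo efibootmgr                                              # list current entries and note the numbers
sudo efibootmgr -B -b 000A && sudo efibootmgr -B -b 000B     # delete both entries
sudo efibootmgr                                              # confirm they are gone and BootOrder looks sane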

I mean, I already tried efibootmgr -B -b 0003 and efibootmgr -B -b 0004, and they came back as 000A and 000B.
If I try efibootmgr -B -b 000A && efibootmgr -B -b 000B, aren’t they going to come back again? Anyway, I’ll try and report.

This seems to be the boot partition on the second drive, and the firmware automatically sees that.

Entries 0005 & 0009 seem to be a USB device, and if so they should be removed while trying to solve the other boot issues. The entries for both are in a form that is totally foreign to me; I have never seen such lines in efibootmgr output.


The UUID 0551b26a-20df-4d7c-9049-727f944c4154 is the partition UUID, which can be matched against the output of lsblk -o NAME,PARTUUID. I guess that this is the ESP on the second disk, and if you really want to get rid of it, you may have to remove the partition that holds the ESP on that disk. Just make sure not to remove the ESP you are actually booting from.
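A minimal sketch of that matching, plus a check of which ESP you are actually booting from, assuming the second disk shows up as /dev/sdb:

lsblk -o NAME,PARTUUID | grep -i 0551b26a    # find which partition carries that PARTUUID
findmnt /boot/efi                            # shows the device currently mounted as your ESP
sudo parted /dev/sdb print                   # inspect the second disk before touching anything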

Some UEFI implementations can have very strange behavior and may generate entries on their own for things they find in the ESP (EFI System Partition).

[mario@fedora ~]$ lsblk -o UUID,NAME,PARTUUID,MOUNTPOINTS
UUID                                 NAME   PARTUUID                             MOUNTPOINTS
                                     sda                                         
66C8-C0B1                            ├─sda1 d37e5584-95c3-4281-ac44-3c98d101a185 /boot/efi
3ec538e7-43b3-4585-a545-fdf4db63799f ├─sda2 26f4a3eb-d729-42c1-b005-0b03d552ea58 /boot
9f1ad084-bcae-4ccf-beed-5177bc7fe703 └─sda3 43602bae-18d7-49db-84c0-f68ba1aabe9f /home
                                                                                 /
                                     sdb                                         
926E-F477                            ├─sdb1 0551b26a-20df-4d7c-9049-727f944c4154 
f950f6cb-5c21-4abc-9711-d67f4e7d3a24 ├─sdb2 22281ac0-8f3f-4a22-ae14-08b63e802e49 
1f669645-e49f-48f3-9bf8-9e6f9a911db1 ├─sdb3 abec815d-fc77-4772-b711-ea70118553e3 
2fc4d96f-2ea7-4322-aed5-cc6e0051654d └─sdb4 8f9da513-d576-4580-b36a-9f24dd4eb810 
                                     sdc                                         
d5b23ea3-59bf-4425-9b46-1fdcaa34a086 └─sdc1 aa0a7a15-01                          /run/media/mario/d5b23ea3-59bf-4425-9b46-1fdcaa34a086
                                     sr0                                         
                                     zram0                                       [SWAP]

So, sda is my main disk, with Fedora 38, working. sdc is a 1 TB disk with most of my data. sdb is a disk I no longer remember how I got. If I try to mount two of its partitions, I get:
(Two screenshots from 2024-04-10 showing the results of the mount attempts)

Is there a way to check it and use it somehow? Should I get rid of the disk?
Thanks, cheers
mario

You could run the command lsblk -f and see whether those partitions are formatted with a file system, and of what type.
You could also use sudo gparted /dev/sdb to find out more details about the partitions and modify them if desired.
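A minimal sketch of those checks, assuming the disk in question really is /dev/sdb:

lsblk -f /dev/sdb        # filesystem type, label and UUID for each partition
sudo blkid /dev/sdb*     # alternative view of the filesystem signatures
sudo gparted /dev/sdb    # graphical partition editor limited to that disk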

It may be best to just unplug that device if its reliability is questionable. Otherwise you can delete the existing partitions, create a new one, and then create a file system there. This can all be done using gparted.
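If you prefer the command line, a rough equivalent would look something like the sketch below. It is destructive and assumes /dev/sdb really is the disk you want to wipe, so double-check with lsblk first:

sudo wipefs -a /dev/sdb                                  # remove old filesystem and partition-table signatures
sudo parted -s /dev/sdb mklabel gpt                      # new empty GPT partition table
sudo parted -s /dev/sdb mkpart primary ext4 1MiB 100%    # one partition spanning the whole disk
sudo mkfs.ext4 /dev/sdb1                                 # create the file system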

Gnome Disks has options to check drive health. It doesn’t always identify drives that are about to fail, but if it reports that the drive is not healthy you should stop using it (or it will prevent you from using it). My experience, up to retiring six years ago, with rotating disks (in high-quality systems with UPSes) has been that manufacturing tolerances are such that, for drives used 24x7, the failure rate climbs sharply soon after the end of the warranty. Drives that are used only a few hours a day can last well past the warranty period.
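For a command-line view of the same SMART health data, something like this would typically work on Fedora, assuming the questionable disk is /dev/sdb (smartmontools may need to be installed first):

sudo dnf install smartmontools    # if smartctl is not already available
sudo smartctl -H /dev/sdb         # overall health self-assessment
sudo smartctl -a /dev/sdb         # full SMART attributes and error log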