Hi guys!
Chiming in on this topic, since it seems related to my current project: I have set up a btrfs RAID1 array with two identical disks. Each disk has an ext4 partition (/boot), an EFI system partition (/boot/efi), and a LUKS-encrypted partition on which the btrfs RAID1 volume (/) resides. Now I have realised that my ESP on disk 1 is a single point of failure for this machine, even though I wanted RAID for redundancy.
My question is: how do I configure my EFI GRUB install to be rebuilt onto both drives instead of just one? Is that even possible? Or would I have to change the boot device config, rebuild as stated here:
…and then change the config back to the original boot device? Can I automate this process and hook it into GRUB updates or something?
I am not sure I understand the config you are using.
Are both drives in RAID1, or is only the btrfs part in a RAID config?
There have been issues with booting when either /boot/efi or /boot is in RAID, since the system does not understand RAID until the kernel and initramfs image are loaded.
I suspect you are asking about having an ext4 and an EFI partition on each drive, but not in a RAID config, and that you are attempting to duplicate the contents of those partitions so you have a backup.
If that is the case, it would be better to simply establish a routine where, after an update, the contents of the /boot and /boot/efi file systems are copied to the second drive directly. Any changed files would then overwrite the existing copies on the backup, new files would be added, and deleted files would be removed.
A kernel update usually removes the old kernel's files from /boot and the related .conf file from /boot/loader/entries/.
New files for the new kernel are created in the same locations.
It appears the /boot/grub2/grubenv file is updated on every boot.
The /boot/efi/EFI/fedora/grub.cfg file is static and never changes for a given installation.
You suspect correctly: I would like my second (non-RAID) ESP to be bootable without the other drive.
I poked around the files in /boot and /boot/efi on the primary drive, and there are definitely references to the UUIDs of the primary drive, so I would guess GRUB would get confused if I just copied the ESP contents.
So I wanted to ask: is there a way to install the ESP contents “naturally” for the second drive?
EDIT: the /boot/efi2 mount point doesn't really make sense, I think. It should be /boot2/efi or /boot2/efi2, since the /boot mount would be unavailable too if /dev/sde fails.
It would simplify things if you had just one partition for the “boot” files. I don’t know if Grub supports that sort of configuration, but systemd-boot does. I do something similar, but I did it by switching to systemd-boot and writing a custom script that runs rsync to duplicate the relevant contents of the primary/active ESP to the secondary ESP on the other drive.[1]
It looks like Fedora is planning to make significant changes to the way the bootloader works in the near future. That might include support for mirrored system drives: Changes/BootLoaderUpdatesPhase1 - Fedora Project Wiki
Thank you! I will check it out!
I have read about unified kernel images as well, but it sounded like the tooling for that was not super stable yet?
Can I switch to systemd-boot without reinstalling everything? Also: does your script work with Secure Boot or measured boot?
Cheers!
EDIT: since the Changes page also mentions it, I want to explicitly link bootupd. It seems like this will be the tool for the job I want to do in the future?
It can be done,[1] but it is not a “supported” configuration. Unless you enjoy messing with that sort of thing, you are probably better off waiting for the official support from bootupd.
Thanks for the suggestion, but I'm looking for a more generic solution to this problem, since it affects everyone using RAID setups with UEFI. It seems like the official rollout of the bootupd tool will be in F44, so it would be nice if there were a way to keep these systems reliably bootable until then.
As discussed in this thread, there might be a way to do this with existing tooling, and I think fstab would actually be fine if we use different mount points for our second EFI partition, e.g. /boot2/efi2. Then add the degraded option for the btrfs RAID, so it can boot with just one drive.
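For illustration, the fstab entries for such a layout might look something like this. The UUIDs and the subvolume name are placeholders, and whether degraded can go in fstab for the root file system or has to be passed as rootflags=degraded on the kernel command line is something to verify:

```
# Hypothetical UUIDs -- substitute your own from "blkid".
# nofail keeps boot from hanging if the second drive is missing.
UUID=AAAA-0001                             /boot2/efi2  vfat   umask=0077,nofail      0 2
UUID=aaaaaaaa-0000-0000-0000-000000000002  /boot2       ext4   nofail                 0 2
UUID=aaaaaaaa-0000-0000-0000-000000000003  /            btrfs  subvol=root,degraded   0 0
```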
The question remains: (how) can I tell GRUB to install to /boot2/efi2 using the correct drive UUID?
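Installing the files is only half of it; the firmware also needs a boot entry pointing at the second ESP. One way to register it, sketched with efibootmgr (the disk, partition number, and label here are assumptions, and on a Secure Boot setup the loader should normally be shim):

```shell
# Register a UEFI boot entry for the backup ESP.
# /dev/sdf and partition 2 are placeholders -- match your actual layout.
efibootmgr --create \
  --disk /dev/sdf --part 2 \
  --label "Fedora (backup ESP)" \
  --loader '\EFI\fedora\shimx64.efi'
```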
So I tried fuzzing with the commands provided here, and it seems like grub2-mkconfig -o /boot/grub2/grub.cfg uses fstab to get the disk UUIDs for the GRUB configuration. I modified my /etc/fstab, swapping the mount points of the original /boot and /boot/efi with /boot2 and /boot2/efi2, and then tried to install EFI GRUB. It generated all the files, but the folder /boot/loader/entries remains empty even after I regenerate my initrd using dracut. Interestingly, it was possible to boot with the switched fstab: the new boot menu doesn't have any entries, so it sends me to the firmware, which still points to the old entries in /boot2/loader/entries. Then I moved those in hopes of generating new entries, and now my testing VM is borked.
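In case anyone ends up in the same state: on Fedora the BLS entries under /boot/loader/entries are normally created by systemd's kernel-install tool, so it may be able to recreate them. A hedged recovery sketch, not verified against this exact setup:

```shell
# Recreate the BLS entry for the currently running kernel.
# Adjust the version if recovering from a rescue environment where
# "uname -r" does not match the installed kernel.
kver=$(uname -r)
kernel-install add "$kver" "/lib/modules/$kver/vmlinuz"
```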