After a few months of giving Fedora 35 GNOME Workstation a try, I have decided to switch to F36 KDE in the next few days. For a proper clean reinstall, I thought I would ask a few questions here first.
My use case is an AMD Ryzen 7 3700 desktop with two 1 TB NVMe drives, two 1 TB SSDs, and several smaller ones. On the advice of the very helpful Wolfshappen during the install of F35, I installed F35 with btrfs on one NVMe. To benefit from btrfs, I merged the second NVMe with the first for / and /home, and the two SSDs with each other for my btrbk snaps.
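For reference, the merge itself was just two commands after the install (the device name here is an example, not my exact one):

```shell
# Add the second NVMe to the Btrfs filesystem mounted at /
sudo btrfs device add /dev/nvme1n1 /

# Rebalance so existing data gets spread across both devices
sudo btrfs balance start /
```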
My question for my clean reinstall of Fedora KDE: shall I do the same and merge the two NVMe's after the install? Or shall I create a RAID1 from them during the install, with the installer?*
The (sole) reason why I merged them after install was, and is, Wolfshappen's advice: restoring the kernels etc. with RAID1 is a pain in the ass.
My question is thus: shall I merge after install again, or RAID1 during the install?*
*It is indeed true that this question has been asked before, but I thought those answers could be out of date. If I followed them blindly, I could end up using an old method with newer software. So I preferred to ask.
I was running Silverblue using two 240 GB SSDs in a RAID1 array with two separate subvolumes mounted at / and at /var. A third 1 TB SSD was set up as a Btrfs volume with a subvolume mounted at /home. It worked fine, and though I found nothing really wrong with it, I felt guilty about the storage space wasted by the RAID1 setup. That, combined with the ease of creating backups using Btrfs snapshots, convinced me to forgo the RAID approach for my desktop.
Now I have each subvolume on a separate device. /dev/sda gets the ESP and the ext4 /boot partition, and the rest I set up as a Btrfs volume with a mount point at /root/fedora-root/btrfs-top-lvl and a subvolume mounted at /. This lets me take snapshots for backup, saved under /root/fedora-root/btrfs-top-lvl/, which I make read-only so I can btrfs send/receive them to an external USB SSD. The same was done for /var: it has its own device with a 16 GB swap partition and a Btrfs volume mounted at /var/fedora-var/btrfs-top-lvl, and it handles snapshots the same way. And the same for /home, except the whole 1 TB SSD is Btrfs. The beauty of this only shows when you need to recover: the snapshots sent externally are the last-resort recovery, and if the on-device one is still good, you can make an RW snapshot from it, delete the subvolume being replaced, and simply mv the new snapshot to the desired mount point, because yes, you can move subvolumes.
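Roughly, the backup and recovery flow looks like this. The subvolume name root and the backup mount point /mnt/backup are examples for illustration, not my exact names:

```shell
# Take a read-only snapshot of the root subvolume
sudo btrfs subvolume snapshot -r /root/fedora-root/btrfs-top-lvl/root \
    /root/fedora-root/btrfs-top-lvl/root-snap

# Ship it to the external USB SSD (mounted at /mnt/backup here)
sudo btrfs send /root/fedora-root/btrfs-top-lvl/root-snap |
    sudo btrfs receive /mnt/backup

# Recovery: make a writable snapshot of a good read-only one,
# delete the broken subvolume, and move the new one into place
sudo btrfs subvolume snapshot /root/fedora-root/btrfs-top-lvl/root-snap \
    /root/fedora-root/btrfs-top-lvl/root-new
sudo btrfs subvolume delete /root/fedora-root/btrfs-top-lvl/root
sudo mv /root/fedora-root/btrfs-top-lvl/root-new \
    /root/fedora-root/btrfs-top-lvl/root
```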
There are different use cases for raid1 and for backups (using any method, including btrfs send/receive).
btrfs raid1 on separate physical drives gets you a really different setup than mdadm raid1. Good: Btrfs unambiguously knows whether metadata or data is good on every read, and if there's a problem (which is rare) it automatically grabs the good copy from the 2nd drive and overwrites the bad copy on the 1st drive. Not great: we don't have automatic degraded (missing/failed drive) boot. If either drive fails, boot will currently fail. This is a limitation in dracut at the moment.
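If you do want btrfs raid1, converting an existing single-device filesystem after install is straightforward. A sketch with example device names, not a tested recipe for your exact layout:

```shell
# Add a second device to the filesystem mounted at /
sudo btrfs device add /dev/nvme1n1p3 /

# Convert both data and metadata to the raid1 profile
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /

# Verify checksums across both copies (repairs from the good copy)
sudo btrfs scrub start -B /
```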
For most folks, I'm more of a fan of regular backups, even if you do raid1. Btrfs snapshots with send/receive make this quite cheap for a workload with few updates. It's so cheap you can do it every hour, because Btrfs can very cheaply figure out what files have been created, deleted, or modified since the last snapshot, without the deep traversal other tools need. You can of course still use other tools you're familiar with. But if you have tons of files that don't change much, the "scan" on both sides needed to figure out which files changed, and thus which files need the incremental backup, can be an inhibition to frequent backups. Btrfs figures this out in a few seconds.
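A minimal incremental send/receive cycle looks like this; the snapshot and backup paths are examples:

```shell
# Initial full backup: read-only snapshot, then send the whole thing
sudo btrfs subvolume snapshot -r /home /home/.snapshots/home.1
sudo btrfs send /home/.snapshots/home.1 | sudo btrfs receive /mnt/backup

# Every hour after that: new snapshot, send only the delta (-p = parent)
sudo btrfs subvolume snapshot -r /home /home/.snapshots/home.2
sudo btrfs send -p /home/.snapshots/home.1 /home/.snapshots/home.2 |
    sudo btrfs receive /mnt/backup
```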
While you can do all this manually on the command line, you might consider looking at btrbk; it's in the Fedora repos.
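A minimal btrbk.conf sketch for a layout like the ones above; the volume path, subvolume name, and retention values are assumptions you'd adapt:

```
# /etc/btrbk/btrbk.conf (sketch)
snapshot_preserve_min   2d
snapshot_preserve       14d
target_preserve         20d 10w

volume /root/fedora-root/btrfs-top-lvl
  snapshot_dir snapshots
  target /mnt/backup
  subvolume root
```

With that in place, `sudo btrbk run` takes the snapshot and does the send/receive in one step, and `btrbk run -n` dry-runs it first so you can see what it would do.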