To merge or to RAID1 on F36?

Hello fellow users of Fedora,

After a few months of giving Fedora 35 GNOME Workstation a try, I recently decided to switch to F36 KDE Workstation in the next few days. For a proper clean reinstall, I thought I'd ask a few questions here.

My use case is an AMD 3700 desktop with 2x NVMe of 1 TB each, 2x SSD of 1 TB each, and several smaller ones. On the advice of the very helpful Wolfshappen during the install of F35, I installed F35 with btrfs on one NVMe. To benefit from btrfs, I merged the second NVMe with the first for / and /home, and the two SSDs with each other for my btrbk snapshots.

My question for my clean reinstall of Fedora KDE: shall I do the same and merge the two NVMes after the install? Or shall I create a RAID1 of them during the install, with the installer?*

The (sole) reason why I merged them after the install was/is Wolfshappen's advice: restoring the kernels, etc. with RAID1 is a pain in the ass.

My question is thus: shall I merge after install again, or RAID1 during the install?*

*It is indeed true that this question has been asked before, but I thought those answers could be out of date. If I followed them blindly, I could end up using an old method with newer software. So I preferred to ask.

Hello @a86ul
I was running Silverblue using two 240 GB SSDs in a RAID1 array with two separate subvolumes mounted at / and at /var. A third 1 TB SSD was set up as a btrfs volume with a subvolume mounted at /home. It worked fine, and though I found nothing really wrong with it, I felt guilty about the storage space wasted by the RAID1 setup. That, combined with how easy it is to create backups using the btrfs snapshot feature, convinced me to forgo the RAID approach for my desktop.
Now I have each subvolume on a separate device. /dev/sda gets the ESP and the ext4 /boot partition, and the rest is set up as a btrfs volume with a mount point at /root/fedora-root/btrfs-top-lvl and a subvolume mounted at /. This allows me to take snapshots for backup, saved under /root/fedora-root/btrfs-top-lvl/, which I make read-only so I can btrfs send/receive them to an external USB SSD.

The same was done for /var: it has its own device with a 16 GB swap partition and a btrfs volume mounted at /var/fedora-var/btrfs-top-lvl, handling snapshots the same way. And the same for /home, except the whole 1 TB SSD is btrfs.

The beauty of this is only seen when you need to recover: the snapshots sent externally are the last-resort recovery, and if an on-device snapshot is still good, you can make a read-write snapshot from it and simply mv it to the desired mount point after deleting the subvolume being replaced, because yes, you can move subvolumes.
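As a rough sketch, the snapshot/send/recover cycle described above could look like this. The paths match my layout; the subvolume name root and the backup mount point /mnt/backup are illustrative assumptions, so adjust them to your own setup:

```shell
# Take a read-only snapshot of the root subvolume (read-only is required
# for btrfs send):
btrfs subvolume snapshot -r /root/fedora-root/btrfs-top-lvl/root \
    /root/fedora-root/btrfs-top-lvl/root-snap

# Send it to a btrfs-formatted external USB SSD mounted at /mnt/backup:
btrfs send /root/fedora-root/btrfs-top-lvl/root-snap | \
    btrfs receive /mnt/backup

# Recovery from a still-good on-device snapshot: make a read-write
# snapshot of it, delete the broken subvolume, and move the new one
# into place (subvolumes can be moved with plain mv):
btrfs subvolume snapshot /root/fedora-root/btrfs-top-lvl/root-snap \
    /root/fedora-root/btrfs-top-lvl/root-new
btrfs subvolume delete /root/fedora-root/btrfs-top-lvl/root
mv /root/fedora-root/btrfs-top-lvl/root-new \
   /root/fedora-root/btrfs-top-lvl/root
```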


There are different use cases for raid1 and backups (using any method, including btrfs send/receive).

btrfs raid1 on separate physical drives gets you a really different setup than mdadm raid1. Good: btrfs unambiguously knows on every read whether metadata or data is good, and if there's a problem (which is rare) it automatically grabs the good copy from the second drive and overwrites the bad copy on the first. Not great: we don't have automatic degraded (missing/failed drive) boot. If either drive fails, boot will fail right now. This is a limitation in dracut at the moment.
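For reference, a btrfs raid1 setup like this is created with mkfs.btrfs, and the self-healing check can be triggered manually with a scrub. The device names and mount point below are purely illustrative, not from any specific install:

```shell
# Create one btrfs filesystem across two drives, mirroring both data (-d)
# and metadata (-m):
mkfs.btrfs -d raid1 -m raid1 /dev/nvme0n1p3 /dev/nvme1n1p1

# A scrub reads every copy, verifies checksums, and repairs any bad copy
# from the good mirror:
btrfs scrub start /mnt
btrfs scrub status /mnt
```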

For most folks, I'm more of a fan of regularly backing up, even if you do raid1. Btrfs snapshots with send/receive make this quite cheap for workloads with few updates. It's so cheap you can do it every hour, because btrfs can very cheaply figure out which files have been created, deleted, or modified since the last snapshot, without the deep traversal other tools need. You can of course still use other tools you're familiar with. But if you have tons of files that don't change a lot, the "scan" on both sides needed to figure out which files changed, and thus which need the incremental backup, can be an inhibition to frequent backups. Btrfs figures this out in a few seconds.
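The incremental send mentioned above works by giving btrfs send a parent snapshot, so only the differences travel over the pipe. A minimal sketch, with assumed snapshot paths and a backup drive mounted at /mnt/backup:

```shell
# Initial full backup: a read-only snapshot, sent whole.
btrfs subvolume snapshot -r /home /home/.snapshots/home.1
btrfs send /home/.snapshots/home.1 | btrfs receive /mnt/backup

# Hourly incremental: take a new read-only snapshot and send only the
# changes relative to the previous one (-p = parent). The parent must
# already exist on the receiving side.
btrfs subvolume snapshot -r /home /home/.snapshots/home.2
btrfs send -p /home/.snapshots/home.1 /home/.snapshots/home.2 | \
    btrfs receive /mnt/backup
```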

While you can do it manually on the command line, you might consider looking at btrbk; it is in the Fedora repos.
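btrbk automates that snapshot/send/prune cycle from a config file (by default /etc/btrbk/btrbk.conf, where you declare your volumes, subvolumes, targets, and retention). The commands themselves are simple; always dry-run first:

```shell
# Install from the Fedora repos:
sudo dnf install btrbk

# Show what btrbk would do, without touching anything:
sudo btrbk dryrun

# Actually create snapshots and transfer them to the configured targets:
sudo btrbk run
```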


Thanks for your two replies.

However, it seems I wasn't clear enough; that was due to an "aha moment" in the middle of the night, my bad. What I mean is

Thanks, and with my clarification above, it seems we both favour btrfs RAID1.

Thanks for this extra info, very helpful. With my clarification above, you're thus advising option 2: doing a JBOD install with btrfs on nvme1 and then merging in nvme2 (btrfs RAID1).

In case option 2 is indeed the best, then option 1 is indeed “a pain in the ass”, which is “not great”.

Okay, I just did the reinstall of F36 over an existing F35 system, both set up with btrfs RAID1.

It seems that Anaconda automatically recognised the btrfs RAID1 (kinda option 2) of F35 correctly, set everything up correctly, and installed F36 KDE over F35 GNOME without any issues.

Thus my question here seems to be an irrelevant one, because Fedora seems to prefer this btrfs RAID1 (kinda option 2). However, it was quite an instructive thread.

Anyhow, thanks for all…

I’m not sure what “and then merge nvme2” means, or how this is any different from starting out with raid1 (option 1). What commands, exactly, do you intend to run to do this merge?

To add nvme disk 2, with nvme disk 1, into btrfs RAID1. See also,
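For anyone finding this later, "merging" a second NVMe into an existing single-device btrfs filesystem and converting it to RAID1 would look roughly like this. The device names are illustrative assumptions, not the ones from my machine:

```shell
# Add the second NVMe to the filesystem mounted at /:
btrfs device add /dev/nvme1n1 /

# Rebalance so both data and metadata are converted to raid1
# (mirrored across the two devices):
btrfs balance start -dconvert=raid1 -mconvert=raid1 /

# Verify the profiles and per-device allocation afterwards:
btrfs filesystem usage /
```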

I cannot remember it, sorry. As I already wrote here, Anaconda recognised the previous settings of F35 (thus including the merged btrfs RAID1). So I did not have to merge/add nvme2 to nvme1.