How easy is it to add a device to the btrfs filesystem?

I had been sitting with a problem for some time: my /var subvolume was getting full. When I originally installed Workstation on this machine, it was laid out with / and /boot/efi on /dev/sda, with /var and a fedora-swap on /dev/sdb. I had since increased the system memory and no longer saw much, if any, use of the swap partition on disk, so I wanted to reclaim that space. Plus, my /var was running out of room while my system root partition was largely underutilized. I read a bit about btrfs and was excited to see that the filesystem can span multiple devices. So I shrank my root partition to free up the space and simply added the newly freed partition to the existing mount for /var with the command sudo btrfs device add /dev/sda2 /var, then ran a balance on the subvolume using Btrfs Assistant. This is just another reason to like btrfs and appreciate it being the default filesystem for Fedora. :fedora: :guitar:
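
For anyone who wants to do the same from a terminal, the whole thing boils down to roughly the following (the device and mount point are from my layout, so adjust them for yours; the balance command is just the CLI equivalent of what I did in Btrfs Assistant):

    # add the freed-up partition to the filesystem mounted at /var
    sudo btrfs device add /dev/sda2 /var

    # optionally spread the existing data across both devices
    sudo btrfs balance start /var

    # confirm both devices are now part of the filesystem
    sudo btrfs filesystem show /var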

I don’t know what size or type your devices are, but with the current prices of larger devices it would seem prudent to install a larger drive for storage and migrate everything to it. I note that they are named sda & sdb, but those could be either HDD or SATA SSD devices.

I recently purchased a 2 TB NVMe device for only $70.

Lol, maybe for some, but I have a sufficient amount of storage and just needed to reallocate it accordingly. I posted this as a Fedora topic more for others who may need to do something similar, and quite frankly I was impressed with how easy it was. So I’ll mark the original post as the solution. Guess I can’t do that … :thinking:
The solution is sudo btrfs device add <device path> <mount point> (optionally forced with -f). Then I used the Btrfs Assistant application to balance the subvolume for /var. Just way too cool!
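
To double-check the result, these read-only commands (mount point assumed to be /var) list the member devices and show how the allocation is spread across them:

    # devices that belong to the filesystem
    sudo btrfs filesystem show /var

    # per-device allocation, handy before and after a balance
    sudo btrfs device usage /var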


Not detracting from btrfs, but this would have been a simple task using the previous logical volume manager as well. I’ve used btrfs and it has a lot of great features. My difficulty was with a full btrfs file system when snapshots are used: it’s difficult to make room by deleting old files, since the snapshots still reference their extents.

But not so easy if you were using the default layout with ext4 and no LVM (LVM is still available), which is likely the most common case for a lot of users.

You are correct.
What a user must remember though is that things progress. ext[2,3,4] came before lvm which came before btrfs. Things constantly change, sometimes better and sometimes worse. Each user must choose to use the default or to select something different as their preferred file system.

I have used LVM for many years and still use it on all my currently installed systems. I do test btrfs in a VM, but as yet am not satisfied enough to commit my running systems to it. One relevant factor is that I use multiple drives in (software) raid 6 or raid 5, and from what I understand btrfs and raid do not yet play well together, even after many years of btrfs development. I suspect that if I were using hardware raid this would not really be an issue, but that is a moot point since I do not have hardware raid.

Important distinctions with btrfs device add/remove:

  • it does a kind of mkfs for you, i.e. it writes superblocks on the device you’re adding; and it also wipes the signature from the first superblock on device removal
  • the file system is resized (grow when adding, shrink when removing)
  • extents are migrated to other storage upon removal, but not when adding
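
A minimal sketch of the add side, with placeholder device and mount point names:

    # adding a device writes btrfs superblocks to it and grows the filesystem
    sudo btrfs device add /dev/sdX1 /mnt/data

    # the new device is now listed and the total filesystem size has grown
    sudo btrfs filesystem show /mnt/data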

When you add a device, btrfs doesn’t currently balance automatically. In some sense it will passively rebalance, in that, depending on the block group profile, it will tend to favor writing to the drive with the most remaining free space. For single and dup profiles you can just add the device; there’s no advantage to a manual balance.
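
If you’re curious how writes are being spread after an add, the usage command shows per-device allocation without changing anything (mount point is a placeholder):

    # overall and per-device allocation for the filesystem
    sudo btrfs filesystem usage /mnt/data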

When you remove a device, btrfs will lock the block groups on that device and all free space, shrink the file system, and replicate the (locked) block groups from the device being removed onto the other devices (the ones not being removed). This is somewhat like LVM’s pvmove.
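
For comparison, a rough LVM equivalent of that single command, with made-up volume group and device names:

    # LVM: migrate extents off the outgoing PV, then drop it from the VG
    sudo pvmove /dev/sdX1
    sudo vgreduce myvg /dev/sdX1

    # btrfs: one command migrates the block groups and shrinks the filesystem
    sudo btrfs device remove /dev/sdX1 /mnt/data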

So on the one hand btrfs is doing more with fewer commands. On the other hand, by collapsing multiple tasks into one command, it’s not always obvious what steps btrfs is doing and what steps the user is still responsible for.

There is a better command for intentionally migrating data from one device to another: btrfs replace. It leverages the scrub code by effectively creating a temporary, hidden raid1 between the two devices; the scrub then replicates the block groups from one drive to the other, maintaining COW semantics so that any file system changes are written to both drives at the same time. It’s faster and safer than the two-step device add followed by device remove.
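
In command form a replace looks roughly like this (paths are placeholders; the target device must be at least as large as the source):

    # start replacing the old device with the new one
    sudo btrfs replace start /dev/sdOLD1 /dev/sdNEW1 /mnt/data

    # replace runs in the background; check on it with
    sudo btrfs replace status /mnt/data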

Definitely consult the man page for btrfs replace, e.g. replace currently does not include a resize. If you’re replacing a device with a larger device, you’ll probably want to resize the file system after the replace to utilize all the space. If you’re replacing a device with a smaller device, the replace command won’t start, but depending on how much data is on the device you might be able to work around it by shrinking the file system on the outgoing device first so it fits on the smaller one.
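
The post-replace resize is done per device, addressed by devid; something like the following, where devid 2 is only an example and btrfs filesystem show tells you the real one:

    # find the devid of the new device
    sudo btrfs filesystem show /mnt/data

    # grow that device's portion of the filesystem to the full device size
    sudo btrfs filesystem resize 2:max /mnt/data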

It should work OK or there’s a bug. Btrfs plays well on mdadm raid or LVM raid. But if Btrfs is not responsible for the raid, you will miss out on some of its self-healing properties, because the underlying software raid has no access to Btrfs’s checksums.
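
To illustrate the difference, with placeholder device names: when Btrfs manages the raid itself, a scrub can repair a bad copy from the good one; on top of an md array it can only detect the corruption.

    # btrfs-managed raid1 for data and metadata: scrub can self-heal
    sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdX /dev/sdY

    # btrfs on top of an md array: one logical device, detection only
    sudo mkfs.btrfs /dev/md0

    # either way, scrub verifies checksums on the mounted filesystem
    sudo btrfs scrub start /mnt/data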

LVM + XFS :100:

Hello @chrismurphy,
Thanks for the elaboration on what device add/remove does. In my case I was trying to add linear storage as opposed to RAID capabilities.

Only for raid 5 and raid 6, but that will be solved by the “raid-stripe-tree” introduced in kernel 6.7.

New features:

  • raid-stripe-tree
    New tree for logical file extent mapping where the physical mapping
    may not match on multiple devices. This is now used in zoned mode to
    implement RAID0/RAID1* profiles, but can be used in non-zoned mode as
    well. The support for RAID56 is in development and will eventually
    fix the problems with the current implementation. This is a backward
    incompatible feature and has to be enabled at mkfs time.

https://lore.kernel.org/lkml/cover.1698679287.git.dsterba@suse.com/
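
Since it’s a backward-incompatible, mkfs-time feature, it has to be selected when the filesystem is created. With a recent enough btrfs-progs that looks something like the following; check mkfs.btrfs -O list-all for the exact feature name and whether your version has it:

    # list the optional features this btrfs-progs build supports
    mkfs.btrfs -O list-all

    # create a filesystem with the raid-stripe-tree feature enabled
    sudo mkfs.btrfs -O raid-stripe-tree -d raid0 -m raid1 /dev/sdX /dev/sdY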

Yeah, Btrfs built-in raid is its own thing. Btrfs raid 0, 1, 1c3, 1c4, and raid10 are considered stable. Btrfs parity raid (raid56 should probably be pronounced “raid five and six”, not “fifty-six” :smiley: ) is experimental, and while it’s not inherently unsafe, you need to be extra familiar with its idiosyncrasies to make sure your workload and expectations can tolerate it.
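
For what it’s worth, an existing multi-device filesystem can also be converted between those profiles with balance filters, e.g. (mount point is a placeholder):

    # convert data and metadata to raid1 across the devices already in the filesystem
    sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data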