It looks like there is something missing in your output; I can't find your root volume.
What is the output of lsblk --fs?
Please format the output as a code block; to do so, select the output text in the compose/editor window and click the </> symbol. The result will look like this:
As your block device contains a Btrfs filesystem, you may want to read this great article:
Edit:
It looks like your Btrfs volume spans multiple physical disks.
So we need the output of sudo btrfs filesystem usage /
Edit2:
I’m afraid I’m a bit slow.
It looks like you are already utilizing both disks for your logical Btrfs volume, as a mixed single/RAID1 setup: for data you have selected single mode, and for metadata and system you have selected RAID1.
AT YOUR OWN RISK: If you want to use the second disk as a separate volume, you first have to remove it from your current setup by balancing the volume to the single profile.
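One way that balancing step could look (a sketch, untested on this exact setup; it converts the RAID1 metadata and system chunks to single so that a device can be removed afterwards, and -f is required because redundancy is reduced, as noted later in this thread):

```
# convert metadata and system chunks from RAID1 to single;
# -f (force) is needed because this reduces redundancy
sudo btrfs balance start -f -mconvert=single -sconvert=single /
```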
It’s probably a setup that you did not really want.
The current layout does not allow you to remove either of those two drives from the system.
I think it would be better to have / (and /home) on the same disk as /efi and /boot (/dev/sdb3), and to create a second file system on /dev/sda1 that you can mount. But that's just my preference.
Thanks, I will read the btrfs article. Meanwhile let me add some info and rephrase my objective. Fedora 40 was installed from the live f40 distribution CD, and defaulted to ‘automatic partitioning’. (Subsequently upgraded to f41.)
The hardware is a 1TB hard drive plus a 256GB SSD. The 1TB hard drive got /home, and the SSD got /boot. I would like to use the unused 231.3GB of space on the SSD (/dev/sdb3) for something, e.g. to mount it on /extra (currently empty).
FYI here is /etc/fstab:
Your partition 3 on disk sdb is not empty/unused; it contains metadata and system data from Btrfs, as you clearly posted in your previous reply here:
It only looks “empty/unused” to lsblk, but in fact it's not.
Formatting sdb3 may destroy your Btrfs volume on sda1 (your 1TB disk) too. But most likely the RAID will drop to a degraded state and your system may no longer boot. I didn't test it myself, so I'm not sure…
Anyway, you should disband your RAID configuration as mentioned in my previous post.
Or, as an alternative, keep it as is and initialize sdb3 for data usage with sudo btrfs balance start -dconvert=single /
Your configuration will change from:
```
              Data      Metadata   System
Id Path       single    RAID1      RAID1     Unallocated Total    Slack
-- ---------- --------- ---------- --------- ----------- -------- -------
 1 /dev/vda3    9.00GiB  512.00MiB  32.00MiB     8.88GiB 18.41GiB       -
 2 /dev/vdb1          -  512.00MiB  32.00MiB    19.47GiB 20.00GiB 3.00KiB
-- ---------- --------- ---------- --------- ----------- -------- -------
   Total        9.00GiB  512.00MiB  32.00MiB    28.35GiB 38.41GiB 3.00KiB
   Used         6.91GiB  257.62MiB  16.00KiB
```
to
```
              Data      Metadata   System
Id Path       single    RAID1      RAID1     Unallocated Total    Slack
-- ---------- --------- ---------- --------- ----------- -------- -------
 1 /dev/vda3    3.00GiB  512.00MiB  32.00MiB    14.88GiB 18.41GiB       -
 2 /dev/vdb1    6.00GiB  512.00MiB  32.00MiB    13.47GiB 20.00GiB 3.00KiB
-- ---------- --------- ---------- --------- ----------- -------- -------
   Total        9.00GiB  512.00MiB  32.00MiB    28.35GiB 38.41GiB 3.00KiB
   Used         6.91GiB  257.91MiB  16.00KiB
```
Did you see the change in the Data column?
You may want to add the btrfs tag to your first post so that some “btrfs experts” can jump in to assist you.
Edit:
Now tested: I wiped vdb1 in my test VM, and it hasn't booted since.
It seems that the “unused” space is part of the Btrfs volume fedora_ads8 6d175aa0-db5e-4ac8-bddb-c038e1215c67, so it is not unused. If you do sudo mkdir /extra, it should be usable inside the root subvolume and will be mounted at boot when the root subvolume is mounted.
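For instance, a quick sanity check (assuming the Fedora default layout, where the root subvolume is mounted at /):

```
sudo mkdir /extra   # created inside the existing root subvolume
df -h /extra        # shows it sharing the same Btrfs space as /
```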
Formatting either sda1 or sdb3 has the same effect: the Btrfs file system will be damaged as a result.
While it is possible to mount the file system read-only and degraded with a missing (formatted) drive, the missing drive's data will be unrecoverable; only the data on the surviving drive can be recovered. This is only possible because of RAID1 metadata, which ensures two copies of the file system on different drives.
The two partitions are being used for a single file system, and the user has no control over the allocation. What I expect is that once the free space on sda1 equals the free space on sdb3, Btrfs will alternate allocation of data block groups (in 1GiB increments) between the drives. So it will use sda1 for a while before it starts allocating new data block groups to sdb3.
But metadata (the file system itself) will be written to both drives at all times, which is a plus for reliability. And statistically you might need it: with double the drives, there is twice the chance that one of them will fail in some way.
The main thing to do is consistently back up important data. This axiom leads to less stress.
That hyphen means no data block groups have been allocated on sdb3 yet.
Once sda1 unallocated is less than sdb3 unallocated, Btrfs will create data block groups on sdb3, and alternate between the two drives. Alternate is approximate. The device that has the most unallocated space will have a data block group created on it.
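If you want to watch this happen over time, the standard btrfs-progs commands show the per-device allocation:

```
# per-device view of allocated block groups and unallocated space
sudo btrfs device usage /

# or the table view quoted earlier in this thread
sudo btrfs filesystem usage -T /
```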
One small adjustment: you will need the -f flag on the first command, because it reduces redundancy by going from the RAID1 profile to single.
If the OP wants a single-drive Btrfs for Fedora, it's fine to remove either sda1 or sdb3 from the Btrfs file system. The former will take a while to complete (the command will appear to hang) because Btrfs needs to replicate the block groups from sda1 to sdb3 before sda1 can be removed. The removal of sdb3 would be fairly quick since there's nothing on it.
For what it's worth, a file system resize is implied any time a device is added or removed. So it really is only these two commands.
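For the sdb3 case (detaching it so it can be reformatted separately), the complete pair would look something like this; the device names are the ones from this thread, so verify yours with lsblk first:

```
# 1) drop RAID1 redundancy for metadata and system chunks (-f required)
sudo btrfs balance start -f -mconvert=single -sconvert=single /

# 2) detach the member; the file system resize is implied
sudo btrfs device remove /dev/sdb3 /
```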
Keeping this as it is now doesn't really hurt anything. It's true that if one drive dies, the system won't boot, but that would also be true with a single drive.
It is possible to partly recover the data remaining on the working drive in such a case, because of the RAID1 metadata. But it will require mount -o ro,degraded, which is not obvious. And there will be a lot of complaints from the kernel, because it will still try to read the missing files, so the journal log will have many complaints about that.
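A recovery sketch under those assumptions (sda1 being the surviving member; the mount point and the backup target are placeholders to adapt):

```
# mount the surviving member read-only in degraded mode
sudo mount -o ro,degraded /dev/sda1 /mnt

# copy off whatever is still readable (expect kernel errors in the journal)
rsync -a /mnt/home/ /path/to/backup/
```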
From the above discussion I conclude that the btrfs configuration I have is not suitable for what I intended; it was my error in choosing ‘automatic partitioning’ during installation.
I want to be able to use (parts of) both the 1TB drive AND the SSD for users' files for various purposes. I am willing to scrap my present configuration and reinstall from scratch if I can be assured that I will be able to choose partitioning options during a reinstall to do this. Should I opt for LVM instead of Btrfs? Should I pre-partition either drive manually before or during installation? Any suggestions will be appreciated.
It was not your fault for selecting “automatic partitioning”; rather, the mistake was selecting both disks for auto-partitioning, which led to this result. I just tested it in a VM (I love testing) and it behaves the same…
You may want to start over: scratch everything on both volumes and select only your 1TB disk or your 250GB disk for auto-partitioning. Fedora 42 introduced a new installer that looks a bit different; just make sure you select only one disk. This disk will become your primary disk, containing all three partitions for your OS: sdX1 (vfat, for EFI), sdX2 (boot and kernel files), and sdX3 (your Btrfs file system with two subvolumes, root and home, sharing the same disk space).
After your system is installed successfully, go ahead and create a partition on the remaining disk and set the mount point to /extra or whatever you desire…
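From the command line, that second step might look like this (a sketch; /dev/sdX1 stands for whichever partition you created on the leftover disk, and the label is arbitrary):

```
# create a file system on the new partition and mount it at /extra
sudo mkfs.btrfs -L extra /dev/sdX1
sudo mkdir -p /extra
sudo mount /dev/sdX1 /extra

# make the mount persistent across reboots
echo 'LABEL=extra /extra btrfs defaults 0 0' | sudo tee -a /etc/fstab
```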
If the 1TB drive is a mechanical HDD, you may want to install your OS entirely on the 250GB SSD instead, for faster startup and program launches, and use the 1TB disk for /home or extra (/extra) data.
If you want something completely different, you may pre-allocate your partitions and volumes and select “mount point assignment” during installation (this is for advanced users…). An example could look like this:
/dev/sda1 with 1GB vfat EFI System partition mounted as /boot/efi
/dev/sda2 with 2GB ext4 labeled Boot mounted as /boot
/dev/sda3 with *GB btrfs partition
└─Subvolume “root” mounted as “/”
/dev/sdb1 with * btrfs partition
└─Subvolume “home” mounted as “/home”
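A sketch of how that layout might be pre-created from a live session before choosing “mount point assignment” (the parted/mkfs invocations and sizes are illustrative only; device names must match your hardware, and this destroys existing data):

```
# partition the first disk (GPT): EFI, boot, and the Btrfs root partition
sudo parted /dev/sda -- mklabel gpt \
    mkpart ESP fat32 1MiB 1GiB  set 1 esp on \
    mkpart boot ext4 1GiB 3GiB \
    mkpart root btrfs 3GiB 100%

# one big Btrfs partition on the second disk for home
sudo parted /dev/sdb -- mklabel gpt mkpart home btrfs 1MiB 100%

sudo mkfs.vfat -F 32 /dev/sda1
sudo mkfs.ext4 -L Boot /dev/sda2
sudo mkfs.btrfs /dev/sda3
sudo mkfs.btrfs /dev/sdb1

# create the subvolumes the installer will be pointed at
sudo mount /dev/sda3 /mnt && sudo btrfs subvolume create /mnt/root && sudo umount /mnt
sudo mount /dev/sdb1 /mnt && sudo btrfs subvolume create /mnt/home && sudo umount /mnt
```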
Yeah, this same behavior happens with either Btrfs or LVM. If you tell the installer you want to use multiple drives, it places all the free space (less the EFI system partition and boot) into one big Btrfs file system, or one big LVM volume group.
The installer just isn't smart enough to figure out that one drive is an SSD and the other is an HDD, and to do optimized partitioning taking that into account.
Chances are the OP wants the system and day-to-day operations to be fast, including /home, since that's where application cache files get written; therefore ~/ should also be on the SSD.
But then the HDD would be separately formatted and mounted persistently somewhere like ~/extra. So it's still found in the user's home directory, with user permissions; it's just big, slow storage.
To do that, reinstall using automatic partitioning choosing only the SSD.
Later, after installation, use GNOME Disks to format the HDD (LUKS and Btrfs are recommended, but I don't think that's the default in GNOME Disks). Disks includes an option to persistently mount it in a location you choose every time you log in.
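For reference, the LUKS-plus-Btrfs variant from the command line might look like this (a sketch; the device name and mapper name are assumptions, and the UUID placeholder must be replaced with the real value from blkid):

```
# encrypt the HDD, open it, and create Btrfs inside the mapping
sudo cryptsetup luksFormat /dev/sda1
sudo cryptsetup open /dev/sda1 extra_crypt
sudo mkfs.btrfs -L extra /dev/mapper/extra_crypt

# to unlock at boot and mount persistently, add entries like:
#   /etc/crypttab:  extra_crypt UUID=<uuid-from-blkid> none
#   /etc/fstab:     /dev/mapper/extra_crypt /home/<user>/extra btrfs defaults 0 0
```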