Use space on /dev/sdb3 after install

I would like to use the available space on /dev/sdb3. The output of lsblk is shown here:

$ lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda       8:0    0 931.5G  0 disk 
└─sda1    8:1    0 931.5G  0 part /home
                                  /
sdb       8:16   0 232.9G  0 disk 
├─sdb1    8:17   0   600M  0 part /boot/efi
├─sdb2    8:18   0     1G  0 part /boot
└─sdb3    8:19   0 231.3G  0 part 
sr0      11:0    1  1024M  0 rom  
zram0   251:0    0   7.6G  0 disk [SWAP]
$

I created an empty mount point /extra.
How do I get the unused part of /dev/sdb3 (231.3 GB) to mount at /extra?
Thanks.

Hi @jayomega

It looks like something is missing in your output; I can’t find your root volume.

What is the output of lsblk --fs?

Please format the output as a code block; to do so, select the output text in the compose/editor window and click the </> symbol. The result will look like this:

user@host:~$ lsblk --fs
NAME                                          FSTYPE      FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
zram0                                         swap        1     zram0 3cabe722-11b5-47a6-94ac-74c96c747266                [SWAP]
nvme0n1                                                                                                                   
├─nvme0n1p1                                   vfat        FAT32       C37A-4D77                             586,2M     2% /boot/efi
├─nvme0n1p2                                   ext4        1.0         552f586c-2648-4bb0-a0a5-3b113371cd51  379,8M    54% /boot
└─nvme0n1p3                                   crypto_LUKS 2           e32c7e8a-995d-4b33-a7f1-44064da3e098                
  └─luks-e32c7e8a-995d-4b33-a7f1-44064da3e098 btrfs                   1aaa02ed-fe03-40fd-bb63-8b65b3e00e24  493,9G    47% /var/home
                                                                                                                          /var
                                                                                                                          /sysroot/ostree/deploy/fedora/var
                                                                                                                          /etc
                                                                                                                          /sysroot

Thanks for the reply. The root (/) appeared just above the beginning of the sdb block. Here is lsblk --fs:

NAME   FSTYPE FSVER LABEL       UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda                                                                                 
└─sda1 btrfs        fedora_ads8 6d175aa0-db5e-4ac8-bddb-c038e1215c67    1.1T     2% /home
                                                                                    /
sdb                                                                                 
├─sdb1 vfat   FAT32             4653-C126                             579.4M     3% /boot/efi
├─sdb2 ext4   1.0               3816c997-b37a-416c-921a-c436c6c03780  547.2M    37% /boot
└─sdb3 btrfs        fedora_ads8 6d175aa0-db5e-4ac8-bddb-c038e1215c67                
sr0                                                                                 
zram0                                                                               [SWAP]
ads@ADS8:~$


Your btrfs filesystem (with the subvolumes “/” and /home) spans those two partitions.

To confirm, please post the output of
sudo btrfs fi usage -T /
It will show two devices: /dev/sdb3 and /dev/sda1.

ads@ADS8:~$ sudo btrfs fi usage -T /
Overall:
    Device size:		   1.13TiB
    Device allocated:		  24.02GiB
    Device unallocated:		   1.11TiB
    Device missing:		     0.00B
    Device slack:		     0.00B
    Used:			  19.54GiB
    Free (estimated):		   1.11TiB	(min: 572.55GiB)
    Free (statfs, df):		   1.11TiB
    Data ratio:			      1.00
    Metadata ratio:		      2.00
    Global reserve:		  45.31MiB	(used: 0.00B)
    Multiple profiles:		        no

             Data     Metadata  System                              
Id Path      single   RAID1     RAID1    Unallocated Total     Slack
-- --------- -------- --------- -------- ----------- --------- -----
 1 /dev/sda1 22.01GiB   1.00GiB  8.00MiB   908.50GiB 931.51GiB     -
 2 /dev/sdb3        -   1.00GiB  8.00MiB   230.29GiB 231.30GiB     -
-- --------- -------- --------- -------- ----------- --------- -----
   Total     22.01GiB   1.00GiB  8.00MiB     1.11TiB   1.13TiB 0.00B
   Used      18.85GiB 351.61MiB 16.00KiB                            
ads@ADS8:~$ 

ads@ADS8:~$ lsblk --fs
NAME   FSTYPE FSVER LABEL       UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda                                                                                 
└─sda1 btrfs        fedora_ads8 6d175aa0-db5e-4ac8-bddb-c038e1215c67    1.1T     2% /home
                                                                                    /
sdb                                                                                 
├─sdb1 vfat   FAT32             4653-C126                             579.4M     3% /boot/efi
├─sdb2 ext4   1.0               3816c997-b37a-416c-921a-c436c6c03780  547.2M    37% /boot
└─sdb3 btrfs        fedora_ads8 6d175aa0-db5e-4ac8-bddb-c038e1215c67                
sr0                                                                                 
zram0                                                                               [SWAP]
ads@ADS8:~$ 

That’s fine, it looks great now.

As your block device contains a btrfs filesystem, you may want to read this great article:

Edit:
It looks like your btrfs volume spans multiple physical disks,
so we need the output of
sudo btrfs filesystem usage /

Edit 2:
I’m afraid I’m a bit slow. :sweat_smile:

It looks like you are already utilizing both disks for your logical btrfs volume as a mixed single/RAID1 setup: for data you selected single mode, and for metadata and system you selected RAID1.

AT YOUR OWN RISK: If you want to use the second disk as a separate volume, you have to remove it from your current setup first by balancing the volume to single profiles:

sudo btrfs balance start -sconvert=single -mconvert=single /

Now you can remove the device:
sudo btrfs device remove /dev/sdb3 /
(This may take some time, as any data on it has to be moved to the other disk.)
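While the balance and device removal run, you can check their progress from another terminal. These are read-only status commands (a side note, not from the original post):

```shell
# Read-only progress checks while a balance / device remove is running.
# Safe at any time; they do not modify the filesystem.
sudo btrfs balance status /        # shows percent done for a running balance
sudo btrfs filesystem usage -T /   # per-device table: watch the RAID1 columns shrink
```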

works on my Test VM:

user@fedora:/$ sudo btrfs filesystem usage -T /
Overall:
    Device size:		  38.41GiB
    Device allocated:		  10.06GiB
    Device unallocated:		  28.35GiB
    Device missing:		     0.00B
    Device slack:		   3.00KiB
    Used:			   7.41GiB
    Free (estimated):		  30.44GiB	(min: 16.26GiB)
    Free (statfs, df):		  30.44GiB
    Data ratio:			      1.00
    Metadata ratio:		      2.00
    Global reserve:		  24.91MiB	(used: 0.00B)
    Multiple profiles:		        no

             Data    Metadata  System                               
Id Path      single  RAID1     RAID1    Unallocated Total    Slack  
-- --------- ------- --------- -------- ----------- -------- -------
 1 /dev/vda3 9.00GiB 512.00MiB 32.00MiB     8.88GiB 18.41GiB       -
 2 /dev/vdb1       - 512.00MiB 32.00MiB    19.47GiB 20.00GiB 3.00KiB
-- --------- ------- --------- -------- ----------- -------- -------
   Total     9.00GiB 512.00MiB 32.00MiB    28.35GiB 38.41GiB 3.00KiB
   Used      6.91GiB 257.53MiB 16.00KiB                             
user@fedora:/$ sudo btrfs balance start -sconvert=single -mconvert=single /
ERROR: Refusing to explicitly operate on system chunks.
Pass --force if you really want to do that.
user@fedora:/$ sudo btrfs balance start -sconvert=single -mconvert=single / --force
Done, had to relocate 3 out of 12 chunks
user@fedora:/$ sudo btrfs device remove /dev/vdb1 /
user@fedora:/$ sudo btrfs filesystem usage -T /
Overall:
    Device size:		  18.41GiB
    Device allocated:		   9.53GiB
    Device unallocated:		   8.88GiB
    Device missing:		     0.00B
    Device slack:		     0.00B
    Used:			   7.16GiB
    Free (estimated):		  10.97GiB	(min: 10.97GiB)
    Free (statfs, df):		  10.97GiB
    Data ratio:			      1.00
    Metadata ratio:		      1.00
    Global reserve:		  25.64MiB	(used: 0.00B)
    Multiple profiles:		        no

             Data    Metadata  System                             
Id Path      single  single    single   Unallocated Total    Slack
-- --------- ------- --------- -------- ----------- -------- -----
 1 /dev/vda3 9.00GiB 512.00MiB 32.00MiB     8.88GiB 18.41GiB     -
-- --------- ------- --------- -------- ----------- -------- -----
   Total     9.00GiB 512.00MiB 32.00MiB     8.88GiB 18.41GiB 0.00B
   Used      6.91GiB 257.53MiB 16.00KiB                           
user@fedora:/$ 

It’s probably a setup that you did not really want.
The current layout does not allow you to remove either of those two drives from the system.

I think it would be better to have ‘/’ (and /home) on the same disk as /boot/efi and /boot (i.e., on /dev/sdb3), and to create a second filesystem on /dev/sda1 that you can mount. But that’s just my preference.

Thanks, I will read the btrfs article. Meanwhile let me add some info and rephrase my objective. Fedora 40 was installed from the live f40 distribution CD, and defaulted to ‘automatic partitioning’. (Subsequently upgraded to f41.)

The hardware is a 1TB hard drive plus a 256GB SSD drive. The 1TB hard drive got /home, and the SSD drive got /boot. I would like to use the unused 231.3GB space on the SSD (/dev/sdb3) for something, i.e. to mount on /extra (now empty).
FYI here is /etc/fstab:

...
UUID=6d175aa0-db5e-4ac8-bddb-c038e1215c67 /                       btrfs   subvol=root,compress=zstd:1 0 0
UUID=3816c997-b37a-416c-921a-c436c6c03780 /boot                   ext4    defaults        1 2
UUID=4653-C126          /boot/efi               vfat    umask=0077,shortname=winnt 0 2
UUID=6d175aa0-db5e-4ac8-bddb-c038e1215c67 /home                   btrfs   subvol=home,compress=zstd:1 0 0
ads@ADS8:~$ 

So is there a way to somehow do the equivalent of mkfs on /dev/sdb3 ? Can gparted be used?

sdb      8:16   0 232.9G  0 disk 
├─sdb1   8:17   0   600M  0 part /boot/efi
├─sdb2   8:18   0     1G  0 part /boot
└─sdb3   8:19   0 231.3G  0 part 

Thanks,

WAIT!

Your partition 3 on disk sdb is not empty/unused; it contains metadata and system data from btrfs, as you clearly posted in your previous reply here:

It only looks “empty/unused” to lsblk, but in fact it’s not.
Formatting sdb3 may destroy your btrfs volume on sda1 (your 1TB disk) too. More likely, though, the RAID will go into a degraded state and your system may no longer boot - I didn’t test this myself, so I’m not sure…

Anyway, you should disband your RAID configuration as mentioned in my previous post.

Or, as an alternative, keep it as is and initialize sdb3 for data usage with sudo btrfs balance start -dconvert=single /

your configuration will change from:


             Data    Metadata  System                               
Id Path      single  RAID1     RAID1    Unallocated Total    Slack  
-- --------- ------- --------- -------- ----------- -------- -------
 1 /dev/vda3 9.00GiB 512.00MiB 32.00MiB     8.88GiB 18.41GiB       -
 2 /dev/vdb1       - 512.00MiB 32.00MiB    19.47GiB 20.00GiB 3.00KiB
-- --------- ------- --------- -------- ----------- -------- -------
   Total     9.00GiB 512.00MiB 32.00MiB    28.35GiB 38.41GiB 3.00KiB
   Used      6.91GiB 257.62MiB 16.00KiB                             

to

             Data    Metadata  System                               
Id Path      single  RAID1     RAID1    Unallocated Total    Slack  
-- --------- ------- --------- -------- ----------- -------- -------
 1 /dev/vda3 3.00GiB 512.00MiB 32.00MiB    14.88GiB 18.41GiB       -
 2 /dev/vdb1 6.00GiB 512.00MiB 32.00MiB    13.47GiB 20.00GiB 3.00KiB
-- --------- ------- --------- -------- ----------- -------- -------
   Total     9.00GiB 512.00MiB 32.00MiB    28.35GiB 38.41GiB 3.00KiB
   Used      6.91GiB 257.91MiB 16.00KiB                             

Did you see the change in the Data column?

You may want to add the btrfs tag to your first post so that some “btrfs experts” can jump in to assist you.

Edit:
Now tested; I wiped vdb1 in my test VM, and it no longer boots.


Recovery through a live ISO goes like this:

liveuser@localhost-live:~$ sudo mount /dev/vda3 /mnt
mount: /mnt: fsconfig system call failed: No such file or directory.
       dmesg(1) may have more information after failed mount system call.
liveuser@localhost-live:~$ sudo dmesg | tail
[   61.383430] BTRFS error (device vda3): devid 2 uuid 1b46289f-65ab-47ae-862b-ca2b05bead29 is missing
[   61.383433] BTRFS error (device vda3): failed to read chunk tree: -2
[   61.384031] BTRFS error (device vda3): open_ctree failed: -2
[  126.842822] BTRFS: device label fedora devid 1 transid 1170 /dev/vda3 (253:3) scanned by mount (5220)
[  126.843218] BTRFS info (device vda3): first mount of filesystem e6df369e-e41b-47ba-8f7f-e984a8baa96c
[  126.843233] BTRFS info (device vda3): using crc32c (crc32c-x86_64) checksum algorithm
[  126.843236] BTRFS info (device vda3): using free-space-tree
[  126.844569] BTRFS error (device vda3): devid 2 uuid 1b46289f-65ab-47ae-862b-ca2b05bead29 is missing
[  126.844571] BTRFS error (device vda3): failed to read chunk tree: -2
[  126.845010] BTRFS error (device vda3): open_ctree failed: -2
liveuser@localhost-live:~$ sudo mount -odegraded /dev/vda3 /mnt
liveuser@localhost-live:~$ sudo btrfs balance start -sconvert=single -mconvert=single /mnt --force
Done, had to relocate 3 out of 12 chunks
liveuser@localhost-live:~$ sudo btrfs device remove missing /mnt
liveuser@localhost-live:~$ sudo umount /mnt
(reboot and boot from Disk)

It seems that the “unused” space is part of the btrfs volume fedora_ads8 6d175aa0-db5e-4ac8-bddb-c038e1215c67, so it is not actually unused. If you did sudo mkdir /extra, it should be usable inside the root subvolume and will be mounted at boot when the root subvolume is mounted.
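A small aside, not from the thread: since / is already btrfs, /extra could also be created as a dedicated subvolume rather than a plain directory, which would make it independently snapshottable. A minimal sketch:

```shell
# /extra as its own btrfs subvolume on the existing root filesystem.
# This does NOT touch sdb3's raw space; it lives inside the shared volume.
sudo btrfs subvolume create /extra
sudo btrfs subvolume list / | grep extra   # confirm it was created
```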

Formatting either sda1 or sdb3 has the same effect - the Btrfs filesystem will be damaged as a result.

While it is possible to mount the file system read-only and degraded with a missing (formatted) drive, the missing drive’s data will be unrecoverable; only the data on the surviving drive can be recovered. Even that is only possible because of the RAID1 metadata, which keeps two copies of the file system’s metadata on different drives.

The two partitions are being used for a single file system. The user has no control over the allocation. What I expect is that once the free space on sda1 equals the free space on sdb3, Btrfs will alternate allocation of data block groups (in 1GiB increments) between the drives. So it will use sda1 for a while before it starts allocating new data block groups to sdb3.

But metadata (the file system itself) will be written to both drives at all times, which is a plus for reliability. Statistically you might need it, too: with double the drives, there is twice the chance that one of them will fail in some way.

The main thing is to consistently back up important data. This axiom leads to less stress.
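On the backup axiom: btrfs itself offers snapshots plus send/receive for this. A hedged sketch - the destination /run/media/backup is illustrative and must itself be a btrfs filesystem:

```shell
# Read-only snapshot of the home subvolume, then replicate it elsewhere.
# /run/media/backup is an illustrative mount point of a second btrfs disk.
snap="/home/.snap-$(date +%F)"
sudo btrfs subvolume snapshot -r /home "$snap"
sudo btrfs send "$snap" | sudo btrfs receive /run/media/backup
```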

Hi Chris,

The first column on the second partition (sdb3) shows only a hyphen.

So I assume it is not used for single data at all?

Edit: Never mind, I confused “single file system” with “single data” - it’s time to go to bed.

That hyphen means no data block groups have been allocated on sdb3 yet.

Once sda1’s unallocated space is less than sdb3’s, Btrfs will create data block groups on sdb3 and alternate between the two drives. “Alternate” is approximate: the device with the most unallocated space gets the next data block group created on it.

Ah, that makes sense. But anyway, in this scenario I would say he should be safe to disband the RAID as I described before, right?

I gotta go. Good Night. :sleeping_face:

One small adjustment: you will need the -f flag on the first command, because going from RAID1 to the single profile reduces redundancy.

If the OP wants a single-drive Btrfs for Fedora, it’s fine to remove either sda1 or sdb3 from the Btrfs file system. The former will take a while to complete (the command will appear to hang) because Btrfs needs to replicate the block groups from sda1 to sdb3 before sda1 can be removed. The removal of sdb3 would be fairly quick, since there’s nothing on it.

For what it’s worth, file system resize is implied any time a device is added or removed. So it really is only these two commands.
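Putting the whole procedure together for this thread’s layout (keeping sda1, dropping sdb3) - a sketch, at your own risk and only after a backup; --force covers both the system-chunk conversion and the redundancy reduction discussed above:

```shell
# 1) Convert data, metadata, and system chunks to the single profile.
#    --force is needed because converting system chunks and dropping
#    RAID1 both reduce redundancy.
sudo btrfs balance start -dconvert=single -mconvert=single -sconvert=single --force /

# 2) Remove the SSD partition from the filesystem; the resize is implicit.
#    This may appear to hang while block groups migrate to sda1.
sudo btrfs device remove /dev/sdb3 /
```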

Keeping this as it is now doesn’t really hurt anything. It’s true that if one drive dies, the system won’t boot - but that would be true even if it were a single drive.

It is possible to partly recover the data remaining on the working drive in such a case, because of the RAID1 metadata. But it will require mount -o ro,degraded to do it, which is not obvious. And there will be a lot of complaints from the kernel, because it will still try to read the missing files, so the journal log will have many entries about that.
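For reference, the degraded read-only rescue described above could look roughly like this from a live ISO (device name and backup destination are illustrative):

```shell
# Mount the surviving member read-only and degraded, then copy data off.
sudo mount -o ro,degraded /dev/sda1 /mnt
rsync -a /mnt/home/ /path/to/backup/    # illustrative destination
sudo umount /mnt
```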

From the above discussion I conclude that the btrfs configuration I have is not suitable for what I intended; it was my error in choosing ‘automatic partitioning’ during installation.

I want to be able to use (parts of) both the 1TB drive AND the SSD for (users) files for various purposes. I am willing to scrap my present configuration and reinstall from scratch if I can be assured that I will be able to choose partitioning options during a reinstall to do this. Should I opt for LVM instead of btrfs? Should I pre-partition either drive manually before or during installation? Any suggestions will be appreciated.

It was not your fault for selecting “automatic partitioning”; rather, your mistake was selecting both disks for auto-partitioning, which led to this result. I just tested it in a VM (I love testing) and it behaves the same…

You may want to start over: scratch everything on both volumes and select only your 1TB disk or your 250GB disk for auto-partitioning. Fedora 42 introduced a new installer that looks a bit different. Just make sure that you select only one disk. This disk will become your primary disk containing all three partitions for your OS: sdX1, vfat for EFI; sdX2, containing boot and kernel files; and sdX3, which will contain your btrfs filesystem with two subvolumes - root and home - sharing the same disk space.

After your system is installed successfully, go ahead and create a partition on the remaining disk and set the mount point to /extra or whatever you desire…

If the 1TB drive is a mechanical HDD, you may want to install your OS entirely on the 250GB SSD instead, for faster startup and program launches, and use the 1TB disk for /home or extra (/extra) data.
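After a one-disk reinstall, turning the leftover disk into /extra could be sketched like this from the command line (device names are illustrative - verify with lsblk first, as this wipes the disk):

```shell
# WARNING: destroys everything on the target disk. Device names are examples.
sudo parted /dev/sda --script mklabel gpt mkpart extra btrfs 1MiB 100%
sudo mkfs.btrfs -L extra /dev/sda1
sudo mkdir -p /extra
sudo mount /dev/sda1 /extra
# Persist across reboots; take the UUID from: sudo blkid /dev/sda1
echo 'UUID=<uuid-from-blkid> /extra btrfs defaults 0 0' | sudo tee -a /etc/fstab
```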

If you want something completely different, you may pre-allocate your partitions and volumes and select “mount point assignment” during installation - this is for advanced users… An example could look like this:
/dev/sda1 with 1GB vfat EFI System Partition mounted as /boot/efi
/dev/sda2 with 2GB ext4 labeled Boot mounted as /boot
/dev/sda3 with *GB btrfs partition
└─Subvolume “root” mounted as “/”

/dev/sdb1 with *GB btrfs partition
└─Subvolume “home” mounted as “/home”

In the installer it will look like this:

Keep in mind that in my example sda = vda and sdb = vdb.

Note: the minimum required partitions/volumes are:

  • EFI System partition (/boot/efi)
  • ext4 Partition for Boot-Files (/boot)
  • partition for the root File System (/)
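The manual layout above could be pre-created with parted and mkfs roughly as follows (sizes follow the example; device names are illustrative):

```shell
# GPT label plus the three minimum partitions from the example above.
sudo parted /dev/sda --script \
    mklabel gpt \
    mkpart EFI fat32 1MiB 1GiB \
    set 1 esp on \
    mkpart Boot ext4 1GiB 3GiB \
    mkpart root btrfs 3GiB 100%
sudo mkfs.vfat -F 32 -n EFI /dev/sda1
sudo mkfs.ext4 -L Boot /dev/sda2
sudo mkfs.btrfs -L fedora /dev/sda3
```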

Hope this helps you :slight_smile:

Yeah, this same behavior happens whether it’s Btrfs or LVM. If you tell the installer you want to use multiple drives, it places all the free space (less the EFI system partition and /boot) into one big Btrfs filesystem - or one big LVM volume group.

The installer just isn’t smart enough to figure out that one drive is an SSD and the other is an HDD, and to do optimized partitioning taking that into account.

Chances are the OP wants the system and day-to-day operations to be fast, including the home directory, since that’s where application cache files get written - therefore ~/ also goes on the SSD.

But then the HDD is separately formatted and mounted persistently somewhere like ~/extra. That way it’s still found in the user’s home directory, with user permissions; it’s just big, slow storage.

To do that, reinstall using automatic partitioning choosing only the SSD.

Later, after installation, use GNOME Disks to format the HDD (LUKS and Btrfs recommended, though I don’t think that’s the default in GNOME Disks). Disks includes an option to persistently mount it in a location you choose, every time you log in.
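For completeness, the LUKS-plus-Btrfs setup recommended here can also be done from the command line (a sketch; device name, mapper name, and mount point are illustrative - verify with lsblk first):

```shell
# Encrypt the HDD partition, open it, format with btrfs, mount under $HOME.
sudo cryptsetup luksFormat /dev/sda1          # prompts for a passphrase
sudo cryptsetup open /dev/sda1 extra_crypt
sudo mkfs.btrfs -L extra /dev/mapper/extra_crypt
mkdir -p "$HOME/extra"
sudo mount /dev/mapper/extra_crypt "$HOME/extra"
sudo chown "$USER:$USER" "$HOME/extra"        # give the user ownership
```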