BTRFS problems: "no space left on device" but there is enough, multiple block group profiles

I have a Fedora 33 system with issues on its BTRFS file system.

When I run dnf update, it reports that the disk is full:

# dnf update
[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'

When I run df, however, it reports enough free space:

# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.8G     0  7.8G   0% /dev
tmpfs           7.8G  4.6M  7.8G   1% /dev/shm
tmpfs           3.2G  1.9M  3.2G   1% /run
/dev/dm-1       216G  177G   39G  83% /
tmpfs           7.8G  1.2M  7.8G   1% /tmp
/dev/dm-1       216G  177G   39G  83% /home
/dev/sda1       477M  207M  241M  47% /boot
tmpfs           1.6G  496K  1.6G   1% /run/user/1000

But note that /dev/dm-1 is mounted on both / and /home:

# mount | grep /home
/dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1 on /home type btrfs (rw,relatime,seclabel,compress=zstd:1,ssd,space_cache,subvolid=258,subvol=/home)
# mount | grep "on / "
/dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1 on / type btrfs (rw,relatime,seclabel,compress=zstd:1,ssd,space_cache,subvolid=257,subvol=/root)

And BTRFS also gives a warning about multiple block group profiles:

# btrfs filesystem df /home
Data, single: total=209.24GiB, used=170.95GiB
System, DUP: total=8.00MiB, used=48.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, DUP: total=3.00GiB, used=2.56GiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=451.16MiB, used=32.00KiB
WARNING: Multiple block group profiles detected, see 'man btrfs(5)'.
WARNING:   Metadata: single, dup
WARNING:   System: single, dup

In /etc/fstab I have:

UUID=d7b908f7-49fb-41a0-8c1f-69f62f3001a1 /                       btrfs   subvol=root,compress=zstd:1,x-systemd.device-timeout=0 0 0
UUID=dde3d3f6-9350-420f-b0ac-964ed556bf09 /boot                   ext4    defaults        1 2
UUID=d7b908f7-49fb-41a0-8c1f-69f62f3001a1 /home                   btrfs   subvol=home,compress=zstd:1,x-systemd.device-timeout=0 0 0
/dev/mapper/luks-d53b5c50-2ac1-4800-aa17-a326b88144c1 swap                    swap    defaults,x-systemd.device-timeout=0 0 0

Actually, I do not understand all of this, nor what the root cause is of dnf reporting that there is no disk space left. Do you have any suggestions?

1 Like

First, post the results from the following commands:

uname -r
btrfs version
sudo btrfs filesystem usage /
cd /sys/fs/btrfs/d7b908f7-49fb-41a0-8c1f-69f62f3001a1
grep -R . allocation/

That'll give us debugging info to see why you're getting a no-space-left error. It might be a bug. But if we try to "fix" it with a workaround, it clobbers all the state information needed to debug it.

The mixed block group profile warning is not serious, but it can be fixed with the following:
sudo btrfs balance start -mconvert=dup,soft /

See if that alone fixes the dnf error. Let me know either way. If it doesn’t fix it, try this:
sudo btrfs balance start -dusage=5 /

And also let me know if that does or doesn’t fix it. Thanks!

3 Likes

Thanks Chris,

Before I read your reply, I solved the problem. What I did was remove a few large files to create more space. After that, I could run the btrfs balance command. The no-space problems had occurred while compressing the btrfs file system following these instructions at Fedora Magazine.

I looked back in my terminal to find some information about the issue from before it was resolved:

$ sudo btrfs fi df /
Data, single: total=209.24GiB, used=170.99GiB
System, DUP: total=8.00MiB, used=48.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, DUP: total=3.00GiB, used=2.56GiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=451.16MiB, used=0.00B
WARNING: Multiple block group profiles detected, see 'man btrfs(5)'.
WARNING:   Metadata: single, dup
WARNING:   System: single, dup

After I deleted some large files, I could run:

# btrfs balance start -musage=0 /
Done, had to relocate 1 out of 219 chunks

From that point on, I could proceed with balancing, compressing, and converting the btrfs file system further.

So I converted to the "single" profile for the data and "dup" for the metadata. This, too, could only be done after some disk space was freed up. The system has been in use for several years, and has been very full at times.
Anyway, I'm happy that there has been no data loss, and I gained many GB of space by using btrfs compression.

The current state is now:

# btrfs filesystem usage /
Overall:
    Device size:		 215.27GiB
    Device allocated:		 215.27GiB
    Device unallocated:		   1.00MiB
    Device missing:		     0.00B
    Used:			 132.80GiB
    Free (estimated):		  81.50GiB	(min: 81.50GiB)
    Free (statfs, df):		  81.50GiB
    Data ratio:			      1.00
    Metadata ratio:		      2.00
    Global reserve:		 451.72MiB	(used: 0.00B)
    Multiple profiles:		        no

Data,single: Size:209.24GiB, Used:127.74GiB (61.05%)
   /dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1	 209.24GiB

Metadata,DUP: Size:3.00GiB, Used:2.53GiB (84.19%)
   /dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1	   6.01GiB

System,DUP: Size:8.00MiB, Used:48.00KiB (0.59%)
   /dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1	  16.00MiB

Unallocated:
   /dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1	   1.00MiB

From this, I’d say you most definitely did reach full usage of the filesystem.

Be aware that when working with BTRFS, standard utilities such as df might lie to you a bit. :slight_smile: The most accurate way to analyze it is to look at the BTRFS statistics, especially the allocation and Free sections.

Although not really important, I see that you could benefit from running a rebalance, so that if some other allocation type (metadata, for example) needs to allocate an additional chunk, there is spare space in the unallocated pool.
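For example, a conservative filtered balance along these lines (a sketch; the 10% threshold is just illustrative) rewrites only mostly-empty data block groups and returns their space to the unallocated pool:

# Rewrite data block groups that are at most 10% full, freeing them back to "unallocated"
sudo btrfs balance start -dusage=10 /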

4 Likes

This means Btrfs has allocated all the space to either data or metadata block groups. It does this on-demand, thereby ensuring the data/metadata ratio is reflected in the allocation. As long as the ratio stays the same (basically the same usage pattern as in, types of files) then it’s ok. But if the usage pattern changes enough, then the existing allocation might be suboptimal and lead to premature out of space. For example…

Metadata block groups are more full than data block groups. If the future allocation becomes more metadata demanding, metadata block groups will become full before data block groups. If either block group type becomes full, the file system is “out of space” even if there’s unused space in the other block group type.
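A quick way to spot this condition (a sketch, assuming the usual btrfs-progs output) is to compare the per-type allocation against the unallocated pool:

# If unallocated space is near zero and one type (usually Metadata) is nearly full,
# the next chunk of that type cannot be created and ENOSPC follows.
sudo btrfs filesystem usage / | grep -Ei 'unallocated|data,|metadata,'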

Ordinarily the data to metadata usage ratio doesn’t change much. Also, btrfs tends to slightly bias allocation of block groups to metadata type. But this logic can be thwarted by manual balance that balances more metadata block groups, relative to data block groups. This happens simply because there are far fewer metadata block groups.

The general rule of thumb is "don't balance metadata". The only time to fully balance metadata is when also fully balancing data: e.g. conversions between profiles, or following 'btrfs-convert' from another file system format. Yes, you could just do a full balance, but this is kind of an expensive operation, since it reads all blocks and rewrites them elsewhere. It doesn't hurt anything other than just being expensive. So you'll read a lot of advice like "just balance the file system" as a sort of sledgehammer to fix everything.
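As an illustration of the conversion case only (a sketch, not something needed here), a profile conversion balances both data and metadata in one pass:

# Convert data to the "single" profile and metadata to "dup";
# the "soft" modifier skips chunks that already have the target profile.
sudo btrfs balance start -dconvert=single,soft -mconvert=dup,soft /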

In your case, you could just leave it alone, but it actually does look to me like the metadata-to-data ratio has changed over time, or possibly metadata block groups were more fully balanced while data block groups were only partially balanced (or not at all). Therefore I expect you'll hit premature out-of-space again at some point.

About the simplest one size fits all for your case given the above?

sudo btrfs balance start -dusage=30 /

There's a nifty tool "btrfs-balance-least-used" in python-btrfs that balances the emptiest data block groups first, so it achieves the same result faster. You can use that, or you can get an approximate equivalent by starting with -dusage=1, moving up by 1 until you get to -dusage=5, then incrementing by 5 until you get to -dusage=30. For your file system this might save a minute or two. If it were a huge file system it could save hours.
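If you'd rather script that incremental approach than install python-btrfs, a rough bash equivalent (a sketch; adjust the mount point and the steps to taste) looks like this:

# Balance the emptiest data block groups first, then raise the threshold
# step by step up to 30%, as described above. Stop early on any error.
for u in 1 2 3 4 5 10 15 20 25 30; do
    sudo btrfs balance start -dusage=$u / || break
done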

Next you can post btrfs fi us / again and we’ll see how that looks and if it’s worth doing more. The tools right now are definitely so granular and rudimentary that it is easy for users to get into splitting hairs over what values and strategies to use, which is why you see so much varied advice in this area.

So, as for the future, does this imply you need some regular maintenance? Probably not, but maybe. I personally would just chalk it up to one of the explanations I've already given, do the above balance, and then forget about it. But it's a completely reasonable position to say, "Look, I'd rather not run into this again; is there some script I can run to avoid it? I hate filing bugs!"

The answer is yes. You can install the btrfsmaintenance package (in the Fedora repos) and enable btrfs-balance.timer - voila!
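On Fedora that amounts to something like the following (a sketch; the timer is shipped by the btrfsmaintenance package):

# Install the maintenance scripts and enable the periodic balance timer.
sudo dnf install btrfsmaintenance
sudo systemctl enable --now btrfs-balance.timer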

5 Likes

You are right. I ran out of space again.

[root@rainbowdash /]# uname -r
5.14.13-200.fc34.x86_64
[root@rainbowdash /]# btrfs version
btrfs-progs v5.14.2 
[root@rainbowdash /]# sudo btrfs filesystem usage /
Overall:
    Device size:		 215.27GiB
    Device allocated:		 215.27GiB
    Device unallocated:		   1.00MiB
    Device missing:		     0.00B
    Used:			 158.25GiB
    Free (estimated):		  56.26GiB	(min: 56.26GiB)
    Free (statfs, df):		  56.26GiB
    Data ratio:			      1.00
    Metadata ratio:		      2.00
    Global reserve:		 374.73MiB	(used: 752.00KiB)
    Multiple profiles:		        no

Data,single: Size:209.24GiB, Used:152.98GiB (73.11%)
   /dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1	 209.24GiB

Metadata,DUP: Size:3.00GiB, Used:2.64GiB (87.75%)
   /dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1	   6.01GiB

System,DUP: Size:8.00MiB, Used:48.00KiB (0.59%)
   /dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1	  16.00MiB

Unallocated:
   /dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1	   1.00MiB
[root@rainbowdash /]# cd /sys/fs/btrfs/d7b908f7-49fb-41a0-8c1f-69f62f3001a1
[root@rainbowdash d7b908f7-49fb-41a0-8c1f-69f62f3001a1]# grep -R . allocation/
allocation/metadata/disk_used:5660606464
allocation/metadata/bytes_pinned:0
allocation/metadata/bytes_used:2830303232
allocation/metadata/dup/used_bytes:2830303232
allocation/metadata/dup/total_bytes:3225419776
allocation/metadata/disk_total:6450839552
allocation/metadata/total_bytes:3225419776
allocation/metadata/bytes_reserved:81920
allocation/metadata/bytes_readonly:65536
allocation/metadata/bytes_zone_unusable:0
allocation/metadata/bytes_may_use:394821632
allocation/metadata/flags:4
allocation/system/disk_used:98304
allocation/system/bytes_pinned:0
allocation/system/bytes_used:49152
allocation/system/dup/used_bytes:49152
allocation/system/dup/total_bytes:8388608
allocation/system/disk_total:16777216
allocation/system/total_bytes:8388608
allocation/system/bytes_reserved:0
allocation/system/bytes_readonly:0
allocation/system/bytes_zone_unusable:0
allocation/system/bytes_may_use:0
allocation/system/flags:2
allocation/global_rsv_reserved:392937472
allocation/data/disk_used:164259119104
allocation/data/bytes_pinned:0
allocation/data/bytes_used:164259119104
allocation/data/single/used_bytes:164259119104
allocation/data/single/total_bytes:224672088064
allocation/data/disk_total:224672088064
allocation/data/total_bytes:224672088064
allocation/data/bytes_reserved:0
allocation/data/bytes_readonly:0
allocation/data/bytes_zone_unusable:0
allocation/data/bytes_may_use:0
allocation/data/flags:1
allocation/global_rsv_size:392937472
[root@rainbowdash d7b908f7-49fb-41a0-8c1f-69f62f3001a1]# cd /
[root@rainbowdash /]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.8G     0  7.8G   0% /dev
tmpfs           7.8G  2.9M  7.8G   1% /dev/shm
tmpfs           3.2G  1.9M  3.2G   1% /run
/dev/dm-1       216G  159G   57G  74% /
tmpfs           7.8G  544K  7.8G   1% /tmp
/dev/dm-1       216G  159G   57G  74% /home
/dev/sda1       477M  210M  239M  47% /boot
tmpfs           1.6G  284K  1.6G   1% /run/user/1000
/dev/loop0       33M   33M     0 100% /var/lib/snapd/snap/snapd/13640
/dev/loop1       56M   56M     0 100% /var/lib/snapd/snap/core18/2128
/dev/loop2      4.8M  4.8M     0 100% /var/lib/snapd/snap/rustup/1027

But when I try to run dnf update, I get:

Total download size: 47 M
Is this ok [y/N]: y
Downloading Packages:
[Errno 28] No space left on device: '/var/cache/dnf/download_lock.pid'
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.

@chrismurphy: I have not done anything yet to try to solve the current issue, in case you would like me to provide some additional info.

That’s strange…

Do you have any btrfs messages in dmesg? dmesg | grep -i btrfs

1 Like

What does
df -hi
return?
Maybe you used 100% of your inodes?

I suggest running btrfs balance start -dlimit=5 /, which should free up enough data block groups, turning their space back into unallocated space. That unallocated space can then be turned into one or more metadata block groups, which is where I suspect Btrfs is getting stuck. It looks like it's overcommitting metadata and can't create another metadata block group because all the space is already allocated to existing data or metadata block groups.

Maybe you used 100% of your inodes?

Btrfs dynamically allocates inodes. This file system is constrained by its size, not the number of inodes that can be created.

The inode problem I have mostly come across was like this:

  • A process creates new files every second, all with size 0, but each with a new name and new metadata such as timestamps.

  • The files accumulate over time and don't actually consume data space, but each file has a corresponding inode storing that metadata.

  • The file system had a fixed number of inodes, so sooner or later they were all consumed.

The file system then reported itself as full.

I'm not that deep into btrfs, but wouldn't something similar be possible?

It may not have a fixed number of inodes, but it still stores metadata in dynamically created inodes, or did I understand that wrong?

Given that each file system has a fixed number of blocks, what would happen if those run full?

If your / and /home are on the same partition (the default Fedora partition layout), I believe it may be better to add another disk to save/back up the data in your /home directory, since your post above shows that your total device size is around 215.27GiB (from # btrfs filesystem usage /) while your /home directory alone uses 170.95GiB (from the quote above).
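To see how much of that space is actually in /home before deciding (a sketch, assuming a reasonably recent btrfs-progs), you can ask btrfs directly:

# Summarize the space used by the /home subvolume, including shared extents.
sudo btrfs filesystem du -s /home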

No:

Filesystem     Inodes IUsed IFree IUse% Mounted on
devtmpfs         2.0M   589  2.0M    1% /dev
tmpfs            2.0M     1  2.0M    1% /dev/shm
tmpfs            800K  1.2K  799K    1% /run
/dev/dm-1           0     0     0     - /
tmpfs            400K    65  400K    1% /tmp
/dev/dm-1           0     0     0     - /home
/dev/sda1        126K   448  125K    1% /boot
tmpfs            398K   244  398K    1% /run/user/1000
/dev/loop0        479   479     0  100% /var/lib/snapd/snap/snapd/13640
/dev/loop1        11K   11K     0  100% /var/lib/snapd/snap/core18/2128
/dev/loop2         10    10     0  100% /var/lib/snapd/snap/rustup/1027

This gives:

ERROR: error during balancing '/': No space left on device
There may be more info in syslog - try dmesg | tail
[root@rainbowdash ~]# dmesg | grep -i btrfs
[26990.960096] BTRFS info (device dm-1): balance: start -dlimit=5
[26990.960283] BTRFS info (device dm-1): relocating block group 232308867072 flags data
[26991.395723] BTRFS info (device dm-1): 5 enospc errors during balance
[26991.395736] BTRFS info (device dm-1): balance: ended with status: -28

Do this:

mount -o remount,enospc_debug /
btrfs balance start -dlimit=1 /
dmesg | grep -i btrfs

I'll file an rhbz and include all the information so far; it really shouldn't get stuck like this. Next, see if any of these work:

btrfs balance start -dusage=0 /
btrfs balance start -dusage=1 /
btrfs balance start -dusage=5 /

If 0 doesn’t work, the next two won’t either. It is still possible to fix this, but you’ll need some physical drive space you can spare, either on this drive or a USB stick. It can be a small (10G or so) empty partition. Chances are even 1G would do it.

btrfs device add /dev/sdXY /
btrfs balance start -dusage=0 /
btrfs balance start -dlimit=5 /
btrfs device remove /dev/sdXY /

It’s really important once you start the above that you keep the 2nd device you’re adding available for the entire duration until the remove command returns to a prompt. Part of the file system will be temporarily relocated to that 2nd device, so if it goes missing for any reason, the file system will end up broken. I don’t recommend using a ram disk of any kind for this, because should there be a power failure or a reboot needed, the ramdisk contents go away of course and now that part of the file system is simply gone and not recoverable. Whereas with a USB stick or other drive partition, it’ll survive a reboot.
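While the balance and the device removal are running, you can watch progress from another terminal (a sketch; both are standard btrfs-progs subcommands):

# Show how many chunks the running balance still has to relocate.
sudo btrfs balance status /
# Watch allocation move off the temporary device during the remove.
sudo btrfs filesystem usage /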

1 Like

Before I saw this response, I had started trying to get my system working. What I did was remove quite a lot of files from the system, and I kept trying the following until I no longer got the error message:
btrfs balance start -v -musage=0 /
I also tried this in between:
btrfs filesystem defrag -czstd -rv / /home/
Initially the defrag also gave "No space left on device" errors, but later it worked (after I removed files in an attempt to free up space).
Here are some snippets from dmesg | grep -i btrfs, but note that some of them are from before I did the remount you suggested:

[ 2150.255309] BTRFS info (device dm-1): balance: start -musage=0 -susage=0
[ 2150.255601] BTRFS info (device dm-1): 1 enospc errors during balance
[ 2150.255603] BTRFS info (device dm-1): balance: ended with status: -28
[ 2176.063300] BTRFS info (device dm-1): balance: start -dusage=0
[ 2176.063665] BTRFS info (device dm-1): balance: ended with status: 0
[ 2694.513497] BTRFS info (device dm-1): balance: start -musage=0 -susage=0
[ 2694.513693] BTRFS info (device dm-1): relocating block group 232308867072 flags metadata|dup
[ 2694.614748] BTRFS info (device dm-1): balance: ended with status: 0
[ 2700.244667] BTRFS info (device dm-1): balance: start -dusage=0
[ 2700.245019] BTRFS info (device dm-1): balance: ended with status: 0
[ 2706.597774] BTRFS info (device dm-1): balance: start -dusage=1
[ 2706.598094] BTRFS info (device dm-1): balance: ended with status: 0
[ 2712.430864] BTRFS info (device dm-1): balance: start -musage=1 -susage=1
[ 2712.431178] BTRFS info (device dm-1): relocating block group 232294711296 flags system|dup
[ 2712.537731] BTRFS info (device dm-1): balance: ended with status: 0
[ 2717.244173] BTRFS info (device dm-1): balance: start -musage=2 -susage=2
[ 2717.244466] BTRFS info (device dm-1): balance: ended with status: 0
[ 2722.061152] BTRFS info (device dm-1): balance: start -dusage=2
[ 2722.061519] BTRFS info (device dm-1): balance: ended with status: 0

Some additional info: there no longer seem to be any out-of-disk-space errors:

# btrfs filesystem usage /
Overall:
    Device size:		 215.27GiB
    Device allocated:		 211.27GiB
    Device unallocated:		   4.00GiB
    Device missing:		     0.00B
    Used:			 151.07GiB
    Free (estimated):		  62.42GiB	(min: 60.42GiB)
    Free (statfs, df):		  62.42GiB
    Data ratio:			      1.00
    Metadata ratio:		      2.00
    Global reserve:		 372.73MiB	(used: 0.00B)
    Multiple profiles:		        no

Data,single: Size:204.24GiB, Used:145.82GiB (71.40%)
   /dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1	 204.24GiB

Metadata,DUP: Size:3.50GiB, Used:2.62GiB (74.90%)
   /dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1	   7.01GiB

System,DUP: Size:8.00MiB, Used:48.00KiB (0.59%)
   /dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1	  16.00MiB

Unallocated:
   /dev/mapper/luks-e5fbe4ab-0ae9-4428-87c0-5c98b5acadd1	   4.00GiB
[root@rainbowdash btrfs]# cd /sys/fs/btrfs/d7b908f7-49fb-41a0-8c1f-69f62f3001a1
[root@rainbowdash d7b908f7-49fb-41a0-8c1f-69f62f3001a1]# grep -R . allocation/
allocation/metadata/disk_used:5638127616
allocation/metadata/bytes_pinned:0
allocation/metadata/bytes_used:2819063808
allocation/metadata/dup/used_bytes:2819063808
allocation/metadata/dup/total_bytes:3763863552
allocation/metadata/disk_total:7527727104
allocation/metadata/total_bytes:3763863552
allocation/metadata/bytes_reserved:0
allocation/metadata/bytes_readonly:65536
allocation/metadata/bytes_zone_unusable:0
allocation/metadata/bytes_may_use:390840320
allocation/metadata/flags:4
allocation/system/disk_used:98304
allocation/system/bytes_pinned:0
allocation/system/bytes_used:49152
allocation/system/dup/used_bytes:49152
allocation/system/dup/total_bytes:8388608
allocation/system/disk_total:16777216
allocation/system/total_bytes:8388608
allocation/system/bytes_reserved:0
allocation/system/bytes_readonly:0
allocation/system/bytes_zone_unusable:0
allocation/system/bytes_may_use:0
allocation/system/flags:2
allocation/global_rsv_reserved:390840320
allocation/data/disk_used:156572409856
allocation/data/bytes_pinned:0
allocation/data/bytes_used:156572409856
allocation/data/single/used_bytes:156572409856
allocation/data/single/total_bytes:219300233216
allocation/data/disk_total:219300233216
allocation/data/total_bytes:219300233216
allocation/data/bytes_reserved:0
allocation/data/bytes_readonly:0
allocation/data/bytes_zone_unusable:0
allocation/data/bytes_may_use:0
allocation/data/flags:1
allocation/global_rsv_size:390840320
# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.8G     0  7.8G   0% /dev
tmpfs           7.8G     0  7.8G   0% /dev/shm
tmpfs           3.2G   18M  3.1G   1% /run
/dev/dm-1       216G  152G   63G  71% /
tmpfs           7.8G  4.0K  7.8G   1% /tmp
/dev/dm-1       216G  152G   63G  71% /home
/dev/loop1      4.8M  4.8M     0 100% /var/lib/snapd/snap/rustup/1027
/dev/loop0       56M   56M     0 100% /var/lib/snapd/snap/core18/2128
/dev/loop2       33M   33M     0 100% /var/lib/snapd/snap/snapd/13640
/dev/sda1       477M  210M  239M  47% /boot
tmpfs           1.6G   76K  1.6G   1% /run/user/1000

1 Like

An additional remark: When I keep doing:
btrfs balance start -dusage=30 /
I keep getting:
Done, had to relocate 1 out of 215 chunks
Shouldn't there be no relocations left after I have run this command?
These are the last lines from dmesg | grep -i btrfs:

[ 4398.172628] BTRFS info (device dm-1): balance: start -dusage=30
[ 4398.172945] BTRFS info (device dm-1): relocating block group 238235942912 flags data
[ 4398.887876] BTRFS info (device dm-1): found 1 extents, stage: move data extents
[ 4399.101469] BTRFS info (device dm-1): found 1 extents, stage: update data pointers
[ 4399.286179] BTRFS info (device dm-1): balance: ended with status: 0
[ 4402.673170] BTRFS info (device dm-1): balance: start -dusage=30
[ 4402.673483] BTRFS info (device dm-1): relocating block group 239309684736 flags data
[ 4403.368624] BTRFS info (device dm-1): found 1 extents, stage: move data extents
[ 4403.575012] BTRFS info (device dm-1): found 1 extents, stage: update data pointers
[ 4403.761878] BTRFS info (device dm-1): balance: ended with status: 0
[ 4405.230120] BTRFS info (device dm-1): balance: start -dusage=30
[ 4405.230434] BTRFS info (device dm-1): relocating block group 240383426560 flags data
[ 4405.927356] BTRFS info (device dm-1): found 1 extents, stage: move data extents
[ 4406.151310] BTRFS info (device dm-1): found 1 extents, stage: update data pointers
[ 4406.348279] BTRFS info (device dm-1): balance: ended with status: 0
[ 4459.453829] BTRFS info (device dm-1): balance: start -dusage=30
[ 4459.454171] BTRFS info (device dm-1): relocating block group 241457168384 flags data
[ 4460.156043] BTRFS info (device dm-1): found 1 extents, stage: move data extents
[ 4460.392919] BTRFS info (device dm-1): found 1 extents, stage: update data pointers
[ 4460.594600] BTRFS info (device dm-1): balance: ended with status: 0

Looks much better now.

I filed the following bug: 2017895 – btrfs: 74% used, dnf reports no space left on device

The -dusage=30 filter means "only balance data block groups that have up to 30% usage", and I think that's adequate. Generally speaking, you don't need to balance metadata or system block groups. It might be that it's balancing the same block group over and over, as it remains just under 30% used, so I would ignore it.
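One way to confirm that it's the same block group being picked up each time (a sketch; it just compares the summary line before and after a run):

# If "Device allocated" does not shrink between runs, the balance is relocating
# the same lightly-used block group again and can safely be ignored.
sudo btrfs filesystem usage / | grep 'Device allocated'
sudo btrfs balance start -dusage=30 /
sudo btrfs filesystem usage / | grep 'Device allocated'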

2 Likes