Running out of inodes on btrfs

Hi. I have Fedora 36, running with LUKS + BTRFS using the standard automatic partitioning that the installer offered.

Earlier today, my system failed to boot, and upon closer inspection I noticed it was reporting that I had run out of space. I switched to a virtual console with Ctrl + Alt + F3, logged in, and tried mkdir test; sure enough, it failed due to lack of storage.

So I ran df -h, and my root partition shows 24 GB free. It looks like I've run out of inodes instead. Except I thought that btrfs allocates inodes dynamically and does not really have a practical limit on their number; some Googling around suggests the limit is approximately 18.4 quintillion.
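One way to confirm whether inodes rather than blocks are exhausted is to compare df -h against df -i. A quick sketch (note that on btrfs, df -i typically reports 0 total inodes, precisely because they are allocated dynamically rather than reserved at mkfs time, so this check is most informative on ext4-style filesystems):

```shell
# Block usage on the root filesystem
df -h /
# Inode usage on the root filesystem; on btrfs the totals show as 0
df -i /
```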

In any case, I deleted some files. I had two WINE prefixes that I didn’t need. After removing those and rebooting, my system booted right back up. So, I guess I did run out of inodes.

This would suggest that my 128GB root drive consumed all 18.4 quintillion btrfs inodes, which sounds impossible. Running a script to list the directories containing the most files returned the following as the worst offenders:

6755 ./usr/src/kernels/5.19.9-200.fc36.x86_64/include/config
6756 ./usr/src/kernels/5.19.7-200.fc36.x86_64/include/config
6756 ./usr/src/kernels/5.19.8-200.fc36.x86_64/include/config
11259 ./usr/share/man/man3

but the total list probably doesn’t account for more than 30k files. From what I know, these are not astronomical numbers of files.
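For anyone who wants to reproduce this, a one-liner along these lines (my reconstruction, not necessarily the exact script I ran) produces that kind of count-per-directory listing, using GNU find’s -printf to emit each file’s parent directory:

```shell
# Count regular files per directory and print the ten directories
# holding the most; -xdev keeps the search on one filesystem
find . -xdev -type f -printf '%h\n' | sort | uniq -c | sort -n | tail -10
```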

The only other thing that might be relevant is that I have daily Timeshift snapshots, but at no time am I keeping more than 8 snapshots (the rest are auto-pruned), and I don’t know enough about Timeshift and btrfs snapshots to say whether this has any impact.

Can anyone offer a theory on why this happened and how I might avoid it in the future?

Without having an exact image of your system for testing, I can only guess.
I wonder what size your btrfs volume is?
I also note that the figure you quoted is approximately the theoretical maximum number of inodes btrfs is capable of handling, not the number available on the hardware you are using. Your drive is certainly far smaller than the maximum btrfs can address.

There is no set ratio between the space files occupy and the inodes they use. A file that occupies only 2 bytes still consumes a full inode, and on many filesystems (ext4, for example) the total inode count is fixed when the filesystem is created. A lot of small files can therefore exhaust the available inodes long before they fill the available space.

Another factor may be the way the file manager deletes files. It merely moves them to the trash until you manually empty it, so those files still occupy their inodes while they remain in the trash. Have you emptied your trash recently?
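For what it’s worth, you can check this from a terminal, assuming the standard freedesktop.org per-user trash location under your home directory (other trash directories can exist per mount point):

```shell
# Default per-user trash location per the freedesktop.org Trash spec
TRASH="$HOME/.local/share/Trash"
# Total space the trash is holding
du -sh "$TRASH" 2>/dev/null
# Number of trashed items
find "$TRASH/files" -mindepth 1 2>/dev/null | wc -l
```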

Thanks for trying to help out.

I’ve emptied the trash, but I only had a couple of text files in there.

Googling further, I encountered mentions of btrfs balance problems, such as here: linux - btrfs: HUGE metadata allocated - Super User

Running the suggested diagnostic command, I get:

[user@fedora /]$ btrfs filesystem df /
Data, single: total=111.62GiB, used=85.61GiB
System, DUP: total=8.00MiB, used=16.00KiB
Metadata, DUP: total=3.00GiB, used=2.74GiB
GlobalReserve, single: total=241.16MiB, used=0.00B

It seems that metadata is pretty close to full, and it may have been completely full before those WINE prefixes were deleted.

Running the proposed solutions btrfs balance start -m / and btrfs balance start -dusage=5 / resulted in this:

[user@fedora /]$ btrfs filesystem df /
Data, single: total=95.62GiB, used=65.98GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=2.97GiB, used=1.12GiB
GlobalReserve, single: total=201.72MiB, used=16.00KiB

So, my data usage went down by 20 GiB, and the metadata went down…

I’m not really sure what happened here. Should I be worried any further?