"No space left on device" but I have free space left

I have free space left on both the root and home partitions, but any operation that needs to write to storage, like creating files or directories, fails with a “No space left on device” error.

$ mkdir new
mkdir: cannot create directory ‘new’: No space left on device

The error also appears when creating a folder from the GNOME GUI.

df -h output:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p7  197G  172G   25G  88% /
devtmpfs        4.0M     0  4.0M   0% /dev
tmpfs            12G   96K   12G   1% /dev/shm
efivarfs        268K  221K   43K  84% /sys/firmware/efi/efivars
tmpfs           4.7G   13M  4.7G   1% /run
tmpfs           1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
/dev/nvme0n1p7  197G  172G   25G  88% /home
tmpfs            12G  360K   12G   1% /tmp
/dev/nvme0n1p5  974M  485M  423M  54% /boot
/dev/nvme0n1p1  256M   62M  195M  25% /boot/efi
/dev/nvme0n1p3  265G  234G   31G  89% /mnt/D810F06910F0504C
tmpfs           1.0M     0  1.0M   0% /run/credentials/systemd-resolved.service
tmpfs           2.4G  188K  2.4G   1% /run/user/1000

df -i output:


$ df -i
Filesystem       Inodes   IUsed    IFree IUse% Mounted on
/dev/nvme0n1p7        0       0        0     - /
devtmpfs        3027019     828  3026191    1% /dev
tmpfs           3037422       6  3037416    1% /dev/shm
efivarfs              0       0        0     - /sys/firmware/efi/efivars
tmpfs            819200    1688   817512    1% /run
tmpfs              1024       2     1022    1% /run/credentials/systemd-journald.service
/dev/nvme0n1p7        0       0        0     - /home
tmpfs           1048576      59  1048517    1% /tmp
/dev/nvme0n1p5    65536      44    65492    1% /boot
/dev/nvme0n1p1        0       0        0     - /boot/efi
/dev/nvme0n1p3 34214884 1069259 33145625    4% /mnt/D810F06910F0504C
tmpfs              1024       2     1022    1% /run/credentials/systemd-resolved.service
tmpfs            607484     205   607279    1% /run/user/1000

My guess is that this was caused by an improper shutdown of my system, but I have no clue.

Even normal operations like downloading files from the browser fail, and dnf gets stuck when trying to install or update any packages.

I cannot afford to delete files in the process of trying to fix this issue.

It is probably a problem with Btrfs. I would suggest adding the btrfs tag.

Try running a btrfs balance.

sudo btrfs balance start -dusage=50 /

According to this, your btrfs volume is almost 90% full.

At that level it becomes critical to provide additional space very soon to avoid file system problems in the future.

As far as dnf is concerned, you may be able to clear out some unnecessary files with sudo dnf4 clean all (dnf4 is no longer used in Fedora 41 and 42, and the update to dnf5 leaves the dnf4 cache orphaned). You can also clean unused data from the dnf5 cache with sudo dnf clean all, since that data will be re-downloaded the next time you run dnf.

Btrfs uses space for metadata and duplicated data that older file systems don’t support and that df doesn’t include – see man 8 btrfs-filesystem. Here:

% btrfs fi df /              
Data, single: total=141.00GiB, used=131.62GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=3.00GiB, used=2.26GiB
GlobalReserve, single: total=252.30MiB, used=0.00B
% df -BG /
Filesystem     1G-blocks  Used Available Use% Mounted on
/dev/nvme0n1p3      465G  137G      327G  30% /

For explanation of DUP see: Btrfs Profiles.


The OP still has 25GiB available. That probably isn’t pure metadata overhead.

Without more information, in order of likelihood I would suspect:

  • The OP is unable to provision new metadata blocks. This is what the balance command above can help with.
  • The filesystem is corrupt and needs repair.
  • The filesystem is being mounted read-only for some reason.
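
For the third possibility there is a quick check that needs no btrfs tools: look at the mount options of /. If the options string starts with ro, the kernel has (re)mounted the filesystem read-only, which btrfs does after certain errors.

```shell
# Print the mount options of the root filesystem; a string beginning
# with "ro" means it is mounted read-only. /proc/self/mounts is
# available on any Linux system (fields: dev, mountpoint, type, opts).
awk '$2 == "/" {print $4}' /proc/self/mounts | head -n 1
```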
$ sudo btrfs balance start -dusage=50 /
ERROR: error during balancing '/': No space left on device
There may be more info in syslog - try dmesg | tail

I also ran dmesg | tail:

$ sudo dmesg | tail
[  133.934521] rfkill: input handler enabled
[  136.939696] traps: ibus-daemon[3231] trap int3 ip:7fe0e032ed53 sp:7fffe12558b0 error:0 in libglib-2.0.so.0.8400.2[4bd53,7fe0e02e3000+a5000]
[  136.941704] rfkill: input handler disabled
[  137.271748] traps: ibus-daemon[3652] trap int3 ip:7f22853e5d53 sp:7ffdd15a6d20 error:0 in libglib-2.0.so.0.8400.2[4bd53,7f228539a000+a5000]
[  138.215162] traps: ibus-daemon[3970] trap int3 ip:7f50f6b18d53 sp:7ffe3a59a0b0 error:0 in libglib-2.0.so.0.8400.2[4bd53,7f50f6acd000+a5000]
[  138.496845] traps: ibus-daemon[4055] trap int3 ip:7f7ebb92dd53 sp:7ffe9bea49e0 error:0 in libglib-2.0.so.0.8400.2[4bd53,7f7ebb8e2000+a5000]
[  138.813807] traps: ibus-daemon[4097] trap int3 ip:7fe56951ad53 sp:7ffef5c2c0f0 error:0 in libglib-2.0.so.0.8400.2[4bd53,7fe5694cf000+a5000]
[  165.237559] BTRFS info (device nvme0n1p7): balance: start -dusage=50
[  165.285142] BTRFS info (device nvme0n1p7): 10 enospc errors during balance
[  165.285144] BTRFS info (device nvme0n1p7): balance: ended with status: -28

It doesn’t exactly feel like the device is simply mounted read-only; it’s as if every operation fails due to lack of free storage. Even opening normal applications like Firefox fails.

Even these commands fail due to lack of free space, though I do have around 25 GB available.

$ sudo dnf4 clean all
Failed to load expired repos cache: [Errno 28] No space left on device: '/var/cache/dnf/expired_repos.json'
[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
Failed to store expired repos cache: [Errno 28] No space left on device: '/var/cache/dnf/expired_repos.json'

$ sudo dnf clean all

dnf clean all just gets stuck with no output on the terminal.

You still have not shown us the output of

 mount -t btrfs 
 btrfs fi df /

Also add the output of

sudo dmesg |grep -i -e btrfs -e nvme
sudo btrfs subvolume  list -a   /
$ mount -t btrfs 
/dev/nvme0n1p7 on / type btrfs (rw,relatime,seclabel,compress=zstd:1,ssd,discard=async,space_cache=v2,subvolid=256,subvol=/root)
/dev/nvme0n1p7 on /home type btrfs (rw,relatime,seclabel,compress=zstd:1,ssd,discard=async,space_cache=v2,subvolid=257,subvol=/home)


$ btrfs fi df /
Data, single: total=191.98GiB, used=167.09GiB
System, DUP: total=8.00MiB, used=48.00KiB
Metadata, DUP: total=2.50GiB, used=2.14GiB
GlobalReserve, single: total=368.59MiB, used=1.50MiB
$ sudo dmesg |grep -i -e btrfs -e nvme
[    0.766557] Btrfs loaded, zoned=yes, fsverity=yes
[    1.741886] nvme 0000:06:00.0: platform quirk: setting simple suspend
[    1.741961] nvme nvme0: pci function 0000:06:00.0
[    1.785768] nvme nvme0: allocated 64 MiB host memory buffer (16 segments).
[    1.805419] nvme nvme0: 12/0/0 default/read/poll queues
[    1.815108]  nvme0n1: p1 p2 p3 p4 p5 p6 p7
[    2.282989] BTRFS: device label fedora devid 1 transid 3371109 /dev/nvme0n1p7 (259:7) scanned by mount (592)
[    2.283175] BTRFS info (device nvme0n1p7): first mount of filesystem b905e74f-9956-48c1-9304-5ed8d405587c
[    2.283192] BTRFS info (device nvme0n1p7): using crc32c (crc32c-x86_64) checksum algorithm
[    2.283196] BTRFS info (device nvme0n1p7): using free-space-tree
[    3.609930] BTRFS info (device nvme0n1p7 state M): use zstd compression, level 1
[    4.150470] Adding 12582908k swap on /dev/nvme0n1p6.  Priority:-2 extents:1 across:12582908k SS
[    4.571826] EXT4-fs (nvme0n1p5): mounted filesystem 3dfcb496-0641-4d88-9999-bf0a9a9662c3 r/w with ordered data mode. Quota mode: none.
[    6.493002] nvme nvme0: using unchecked data buffer
[    6.623760] block nvme0n1: No UUID available providing old NGUID

$ sudo btrfs subvolume  list -a   /
ID 256 gen 3373273 top level 5 path <FS_TREE>/root
ID 257 gen 3373268 top level 5 path <FS_TREE>/home
ID 258 gen 3371153 top level 256 path root/var/lib/machines

Let’s see if this can free some chunks:

sudo btrfs balance start -dusage=0 /

Do you really require the swap device /dev/nvme0n1p6?

swapon -s
sudo btrfs device usage /
sudo btrfs fi show /

See man btrfs-balance and search for ENOSPC.

This is because Btrfs requires large chunks of space for some operations. I have run into this before: say you have 2 GB of free space left on a 120 GB partition and you try to rebalance - it will complain about no space left. I think that’s because it has to move large chunks and store them temporarily during balancing. I added more space to the partition, and it worked normally after that.

Is there information anywhere about how much mandatory extra free space Btrfs needs for certain operations? It would be nice if it warned you up front that there isn’t enough space for a given operation.

You can add an extra device to make the filesystem writable, delete files, and balance the filesystem.

In the meantime, you can enable “dynamic balance”, which automatically handles space recovery.
You need to enable it at each boot, or you can create a systemd service to enable it.
For example, I enable it using a systemd service:
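
Roughly like this (a sketch, not my exact unit; the sysfs knob name and path assume a recent kernel with dynamic reclaim support, and the UUID is the one from the dmesg output above - replace it with your filesystem’s UUID):

```ini
# /etc/systemd/system/btrfs-dynamic-reclaim.service  (sketch)
[Unit]
Description=Enable btrfs dynamic reclaim for data block groups
After=local-fs.target

[Service]
Type=oneshot
# Assumed sysfs knob; check that the path exists on your kernel first.
ExecStart=/bin/sh -c 'echo 1 > /sys/fs/btrfs/b905e74f-9956-48c1-9304-5ed8d405587c/allocation/data/dynamic_reclaim'

[Install]
WantedBy=multi-user.target
```

Then enable it once with sudo systemctl enable --now btrfs-dynamic-reclaim.service.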


It’s not obvious, but in effect these two lines are saying that all the metadata block groups are full. btrfs fi usage / probably shows no unallocated space, or else Btrfs would just create a new empty metadata block group and there’d be no problem.

Counterintuitively, deleting files on Btrfs does not increase free space; it reduces it! That’s because the delete requires a write of metadata into free space, due to the COW requirement, to record in the file btree that the file is now deleted. Only after that metadata write commits to stable media can the data extents be freed.

The problem is we can’t write metadata, it’s full. But also even if it could write, the freed up space happens in data block groups which doesn’t help the problem in metadata block groups.

I suggest rebooting with a USB stick. Any recent kernel is OK. While the file system is mounted as a running root file system, all kinds of things are being written all the time, mostly logs. So we need to stop writing to the file system, and then see if it’s possible to delete something you don’t need.

Ideally it would be one really big file - a VM image you don’t need would be great. But it could also be large video files that you have backed up elsewhere, or that you can copy somewhere else and then delete the originals on this file system.
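
Concretely, from the live session it might look like this (a sketch; the device and subvolume names are taken from the mount output earlier in this thread, and the file path is hypothetical):

```shell
sudo mkdir -p /mnt/root
sudo mount -o subvol=/root /dev/nvme0n1p7 /mnt/root

# Find the largest files and directories, then delete something expendable:
sudo du -ahx /mnt/root | sort -rh | head -n 20
# sudo rm /mnt/root/path/to/big-file    # hypothetical path

sudo umount /mnt/root
```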

That’s safer than adding another device, which can work but is sometimes a problem too with such a full file system. It also leaves the file system on two devices, and people don’t always understand the implications of that: I’ve seen people simply remove the device thinking they’re done with it, and that kills the file system, usually permanently.

So just be careful with too much exotic advice. File system stuff takes patience. I understand that it’s not easy.

Is the single GlobalReserve an extra block group, or is this reserve part of the overall Metadata (2.5GiB)?
2.5G - 2.14G ≃ 368M

The FS has to allocate new empty block groups as Metadata.
I would have temporarily disabled the swap partition /dev/nvme0n1p6 and then

# btrfs device add /dev/nvme0n1p6 / 
# btrfs  balance start -ddevid=1,usage=50  /    
# # try 50 first, otherwise try 89 or 90  

# # check GlobalReserve is unused &&  Metadata > 2.5GiB
# btrfs fi df /  
# btrfs fi usage /
  
# btrfs device remove  /dev/nvme0n1p6 /
# mkswap  /dev/nvme0n1p6   

re-enable swap
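
That is, bracketing the steps above (sketch):

```shell
sudo swapoff /dev/nvme0n1p6   # before 'btrfs device add'
# ... device add / balance / device remove / mkswap, as above ...
sudo swapon /dev/nvme0n1p6    # note: mkswap writes a new UUID, so an
                              # fstab entry referencing swap by UUID
                              # must be updated first
```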


Metadata always allocates more space as it gets used up. I have never seen it more than 50% used; it always bumps up the total size if usage goes up. Why would it be exhausted in this case?

Use case? I think small files may be stored in metadata.

Two of my systems (in other buildings, one server and one old iMac mostly used to check if new stuff still works) are at 50%, the 3rd (daily driver laptop I use for email and web surfing) is at 75%. All have btrfs-assistant with default timing: weekly balance, monthly scrub, and never defrag. The two non-Apple systems are “over-provisioned” since I removed Windows; iMac uses a 128G nvme SSD in a USB3 case.

Oh, it’s the DUP part of it. Metadata is duplicated by default, so you’ll always have <50% ‘used’ shown. If it’s not DUP, then you will get closer to 100% of metadata space used.

Nonsense… there is a second copy:

# btrfs fi usage /
[deleted ].

Data,single: Size:114.00GiB, Used:108.48GiB (95.15%)
   /dev/nvme0n1p6	 114.00GiB

Metadata,DUP: Size:2.00GiB, Used:1.02GiB (50.98%)
   /dev/nvme0n1p6	   4.00GiB

System,DUP: Size:32.00MiB, Used:16.00KiB (0.05%)
   /dev/nvme0n1p6	  64.00MiB

btrfs-progs: Documentation/btrfs-filesystem.rst

GlobalReserve/total + Metadata/used = Metadata/total

368MiB + 2.14GiB = just barely under 2.5GiB. In effect metadata block groups are full.

I would also disable swap and add that partition as a 2nd device to hopefully get the file system unwedged. But balancing alone is not enough: it really needs files deleted first, then a filtered balance. It’s a bit of a guess what usage value to use; you can just start with 1 and increment up to 10. But you also need to check dmesg to see whether any block groups were actually consolidated, which isn’t obvious.
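
For example (a sketch; run the steps one at a time and read the dmesg output rather than scripting this blindly):

```shell
# Filtered balance with an increasing usage threshold; after each run,
# dmesg reports how many block groups were relocated.
for u in 1 5 10; do
  sudo btrfs balance start -dusage=$u /
  sudo dmesg | tail -n 5
done
```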

There is a debug tool in upstream btrfs-progs that can help make it more obvious:

Use it with the -b flag and point it at / or /home or wherever this btrfs is mounted.

The result is a list of data block groups, their sizes, and how full they are. If there’s a bunch of block groups that are less than 10% full, then it makes sense to do a filtered balance, e.g. -dusage=10 to limit the balance to block groups that are 10% or less full.

The block groups with the most free space are the fastest to consolidate; that returns their space to unallocated, making it possible for the kernel to allocate it as a metadata block group (instead of a data block group). And then the file system can go back to a single device.
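
You can watch this happening; “Device unallocated” is the space the kernel can turn into new metadata (or data) block groups:

```shell
sudo btrfs filesystem usage / | grep -i unallocated
```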


So the reason this can happen is a) data and metadata block groups are allocated dynamically on demand b) the workload can change, thereby being more or less metadata centric, therefore the metadata to data ratio changes. And if the ratio changes once all space is allocated, Btrfs can’t really do anything.

Later kernels are more aggressive at reserving metadata block groups as the file system ages in order to prevent this problem at the expense of possible enospc when data block groups are full, metadata block groups have some free space still, but no unallocated space to create new data block groups.

So it’s a catch-22.

The way forward in the nearish future is probably the dynamic reclaim feature in the kernel. I won’t get into that here because I don’t think it’s a way to fix the current problem, but it might be a way to (automatically) prevent such problems from happening in the future. It’s not enabled by default, but it’s considered safe and ready for testing by users interested in giving feedback to upstream developers.
