No idea why storage space is full (df says it is full, but du says otherwise)

Well, this sounds tougher than the others and I don’t know how to do it yet, but I will look into it and try once I figure it out.

You can inspect the folders that take up the most space with “ncdu”. I use it, and I find it very convenient.

sudo dnf install ncdu
sudo ncdu -x /
sudo ncdu /home


ncdu also reports that only 70.8 GB is taken

No. It looks like 70GB are used by /home, and 97GB by /var + 14GB by /usr
The sum is not 460GB anyway…
Btrfs snapshots? @emanuc are they counted by du/ncdu?
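One way to check the snapshot angle, assuming the filesystem is mounted at /, is to ask btrfs itself rather than du/ncdu (which only see the mounted subvolume):

```shell
# List only the snapshot subvolumes on the mounted filesystem:
sudo btrfs subvolume list -s /

# Summarise real usage, including data shared between subvolumes
# (this is btrfs-aware, unlike plain du):
sudo btrfs filesystem du -s /
```

If the snapshot list is long, deleting old ones and waiting a bit (cleanup is asynchronous) can free space that du never showed.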

@uoin982,

I wonder why

Metadata ratio:		      2.00

Was this btrfs made fresh during a fedora workstation install from the official media? Or was a conversion ever done? Did it always have zstd compression enabled? Was it always on a single partition on the nvme?

Take a look at

man btrfs-balance

Maybe that could help.
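For example, a conservative balance that only rewrites mostly-empty chunks often releases "allocated but unused" space without touching everything on disk (the exact usage thresholds here are just a common starting point, not a requirement):

```shell
# Compact data chunks that are <50% used and metadata chunks <30% used;
# this returns their slack to the unallocated pool:
sudo btrfs balance start -dusage=50 -musage=30 /

# Watch progress from another terminal:
sudo btrfs balance status /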

It is frustrating that btrfs does not make it easy to see where all the space went. And what is being reported is not easy to follow either, for instance, are these numbers before or after compression?

To answer my questions I can start with no knowledge and run

btrfs device scan --all-devices
btrfs filesystem show --all-devices

I do not know if you can query an unmounted btrfs and find out the mount point of the last successful mount, but the filesystem needs to be mounted before running

btrfs subvolume list /mountpoint

So before mounting how do I determine what subvolumes exist? Which subvolume is the default? Seems important to know beforehand.
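One workaround: mount the top-level subvolume (id 5) somewhere temporary and inspect from there. A sketch, assuming the device name from the usage output later in this thread (/dev/nvme0n1p2):

```shell
# Mount the top of the btrfs tree, regardless of which subvolume is the default:
sudo mkdir -p /mnt/btrfs-root
sudo mount -o subvolid=5 /dev/nvme0n1p2 /mnt/btrfs-root

# Every subvolume is now visible:
sudo btrfs subvolume list /mnt/btrfs-root

# And this reports the default subvolume (what a plain mount would give you):
sudo btrfs subvolume get-default /mnt/btrfs-root

sudo umount /mnt/btrfs-root
```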

I am sure my frustration is due to lack of knowledge, so I will keep reading and experimenting.

Huh?

Maybe a screenshot would help.

A while ago my storage was full so I deleted some files, but it remained pretty full until I ran a btrfs balance. So maybe try that (using Btrfs Assistant if you want a GUI). It may take a while since it has to rearrange pretty much everything on the disk. It should be safe, but feel free to back things up if not already.
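A balance can also be paused and resumed, which helps on a laptop where it may not finish in one sitting:

```shell
sudo btrfs balance status /    # show progress of a running balance
sudo btrfs balance pause /     # pause it (e.g. before suspending)
sudo btrfs balance resume /    # pick up where it left off
```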


sudo find / -type f -size +1G -exec ls -lh {} \; | less

I’d suggest running a full btrfs balance as well.

I am not that professional, so I don’t usually play with the disk during installation, but I will try to answer as far as I know: yes, it is from the official media, and since then I haven’t touched anything about the disk. A conversion: probably not. zstd compression: maybe yes, since I don’t know how to enable/disable it. And yes, it has always been on a single partition on the NVMe.

These numbers are the newest I can get. I can refresh them, but between then and now the only difference is that I deleted some old snapshots.

I would like to answer, but I don’t understand how to check it right now.

I will run the commands that you wrote, but I will make a backup before that.

-r--------. 1 root root 128T nov 19 ##:## /proc/kcore
-rw-------. 1 root root 129G nov 19 ##:## /var/lib/libvirt/images/win10_VM.qcow2

I don’t know why kcore shows 128T, but the lower one is the virtual machine.

/proc/kcore seems to be the size of the addressable physical memory in the Linux kernel.

man proc_kcore

Deleting files in btrfs is not instantaneous. After reading about garbage collection in btrfs, which I am not sure is part of the implementation in Fedora, I wonder how to query btrfs about it.

There is also the possibility of file sparseness, which is a feature qcow can use. There are articles on finding sparse files, and I would also like to find a more direct way in btrfs if possible.

So unfinished garbage collection and file sparseness are two possible ways reported usage and real usage can differ.
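As a quick illustration of the sparseness point, no btrfs specifics needed: comparing du’s real on-disk usage with its --apparent-size shows exactly the kind of gap a sparse qcow image can produce.

```shell
# Create a 1 GiB sparse file: the logical size is 1G, but the "hole"
# occupies (almost) no disk blocks.
tmpdir=$(mktemp -d)
truncate -s 1G "$tmpdir/sparse.img"

# Apparent (logical) size in KiB -- counts the holes:
apparent_kb=$(du -k --apparent-size "$tmpdir/sparse.img" | cut -f1)
# Real on-disk usage in KiB -- holes take no space:
real_kb=$(du -k "$tmpdir/sparse.img" | cut -f1)

echo "apparent: ${apparent_kb} KiB, on disk: ${real_kb} KiB"
rm -rf "$tmpdir"
```

Tools that sum apparent sizes will report the 1 GiB; df, which counts blocks, will not.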

It doesn’t seem that “/proc/kcore” is the culprit, also because it shows “128T”. I ran that command as well [1], and it shows “128T” for me too. I have a 500 GB NVMe SSD, and it’s not full. If you add up the “qcow” image with other used data, does it actually match the space being occupied? Also, qcow images on Btrfs aren’t ideal because of “cow + cow.” I use “raw” images for VMs.

[1] -rwxr--r--. 1 emanu emanu 11G 6 ott 15.15 /home/emanu/.local/share/gnome-boxes/images/ubuntu24.04
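If you keep qcow2 images on Btrfs, one common mitigation for the “cow + cow” problem is to disable copy-on-write for the images directory. A sketch; note that the +C attribute only applies to newly created files, so existing images have to be recreated, and nodatacow also disables compression and checksumming for those files:

```shell
# Disable COW for new files created under the libvirt images directory:
sudo chattr +C /var/lib/libvirt/images

# Existing images must be copied anew (VM shut down) to pick up the attribute, e.g.:
# sudo cp --reflink=never win10_VM.qcow2 win10_VM.nocow.qcow2
```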

-r--------. 1 root root 128T 20 nov 20.05 /proc/kcore

sudo btrfs filesystem usage -T /
--cut--
                  Data      Metadata System
Id Path           single    DUP      DUP      Unallocated Total     Slack
-- -------------- --------- -------- -------- ----------- --------- -----
 1 /dev/nvme0n1p2 133.00GiB 16.00GiB 64.00MiB   290.73GiB 439.80GiB     -
-- -------------- --------- -------- -------- ----------- --------- -----
   Total          133.00GiB  8.00GiB 32.00MiB   290.73GiB 439.80GiB 0.00B
   Used           118.03GiB  2.51GiB 16.00KiB

Also consider defragmenting only the VM image.
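For a single file that could look like the following, run with the VM shut down. One caveat worth knowing: defragmenting breaks extent sharing with any snapshots of that file, so it can temporarily increase reported usage:

```shell
# Defragment just the VM image; -t 128M asks btrfs to coalesce
# extents up to that target size, -v prints the file as it is processed:
sudo btrfs filesystem defragment -v -t 128M /var/lib/libvirt/images/win10_VM.qcow2
```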

EDIT:

Could you check how fragmented the qcow image is:
sudo compsize /var/lib/libvirt/images/win10_VM.qcow2

Even though I don’t think this is the issue or a problem related to Btrfs, I believe the space you’re seeing is indeed occupied by the total sum of all the data.

Thanks everyone for your time and efforts to help!
I’m sorry that I wasn’t answering for so long.
In short, what happened:
The last thing I tried was a btrfs balance, but it never completed; it stopped at around 50% because the power grid was unstable at the time and my battery died.
More importantly, my disk started filling up before my eyes: free space dropped to around 654 kB (basically zero) and then came back to about 40 GB (this started happening before I launched the btrfs balance). It wasn’t a big problem at first glance, but the second time I tried to launch the balance it was so slow and laggy that the app stopped responding, and everything else, like Chromium, did too. This cycle of filling up and recovering repeated several times.
So I backed up some important data somehow and just reinstalled the system.