My 1.8TB home subvolume was suddenly 99+% full this weekend (according to btrfs fi usage -T /var/mnt/data).
I tried moving hundreds of GB of data to different btrfs drives, but the space recovered was much smaller than the size of the files moved (due to compression? or maybe snapshots still holding references to the old extents?).
So I figured I’d change the profile for data on the subvol from dup to single to recover a bunch of space at once.
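For reference, the conversion I mean is a rebalance with a convert filter. This is just a sketch; it assumes the data chunks really are dup and that the filesystem is mounted read-write:

btrfs balance start -dconvert=single /var/mnt/data   # rewrite data chunks with the single profile, freeing the duplicate copies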
btrfs fi usage -T /var/mnt/data seems to report the subvol as:
/dev/sda 501GB
Total 501GB
Used 500.2GB
and 1.32TB unallocated.
Here’s the kicker:
The system subvol /, which is on a smaller 256GB SSD, is still read-only! btrfs fi usage -T / shows:
/dev/nvme0 134GB
Total 134GB
Used 106GB
and 72GB unallocated.
How do I fix this?
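In case it helps, this is what I can try from here, assuming the kernel flipped the mount read-only after hitting errors (the device and mount names are from my setup):

dmesg | grep -i btrfs            # look for "forced readonly" or tree error messages
btrfs property get -ts / ro      # check whether the subvolume itself is flagged read-only
mount -o remount,rw /            # attempt to remount read-write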
PS: If someone can explain how to connect to wifi using iw or something, I can provide logs.
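Sketching what I think the answer looks like, with placeholder SSID/passphrase and assuming the interface is wlan0 on a WPA2 network (iw alone can only join open networks, so wpa_supplicant has to handle the authentication):

ip link set wlan0 up
wpa_passphrase "MySSID" "MyPassphrase" > /tmp/wpa.conf   # placeholder credentials
wpa_supplicant -B -i wlan0 -c /tmp/wpa.conf              # -B runs it in the background
dhclient wlan0                                           # get an IP via DHCP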
I booted a SystemRescue live USB, ran btrfs check, and got some errors about missing inodes and/or files in /home/user/.config/.
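That was the default read-only check, presumably invoked as something like the following (the full output is further down):

btrfs check /dev/mapper/sysVol   # read-only by default, makes no changes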
I removed the broken files by copying the directory, deleting the old one, and renaming the copy back to .config.
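Roughly this, with paths from my system (.config.new is just a scratch name):

cd /home/user
cp -a .config .config.new   # copy out whatever is still readable
rm -rf .config              # remove the directory with the broken entries
mv .config.new .config      # put the clean copy back in place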
Then I ran memtest, which showed one faulty RAM module that I promptly removed.
On the first ordinary boot after this, the system entered emergency/maintenance mode. I still do not know how to fix it. Here is the btrfs check output:
Opening filesystem to check...
parent transid verify failed on 2757296128 wanted 249324 found 249319
parent transid verify failed on 2757296128 wanted 249324 found 249319
Ignoring transid failure
Checking filesystem on /dev/mapper/sysVol
UUID: ab093a6e-4634-4015-bcc4-9cd2d90a1c94
[1/7] checking root items
[2/7] checking extents
data backref 1090502656 root 2650 owner 1300840 offset 16384 num_refs 0 not found in extent tree
incorrect local backref count on 1090502656 root 2650 owner 1300840 offset 16384 found 1 wanted 0 back 0x56529b11caf0
incorrect local backref count on 1090502656 root 602 owner 1300840 offset 16384 found 0 wanted 1 back 0x5652d941df60
backref disk bytenr does not match extent record, bytenr=1090502656, ref bytenr=0
backpointer mismatch on [1090502656 4096]
ERROR: errors found in extent allocation tree or chunk allocation
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
ERROR: transid errors in file system
found 122533748736 bytes used, error(s) found
total csum bytes: 102188076
total tree bytes: 8443166720
total fs tree bytes: 7468777472
total extent tree bytes: 807501824
btree space waste bytes: 1872113249
file data blocks allocated: 2195595395072
referenced 630891356160
I did a backup of the filesystem using btrfs restore. A weird thing occurred: is it normal for snapshots taken with snapper to produce such a deep, seemingly recursive directory structure?
PS: I know the target device ran out of space; it had 3 times more free space than the source filesystem took up before restore started. (My guess is that restoring with -s writes every snapshot out as a full, unshared copy, which multiplies the size.)
btrfs restore -mSsx /dev/mapper/sysVol backupmnt/clevoSysVol/
parent transid verify failed on 2757296128 wanted 249324 found 249319
parent transid verify failed on 2757296128 wanted 249324 found 249319
Ignoring transid failure
ERROR: cannot write data: 28 No space left on device
ERROR: copying data for backupmnt/clevoSysVol/home00/.snapshots/1/snapshot/.snapshots/1/snapshot/.snapshots/1/snapshot/.snapshots/1/snapshot/.snapshots/1/snapshot/Jacob/.local/share/baloo/index failed
ERROR: searching directory backupmnt/clevoSysVol/home00/.snapshots/1/snapshot/.snapshots/1/snapshot/.snapshots/1/snapshot/.snapshots/1/snapshot/.snapshots/1/snapshot/Jacob/.local/share/baloo/index failed: -1
Edit: I deleted all the nested/recursive snapshot directories. I think maybe I had not added the correct snapshot exemptions to snapper. I re-ran the restore command without -s, and got the whole image to fit on the target drive.
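That is, presumably:

btrfs restore -mSx /dev/mapper/sysVol backupmnt/clevoSysVol/   # -s dropped, so snapshots are skipped

(-m restores owner/mode/timestamps, -S restores symlinks, -x restores extended attributes.)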
Will reformat and repopulate the system drive tomorrow, I guess.