A lot of files that should be compressible aren't getting compressed by btrfs.

Note: I have my disk mounted with `compress-force=zstd:6`, but it seemed to happen with the default `compress=zstd:1` as well.
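A quick way to confirm which compression option is actually in effect before reproducing (the `/` mount point is just an example; adjust for your setup):

```sh
# Show the mount options for the filesystem mounted at /
findmnt -no OPTIONS /

# Or look for the btrfs entries in /proc/mounts
grep btrfs /proc/mounts
```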
1. `yes ladsfjlkdsafjl > test.bin`
2. Ctrl-C after a bit
3. `sudo compsize test.bin`
4. `dd if=test.bin of=test2.bin bs=1M`
5. `sudo compsize test2.bin`
6. Repeat steps 4 and 5 a few times (see the script below)
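The same steps as a small script, for convenience. The `timeout 5` stands in for the manual Ctrl-C and the loop count is arbitrary; neither is from the original report:

```sh
#!/bin/sh
# Generate a few seconds of trivially compressible data
timeout 5 yes ladsfjlkdsafjl > test.bin
sudo compsize test.bin

# Duplicate the file a few times, checking compression after each copy
for i in 1 2 3; do
    dd if=test.bin of=test2.bin bs=1M
    sudo compsize test2.bin
done
```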
What I observe:

- Sometimes the `test2.bin` copies are fully compressed.
- Sometimes the files have random amounts of non-compressed data.
- The amount not compressed changes with each duplication of the file.
I think this is due to a mismatch between the system page size and the btrfs sector size. This comment from the kernel's btrfs code (fs/btrfs/inode.c) describes the limitation:
 * Special check for subpage.
 *
 * We lock the full page then run each delalloc range in the page, thus
 * for the following case, we will hit some subpage specific corner case:
 *
 * 0          32K          64K
 * |    |///////|    |///////|
 *      \- A         \- B
 *
 * In above case, both range A and range B will try to unlock the full
 * page [0, 64K), causing the one finished later will have page
 * unlocked already, triggering various page lock requirement BUG_ON()s.
 *
 * So here we add an artificial limit that subpage compression can
 * only be enabled if the range is fully page aligned.
 *
 * In theory we only need to ensure the first page is fully covered, but
 * the tailing partial page will be locked until the full compression
 * finishes, delaying the write of other range.
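To confirm the mismatch on an affected machine, compare the kernel's page size with the filesystem's sector size (the device path below is a placeholder; point it at your btrfs partition):

```sh
# Kernel page size: 16384 on Asahi's 16K-page kernels
getconf PAGESIZE

# btrfs sector size: 4096 on the default install image
sudo btrfs inspect-internal dump-super /dev/nvme0n1p5 | grep sectorsize
```

Any write whose start or end is not aligned to the 16K page size falls into the "artificial limit" described in the comment above and is written uncompressed.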
Would it make more sense for the default Asahi install image to use a sector size of 16K instead of the 4K size it uses currently, since it's not like it'll be used with any 4K page systems?
The images are built on 4K-page ARM64 builders, so that is not possible. It would also make for a weird configuration, since the btrfs tools now default to a 4K sector size on all systems (for compatibility), and weird configurations are less likely to be well tested, especially going forward. We also don't want to make running 4K kernels natively completely impossible.
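For reference, a 16K sector size would have to be requested explicitly at filesystem creation time, which is part of what makes it a nonstandard configuration (the device path is a placeholder):

```sh
# Explicitly request a 16K sector size instead of the mkfs.btrfs default of 4K
mkfs.btrfs --sectorsize 16k /dev/nvme0n1p5
```

Historically, btrfs could not mount a filesystem whose sector size exceeds the kernel's page size, which is the concern about natively running 4K kernels above.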