I own a 5 TB WD external disk with SMR (Shingled Magnetic Recording):
I do a weekly backup (150 GB in total) via rsync with the options “-vauRHX --delete --progress --stats -h --numeric-ids”.
I initially formatted the disk with ext4, but during subsequent rsync runs I noticed decreased (write?) performance, so I reformatted it with btrfs.
With btrfs the decreased (write?) performance seems to be gone so far (still under investigation!).
I guess it has something to do with SMR (which seems to impact write performance) and the specific filesystem.
If so: why only with ext4?
I wonder what the technical explanation of my observation is.
Is it because - if I got it right - btrfs writes new/changed files to a new place and marks the old space as free/unused?
Is it safe to use btrfs as a backup filesystem with regard to filesystem damage? On ext4 I at least have e2fsck.
AFAIK the decrease in performance is not related to the file system in use. It is related to SMR and the amount of data actually written. Once the drive holds enough data that it has to start ‘shingling’ the data in multiple layers, the write load multiplies, since it has to read and rewrite each layer as it writes to the lower underlying layers.
The time involved depends on how much data has to be read and written back.
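The read-and-rewrite amplification described above can be pictured with a toy model (Python; the zone size is an invented illustration, real SMR firmware behavior varies by drive):

```python
import math

ZONE_MB = 256  # assumed size of one shingled zone (real sizes vary by drive)

def internal_write_mb(update_mb, zone_mb=ZONE_MB):
    """MB the drive must move internally for an in-place update.

    Overwriting shingled tracks forces the firmware to read and
    rewrite every zone the update touches, not just the changed bytes.
    """
    zones_touched = math.ceil(update_mb / zone_mb)
    return zones_touched * zone_mb

# A 5 MB in-place update can cost a full 256 MB zone rewrite:
print(internal_write_mb(5))    # 256
print(internal_write_mb(300))  # 512
```

So with shingled zones, small in-place updates can fan out into orders of magnitude more internal IO.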
I personally have chosen to spend the extra $ and purchase drives sold as enterprise or NAS drives. I always verify that the ones I purchase use the older and more consistent CMR tech.
Currently the backup data is 150 GB on a 5 TB disk.
The weekly backup delta is only about 5 GB.
Your theory should apply to btrfs too, but - and that was my main question - it happens only on ext4.
On ext4, rsync seems to take some noticeable breaks before continuing.
Currently, on btrfs, the backup data flows to the disk without interruption!
- Maybe I just haven’t hit the disk’s cache limit yet… -
But your theory backs my thesis that a changed file gets written in whole to a new place, so there is no impact here.
I noticed too late that it’s an SMR drive!
Theoretically a 5 TB drive should never need to begin shingling the data until it reaches about 1 TB of data. When and how that happens, however, is up to the drive firmware.
I suspect the “noticeable breaks” you mention regarding rsync on ext4 are a result of buffering. Reads and reported write speeds are quick until the buffers are full, then the rate drops to the actual write speed (on all media). If the total data written does not fill the buffer, the reported write speed is the speed at which data is written into the buffer, which is usually much faster than the speed at which it is actually written to the media, especially for HDDs.
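The buffer effect can be sketched with a toy model (all numbers invented, not measured; the 2000 MB/s “RAM speed” is an arbitrary stand-in):

```python
def apparent_speed(total_mb, buffer_mb, media_mb_s, ram_mb_s=2000.0):
    """Apparent MB/s the copy 'sees' when writes land in a buffer first.

    Writes that fit in the buffer complete at RAM-like speed; once
    the buffer is full, the remainder is throttled to media speed.
    """
    if total_mb <= buffer_mb:
        seconds = total_mb / ram_mb_s
    else:
        seconds = buffer_mb / ram_mb_s + (total_mb - buffer_mb) / media_mb_s
    return total_mb / seconds

# 100 MB fits in a 512 MB buffer: looks like RAM speed
print(round(apparent_speed(100, 512, 140)))   # 2000
# 5000 MB does not: the rate collapses toward the media's 140 MB/s
print(round(apparent_speed(5000, 512, 140)))  # 155
```

The cliff between the two cases is what shows up as a sudden stall in the terminal output.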
Maybe, but why not on btrfs?
I should mention that I also own another external disk (2.5″, 500 GB, CMR, formatted with ext4) for monthly backups.
No breaks with rsync there.
I see breaks only with ext4 on the 5 TB disk, but not with btrfs - that makes me wonder…
Breaks: sometimes ~10-20 s where the terminal output seems to stop, versus continuous scrolling.
The 5 TB disk shows no bad blocks or anything suspicious in the smartctl output.
Both disks: 5400 rpm and nearly the same write speed (with dd: 142 MB/s on the 5 TB disk, 134 MB/s on the 500 GB disk).
Unless you’re wanting to do a bunch of benchmarking on this in order to potentially contribute something to the ext4 kernel driver upstream, why not just use btrfs if it works better for you?
Quite clear - I already use (and am investigating) btrfs as the backup disk FS, but as said, “on ext4 I have e2fsck” in case the FS gets damaged, and to my (minor) knowledge btrfs has no such thing (?)
The main trigger to post here (ext4 or btrfs on backup disks?) was to get an explanation of WHY ext4 behaves as described.
In that case, without any other data, I’m inclined to +1 @computersavvy’s response here. While both put data on a disk, they do it in different ways. The TL;DR is that btrfs is a copy-on-write filesystem and ext4 isn’t. To make ext4 faster, especially on spinning disks, some data gets written to memory before going to disk (i.e., a buffer) so your whole system doesn’t slow down from IO for most regular write events. If that buffer gets full, the disk has a whole lot of writing to do at once to catch up. These write buffers aren’t a free lunch, but since they’re generally more helpful than not, they’re a good idea - until they fill up and you suddenly notice all the IO latency the memory buffers had been trying to relieve you of.
(async write buffers are reportedly going to eventually be an option for btrfs as well.)
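The copy-on-write difference can also be pictured with a toy model (Python; the zone size and file layout are invented): in-place updates scatter rewrites across many shingled zones, while CoW turns the same delta into one sequential stream.

```python
import math

ZONE_MB = 256  # assumed shingled-zone size, for illustration only

def zones_rewritten_in_place(file_offsets_mb, zone_mb=ZONE_MB):
    """ext4-style: each changed file is overwritten where it lives,
    so every distinct zone it sits in may need a read-modify-write."""
    return len({off // zone_mb for off in file_offsets_mb})

def zones_written_cow(n_files, file_mb, zone_mb=ZONE_MB):
    """btrfs-style: changed files are appended back-to-back into
    fresh space; the old blocks are simply marked free."""
    return math.ceil(n_files * file_mb / zone_mb)

# 100 changed 1 MB files scattered across the first 100 GB:
scattered = [i * 1000 for i in range(100)]  # hypothetical file offsets (MB)
print(zones_rewritten_in_place(scattered))  # 100 zones touched in place
print(zones_written_cow(100, 1))            # 1 zone of sequential writes
```

Under this (simplified) view, a small weekly delta hits many shingled zones on ext4 but only fresh sequential space on btrfs.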
You’re not the only one who has noticed this difference; there are also users on Reddit: https://www.reddit.com/r/btrfs/comments/lizj7o/use_btrfs_on_smr_drives/
Btrfs has supported “zoned” devices since kernel 5.12, but I don’t know whether that also covers SMR devices: https://btrfs.readthedocs.io/en/latest/Zoned-mode.html
I guess this is kernel related (or not?), so buffering writes to an external disk should be the same with both filesystems (or?).
As said, I also have a 500 GB disk with CMR and ext4.
Running a second rsync on it, the backup data flows continuously without interruptions or stalls.
The 5 TB disk with SMR and ext4 behaves differently:
Running a second rsync on it, the backup data sometimes stalls for up to 20 s before continuing. (It feels like the box is dead, but there is no dmesg output.)
The same disk with btrfs: no stalls.
The initial rsync onto an empty disk - whether ext4 or btrfs, and even on the 5 TB one - likewise shows no stalls/hiccups/breaks.
No, as I said at the bottom of my post, async buffered writes for btrfs are an upcoming kernel feature. Also, regardless, how ext4 and btrfs actually do the writing is very different. Yes, they’re both writing something to a disk, but in very different ways. It’s just that in this case, the behavior you are seeing is consistent with what happens when a memory write buffer fills up while writing to an ext4 filesystem.
But why do I not see breaks with the smaller 500 GB (ext4) disk, or when I do the initial rsync onto an empty disk, where I also write some ~20 GB files within a full ~150 GB backup set?
There are a variety of interacting factors.
Drive write speed
Drive buffer size
System buffer size (cache)
Data read speed
Other things the system may be doing, especially drive IO
All of these interact when doing IO to a drive. Sometimes the system is waiting for IO, and at other times it may perform other tasks before returning to IO. It may even do IO for another app in the middle of the IO for the one you are watching.
Yes and no!
Most parameters stay nearly the same (system buffer, RAM, swap).
No apps are running, apart from a terminal on a desktop (rsync via script).
Disk write speeds are nearly the same (with dd: 134 MB/s for the small disk, 142 MB/s for the bigger one).
hdparm reports “cache size: unknown” for the bigger disk; the smaller disk has 16 MB.
I would expect the bigger disk to have a bigger cache (I guess 128 MB), and - in case I’m right - that wouldn’t explain the breaks, because a bigger cache should, to some degree, do the opposite of causing breaks.
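A quick back-of-the-envelope check supports this: even assuming a 128 MB drive cache (a guess, since hdparm reports “unknown”), flushing it cannot account for 10-20 s stalls, while gigabytes of internal SMR shuffling could.

```python
# All figures are from this thread or guessed; purely illustrative.
media_mb_s = 142      # dd write speed of the 5 TB disk
drive_cache_mb = 128  # guessed cache size ("unknown" in hdparm)

# Draining a full drive cache at media speed:
drain_s = drive_cache_mb / media_mb_s
print(f"{drain_s:.1f} s")  # 0.9 s - far short of a 10-20 s stall

# A 10-20 s stall at 142 MB/s instead corresponds to roughly:
print(f"{10 * media_mb_s} - {20 * media_mb_s} MB")  # 1420 - 2840 MB
# i.e. gigabytes of internal data movement - plausible for SMR
# zone read-modify-write, not for a small cache flush.
```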
Anyway - with all your comments, thanks! - it is still technically unclear to me what causes the breaks.
I guess I’ll see breaks on btrfs after some time too, because of fragmentation.
I guess btrfs’s COW writes files changed on my PC to a new place on the external disk during rsync and marks the space of the unchanged version as free (=> TRIM).
I would say it is due to the different levels of native support for zoned storage devices (such as SMR disks) in btrfs and ext4.
If the time comes, please also try ext4 on top of dm-zoned to see whether the performance is better than plain ext4, as described in the above link.