I’m trying to copy data from one disk to another on Fedora 37 Silverblue. Any attempt (be it
rsync, Nautilus or
cp) ends up being killed by systemd-oomd. It’s the first time in my life I’m having such a problem.
I have the default RAM and systemd-oomd configuration, which means swap on zram. I'm using an SSD, so I'm not keen on a swap file or partition.
The thing I don’t understand is why copying data eats up so much memory. systemd-oomd reports over 80% RAM usage for the scope owning the copying process, which seems ridiculous on a machine with more than 30 GB of RAM.
Is it because of a too-high dirty_ratio combined with swap on zram, or because of too-aggressive systemd-oomd settings?
Is find <path> -mindepth 1 -maxdepth 1 -exec (...) the only sensible way to optimize the copy process, or is lowering dirty_ratio, dirty_background_ratio and the systemd-oomd aggressiveness the way to go?
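If the writeback cache turns out to be the culprit, the dirty thresholds can be inspected and lowered without touching oomd at all. A minimal sketch; the values shown are illustrative, not a recommendation:

```shell
# Show the current writeback thresholds (percent of RAM that may hold dirty
# pages before background flushing / blocking writeback kicks in):
cat /proc/sys/vm/dirty_background_ratio
cat /proc/sys/vm/dirty_ratio

# Lower them temporarily (root required); illustrative values:
#   sudo sysctl vm.dirty_background_ratio=5 vm.dirty_ratio=10
# To persist across reboots, put the same keys in a file under /etc/sysctl.d/
# (e.g. /etc/sysctl.d/90-dirty.conf) and run: sudo sysctl --system
```

Lower thresholds make the kernel flush dirty pages sooner, so a big copy holds less unwritten data in RAM at any one time.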
Could I perhaps utilize
systemd-run for this particular use case?
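systemd-run can indeed wrap the copy in a transient scope with its own resource properties. A sketch, assuming the paths are placeholders and that your systemd version honors ManagedOOMPreference= for user units (support varies by version; both property names are documented in systemd.resource-control(5)):

```shell
# Run cp in its own transient scope:
#  - ManagedOOMPreference=omit asks systemd-oomd to skip this unit entirely
#  - MemoryHigh= throttles the scope before its page cache can balloon
systemd-run --user --scope \
  -p ManagedOOMPreference=omit \
  -p MemoryHigh=4G \
  cp -a /path/to/source /path/to/destination
```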
Did you ever get anywhere with this? I am seeing exactly the same issue on a fresh install of Sericea 38. I just installed, ran the update and rebooted. I then opened two terminals: in the first I ran top, in the second a cp -a on a large directory of about 70 GB of photos.
Like you, my cp process got killed. Interestingly, top was showing 29 GB of available memory (yes, GB; this is a 32 GB machine), though also about the same amount in buff/cache. So it seems that oomd is looking at the wrong measure of memory, and is killing things when the RAM is filled with cache rather than actual process memory?
Yeah, it looks like the filesystem cache is being taken into account by systemd-oomd.
I don’t remember exactly now, but I think that in the end all I had to do was
systemctl stop systemd-oomd.service systemd-oomd.socket, so you might try the same thing: stop the services and do the copy.
If it doesn’t help, please let me know and I’ll try to dig deeper into that unfortunate episode.
Systemd-oomd is also worth keeping an eye on, because some people reported that it can kill btrfs operations, which is totally not cool, if true. 
Thanks - I have gone for stopping oomd and also masking it with
systemctl mask systemd-oomd to ensure it does not come back. I have used Linux on this machine for almost 5 years and never went over 50% RAM used, so this should do for now.
I will try to read up more on oomd. I did see a comment somewhere about a fix regarding it not looking at the correct memory measurement; that fix made it into Ubuntu 22.04, so I assume it is included by now.
Just for future reference, the fix I saw about making oomd use MemAvailable instead of MemFree is oomd: calculate 'used' memory with MemAvailable instead of MemFree by enr0n · Pull Request #22965 · systemd/systemd · GitHub. However, I believe this was included in systemd v251, while my system now has systemd v253.
There is a comment in that GitHub PR noting that the cgroup memory usage is read from memory.current, so it does include the page cache. That would explain why I saw such high memory usage in the journalctl logs (excerpt below), but I still do not understand why oomd kicked in when top was reporting 29G available memory and only two shells were open, one running top and the other a cp -a.
I will stick to having oomd disabled. Have been running virtual machines and doing my usual computer work with no memory issues since.
```
Memory Pressure Limit: 0.00%
Pressure: Avg10: 97.39 Avg60: 46.29 Avg300: 12.05 Total: 39s
Current Memory Usage: 27.9G
Memory Min: 0B
Memory Low: 0B
Last Pgscan: 14157696
app-foot-1688.scope: systemd-oomd killed 4 process(es) in this unit.
app-foot-1688.scope: Failed with result 'oom-kill'.
```
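To see for yourself how much of a scope's memory.current is just page cache, the cgroup v2 files can be read directly. A sketch for whatever cgroup your current shell happens to run in (assumes the unified cgroup v2 hierarchy, the Fedora default):

```shell
# Resolve this shell's cgroup from /proc (format: hierarchy:controllers:path)
CG="/sys/fs/cgroup$(cut -d: -f3 /proc/self/cgroup)"

# What oomd-style accounting sees: total charged memory, page cache included
cat "$CG/memory.current"

# Break it down: 'anon' is real process memory, 'file' is reclaimable page cache
grep -E '^(anon|file) ' "$CG/memory.stat"
```

During a large copy, the file line typically dwarfs the anon line, which matches the huge Current Memory Usage in the log above.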
It bit me again some time ago when I was copying lots of files. Just like you, I did:
```
systemctl disable systemd-oomd.service
systemctl mask systemd-oomd.service systemd-oomd.socket
```
and that’s it. Enough is enough!
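For anyone who would rather keep oomd around but make it less trigger-happy, its global thresholds live in /etc/systemd/oomd.conf (see oomd.conf(5)). A sketch with illustrative values, not a recommendation:

```ini
# /etc/systemd/oomd.conf - illustrative values only
[OOM]
# Require sustained memory pressure above 80% ...
DefaultMemoryPressureLimit=80%
# ... for a full minute before killing anything
DefaultMemoryPressureDurationSec=60s
```

After editing (and unmasking, in this thread's case), systemctl restart systemd-oomd applies the new limits.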