I’m having some memory management issues where I will quickly run out of memory, and when that happens the computer will kill the process with the highest memory usage (usually Visual Studio Code or the browser tab running Slack). Here are the applications that I’m usually running when it happens:
- A Win 10 x86 VM to which I’ve allotted 1 CPU and 1 GB of memory
- Chromium-based browser with a handful of tabs (including Slack)
- Visual Studio Code
- 4-5 docker containers
- A CRA react app
- A dotnet core project (run via the `dotnet` CLI)
When I’m watching the memory usage, this combo will eventually (after 5-10 minutes) drive the usage into the > 95% range, which results in something getting killed (potentially after a 5-minute slowdown). I can usually “fix” things by killing something before the system does and then reopening it. It buys me a few minutes, but it is not a sustainable or fun practice. Does anyone have any suggestions to improve the situation? (Aside from using lower memory consumption programs, please assume that is not an option.)
I am NOT seeing this issue with the same apps running in MacOS so somehow it is managing the memory usage better (or maybe there is some memory leak in the aarch64 version of one or more of these applications?)
BTW, I see the same issue using an Ubuntu VM within MacOS so I don’t think this is an Asahi specific issue but I thought this might be an OK place to discuss it all the same
On an 8GB machine? You’ll probably need a swap file. MacOS uses swap by default and the SSD is pretty fast, so that masks the admittedly anemic memory the base models have.
See: If I want to add a swapfile on Fedora Workstation with btrfs, where should I put it?
Thanks Hector. I have 16GB of memory. I did have some swap (8GB), but maybe I set it up wrong. I’ve followed the guide you shared to add 32GB of swap; hopefully that will do the trick!
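For anyone else following the same guide: on btrfs, the swapfile needs copy-on-write disabled before it’s populated, which is the filesystem-specific part. Roughly, the steps look like this (the path and size here match what I used; newer btrfs-progs also has a `btrfs filesystem mkswapfile` command that does most of this in one step):

```
# Create an empty file and disable CoW on it while it's still empty
# (btrfs cannot swap on a copy-on-write file).
sudo mkdir -p /var/swap
sudo touch /var/swap/swapfile1
sudo chattr +C /var/swap/swapfile1

# Allocate 32G, lock down permissions, then format and enable it.
sudo fallocate -l 32G /var/swap/swapfile1
sudo chmod 600 /var/swap/swapfile1
sudo mkswap /var/swap/swapfile1
sudo swapon /var/swap/swapfile1

# Verify it's active:
swapon --show
```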
Note that Fedora by default enables 8GB of zram swap. This is not normal swap; it’s compressed memory, so it fights with your regular memory usage. It means up to 8GB of RAM may be compressed in place, not outright swapped out. It improves the situation (macOS does memory compression too), but it can’t do magic; you need real swap if you’re pushing memory usage too much.
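You can see the zram device alongside any disk swap (and their priorities) with a read-only check:

```
# List active swap areas with size, usage, and priority.
# The zram device normally has a higher priority than disk swap,
# so it fills up before anything goes to disk.
swapon --show
```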
If you do find your machine becomes unusable over time please try to identify apps that use more memory over time. There could be a memory leak.
Thank you, that explains the 8 GB I had already. Adding to the swap seems to be doing the trick. I’ve been punishing my system with everything I could think to throw at it and it seems to be settled in at about 90% memory usage, 20% swap usage and it is not increasing over time so I don’t think there’s a leak. Thank you so much for your help!
EDIT: I added this entry to my /etc/fstab to mount the swap at boot:

```
/var/swap/swapfile1 none swap defaults,nofail 0 0
```
I was actually talking with the other devs about adding 8GB of real swap for 8G/16G machines, and that’s a good data point to have. If with 32G of swap you’re at 20% usage, that’s a bit under 8G, so that sounds like a good default, considering your use case is fairly demanding. (Adding too much swap can be a waste of disk space for people who don’t need it, so we don’t want the default to be too high.)
Well, that percentage actually includes the 8GB built-in, so it’s closer to 9 GB used.
```
              total        used        free      shared  buff/cache   available
Mem:           14Gi       7.9Gi       877Mi       4.4Gi       6.2Gi       1.5Gi
Swap:          39Gi       9.0Gi        31Gi
```
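As a quick sanity check on those numbers (plain shell arithmetic, nothing system-specific), 9Gi used out of 39Gi total works out to roughly 23%:

```shell
# Rough check of the free(1) output above: swap used as a
# percentage of total swap (integer math, rounds down).
swap_used_gib=9
swap_total_gib=39
echo "$(( swap_used_gib * 100 / swap_total_gib ))%"   # → 23%
```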
Then it’s probably less than that on disk; if the priorities are right, it should mostly be using the 8GB zram first and then the 32GB on disk.
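If you want to make that ordering explicit, swap priority can be set per entry in /etc/fstab with `pri=` (higher numbers are used first, and Fedora’s zram-generator defaults the zram device to priority 100). A hedged example, reusing the swapfile path from earlier in the thread:

```
# /etc/fstab — keep the disk swapfile at a lower priority than zram (100),
# so it only gets used once zram is full.
/var/swap/swapfile1 none swap defaults,nofail,pri=10 0 0
```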
Ahh yeah, that makes sense, thanks!
I’ve had memory issues too, and studied a little bit about zram settings and also zswap.
These are my (and one of my friend’s) findings:
- If you find that your on-disk swap is used often, you might get better results using zswap than zram
- Fedora defaults to a maximum of 8G of zram, but it’s not necessarily ideal. Note that this is the amount of uncompressed swap, which might get compressed to around 1/3 to 1/4 of its original size.
- You can change the zram compression algorithm to `zstd`, which should result in generally better compression while not affecting speed much. With `zstd`, the compression ratio is usually closer to 1/4 (although it certainly depends on the contents).
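To see the ratio you’re actually getting, `zramctl` reports the uncompressed data size next to its compressed footprint (the column names below are the standard util-linux ones):

```
# Compare uncompressed swapped data (DATA) with what it actually
# occupies in RAM (COMPR/TOTAL) to see the effective ratio.
zramctl --output NAME,ALGORITHM,DATA,COMPR,TOTAL
```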
For my use case, I found that I do not need on-disk swap much, so I didn’t go for zswap yet. However, I changed the zram algorithm to `zstd` and increased the uncompressed size of zram swap to RAM size (around 16G). I also still have a small on-disk swap file, as I’m still experimenting to see whether it is needed anymore.
On my system, 16G of zram swap would take around 4-5G of RAM when full, leaving around 10G for active use. That seems to be mostly working fine for me so far.
This is the configuration I put in /etc/systemd/zram-generator.conf (the file should be created if it doesn’t exist):

```
zram-size = max(ram, 12288)
```

The max() function is a no-op here; it’s just for experimenting with values/functions. You can just use `zram-size = ram`.
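For completeness: zram-generator expects its settings under a `[zram0]` section, and the algorithm change mentioned above has its own key. A sketch of what the whole file might look like with both settings (key names are from zram-generator’s documented options; sizes are in MiB):

```
[zram0]
# Uncompressed zram size; "ram" expands to total RAM in MiB.
zram-size = max(ram, 12288)
# Use zstd for a better compression ratio.
compression-algorithm = zstd
```

It should take effect after `sudo systemctl restart systemd-zram-setup@zram0.service` or a reboot.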
And hey, it might even be useful in some cases to set the size to an even higher value! I might try 24G of zram swap one day.