My machine has 64 GB of RAM and 4 GB of swap space. It is configured with swappiness=1 (`cat /proc/sys/vm/swappiness` prints 1).
Until a couple of months ago, swap usage remained at 0, since most of the time I never fill up all my RAM. However, after some system updates, it started using all available swap, even though my memory usage is around 30 GB. I cannot pinpoint exactly which update started this, but I think it was after my kernel updated to the 6.1 series, along with all system libraries and core desktop programs.
One thing I discovered is that this happens when I download torrents. Swap remains at 0 (or near-zero) until I open Transmission or KTorrent and start downloading, at which point swap usage skyrockets. This didn’t happen before. Do you know if something changed recently in the kernel or in those programs that could be causing this?
I’m currently using Fedora 37 Workstation (KDE Spin), with kernel 6.1.14-200.fc37.x86_64.
Plasma version: 5.27.2
KDE frameworks: 5.103.0
QT: 5.15.8
Graphics platform: X11
Processor: AMD Ryzen 9 5950X
Graphics card: AMD Radeon RX 6600
Default swappiness on my system, and I believe the default for most Fedora installs, is 60. That yours is at 1 seems really strange. The effect of swappiness is relevant here.

What is the output of `free` and `zramctl`?

The default swap for Fedora uses virtual swap (zram), 8 GB in RAM, which would show up in the `zramctl` output, so when you say you only have 4 GB of swap I wonder what is configured differently from the default.
Yes, I’m not using the default settings. I customized my install to put swappiness at 1, and deactivated zram. I’m using a plain old partition of 4 GB in my SSD as the swap partition for the system.
`zramctl` prints nothing, and here’s the output of `free -m`:

```
               total        used        free      shared  buff/cache   available
Mem:           64225       30625         662         666       32936       32227
Swap:           4095        3290         805
```
That does you a disservice.
Physical swap is much less responsive (time wise) than zram, and having only 4GB is also not enough as you can see. Zram uses compression and actually allows close to 16GB of data into the default 8GB of zram, though zram size can also be adjusted according to needs.
For some things swap is mandatory, even if the system has a considerable amount of RAM.
I think if you would consider adding back in the default zram config your problems would be resolved.
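If you do want the default back: on Fedora, zram swap is set up by zram-generator. A minimal sketch of the config (the size expression below is an example; check /usr/lib/systemd/zram-generator.conf for the exact defaults your release ships):

```ini
# /etc/systemd/zram-generator.conf -- example values, not necessarily
# identical to what your Fedora release ships by default
[zram0]
zram-size = min(ram / 2, 8192)
compression-algorithm = zstd
```

After writing the file, `sudo systemctl daemon-reload` followed by `sudo systemctl start systemd-zram-setup@zram0.service` should bring the device up, and `zramctl` should then show it.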
I think that is correctly explained here:
https://linuxhint.com/understanding_vm_swappiness/
I’m pretty sure there is nothing really wrong with that swap usage, especially since 1 is a very low value for that parameter. 10 would be better, and even higher isn’t bad.
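If you do bump it, a persistent way to set it is a sysctl drop-in (the filename below is arbitrary; apply it immediately with `sudo sysctl --system` or a reboot):

```
# /etc/sysctl.d/99-swappiness.conf
vm.swappiness = 10
```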
You are effectively managing the tradeoff between cache space (which ought to reduce disk accesses) and the anonymous memory of very stale processes: processes that have been idle far longer than the period over which your computer used and reused cache space, likely totaling more than your entire 64 GB.
I expect the reason you didn’t use swap before, is you never had use for 32GB of file cache before. Alternately, maybe you never had processes idle that long before. But I doubt it.
When/if those idle processes finally start up again, that will be a little slower than it would have been if their anonymous memory were never swapped. But meanwhile various file I/O ran a tiny bit faster because of the extra caching. Neither matters much for just 4GB. But in terms of total wait time, it is very unlikely that the swapping increased total wait time.
How responsive it is would matter if you were thrashing anonymous memory: if you had so much anonymous memory across the total of active processes that you are frequently kicking out pages of anonymous memory to read in other pages.
The output of `free -m` in this case makes it overwhelmingly clear that is not happening. There might not yet have been any reads from swap. Certainly, there haven’t been enough that the speed would matter. Many processes go idle and stay that way until killed at system shutdown. Getting written out once to physical swap and left there indefinitely is far more efficient than getting compressed to half size and then being left in RAM indefinitely.
How could that claim make any sense? Nothing was mentioned about any sluggishness caused by swapping (because there obviously isn’t any). The only complaint was about the use of swap space. With zram, the same 3.2 GB of swap space would be used, but instead of freeing 3.2 GB for additional caching, that swap usage would only free 1.6 GB for additional caching.
I doubt you could feel the performance difference across any of these decisions (in this system). The 1.6 GB or 3.2 GB of extra cache makes some operations faster, but in such tiny increments spread over many tens of GB of file I/O that you wouldn’t feel it. It would take extremely careful, controlled measurement to detect it. Any performance cost in the one-time swap-out of the stale pages is too tiny for any possibility of measurement. The one-time read back likely never even happens. But to the extent that there is a difference, a slightly higher swappiness (10 for a typical SSD) and a real physical swap file is best.
It is a harder (and potentially different) decision if the programs you are running use anonymous memory that is a major fraction of (or even more than) total physical ram. But in this case, the fact that 4GB of swap space didn’t get filled proves that total anonymous memory use is a small fraction of total ram (so no swap configuration would be dramatically wrong).
… and it’s not supposed to happen, especially with swappiness=1
I just tested your scenario on a notebook with 12 GB RAM. Added three large torrents (Fedora ISOs with a total download size of 15 GB). My system still has the default swappiness=60. Swap usage is 0, before and after starting the torrent download.
`free -h`:

```
               total        used        free      shared  buff/cache   available
Mem:            11Gi       3,5Gi       2,5Gi       861Mi       5,6Gi       6,9Gi
Swap:          8,0Gi          0B       8,0Gi
```
EDIT: some 50% into the download, I get (`free -m`):

```
               total        used        free      shared  buff/cache   available
Mem:           11859        3453         173         911        8232        7177
Swap:           8191           9        8182
```
Still, very insignificant Swap usage. Swap usage didn’t exceed the 9MB seen above all the way through downloading 15GiB using Transmission.
Is there a performance problem? Are you noticing latency swapping stuff back in?
Of course, it’s also perfectly fine to simply wonder for the sake of knowledge, but pragmatically, if it’s not causing a problem, it’s probably good — now you’ve got free RAM to use for something else.
If you are still working with a traditional swap partition on an SSD instead of zram, you have more R/W traffic to your SSD (not sure if that matters to you).
It is important to understand swap space that is occupied long term vs. actively read and written. In the cases discussed in this thread, it is very likely each used page of swap space was written once, not yet read once and never rewritten (since the last boot). That is a trivial write load on the SSD (compared to all the files that temporarily exist during ordinary use of a computer).
In some other system with massive use of anonymous memory, the effect of swap on the life of the SSD might be significant, but in that case the fact that zram only frees half the space is very significant, so accepting the slightly shortened life of the SSD is likely the right answer (if you expect computer parts to continue improving price performance over that period of time).
In a system with more ram than it really needs (your system and the original one discussed) a very large amount of file I/O is necessary to trigger some swap space usage, but not sufficient. You also need some processes that have been idle for a long time and use significant anonymous memory. Likely you don’t have as much anonymous memory used by your long term idle processes, or maybe you tested too soon after the last reboot so idle processes had not been idle long enough.
I’m not having issues; it’s just that the change of behavior surprised me. I explicitly configured my system to swap as little as possible, and the fact that it is using all available swap, even with a lot of RAM still available, made me think something was wrong, or a program was misbehaving.
I’ve never heard before about anonymous memory. I’ll investigate a bit about it. For context, I do have some long-running processes that use a lot of memory: my browser Firefox, and its dozens of processes (I keep the browser open for days, with over 70 tabs open), my IDEs (IntelliJ, Pycharm, which are kept open for a day or two and then I usually close them when I finish the task I’m working on). However, even with all that open, I use around 30 GB of ram. Without downloading torrents, swap space used barely goes beyond a few megabytes. But as soon as I start using those programs, swap fills up quickly. Interestingly, if I open ktorrent or transmission, and leave them idle, this doesn’t happen. It only happens when I start downloading.
Do you guys know how to debug what’s in swap, or which process it belongs to? That could help see if maybe it’s the torrent apps that are using it. Or maybe there is a way to log what is causing swapping in the kernel?
I’ll try disabling swap for a while and downloading torrents to see how the system reacts… maybe main memory will fill up this time and that can lead to something…
One way is with the `top` program. If you haven’t reconfigured it to include swap by default, after starting `top` you can press `f`, arrow down to the word SWAP, then press `d`, `s`, `q`.
Then you will see which processes are using swap and how much. That should be the anonymous memory of processes that have been idle for a long time.
But with some programs, maybe a browser (I’m not sure), there can be long term idle contents within a process that are swapped even when the process itself is not idle. I hope that makes sense.
When those processes (or idle content within a process) become active again, their swap usage will make their transition to active take a fraction of a second longer than it would if swap were disabled. Decide for yourself whether that matters. On more memory limited systems with some processes using vast amounts of anonymous memory, reactivating an idle process can take many seconds. But I don’t think you will run into that.
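Besides `top`, the same per-process numbers can be read straight from the `VmSwap` field in `/proc/<pid>/status`. A small sketch (Linux-only; `VmSwap` appears only for processes that actually have swapped pages, on kernels that expose the field):

```python
#!/usr/bin/env python3
"""List processes by swap usage, read from /proc/<pid>/status (Linux-only)."""
import glob
import re

def vmswap_kb(status_text):
    """Return the VmSwap value in kB from a /proc/<pid>/status dump, or 0."""
    m = re.search(r"^VmSwap:\s+(\d+)\s+kB", status_text, re.MULTILINE)
    return int(m.group(1)) if m else 0

def swap_by_process():
    """Return (kB, name, pid) tuples for swapping processes, largest first."""
    usage = []
    for path in glob.glob("/proc/[0-9]*/status"):
        try:
            with open(path) as f:
                text = f.read()
        except OSError:
            continue  # process exited (or is unreadable) while scanning
        kb = vmswap_kb(text)
        if kb:
            name = re.search(r"^Name:\s+(\S+)", text, re.MULTILINE).group(1)
            usage.append((kb, name, path.split("/")[2]))
    return sorted(usage, reverse=True)

if __name__ == "__main__":
    for kb, name, pid in swap_by_process()[:15]:
        print(f"{kb:>10} kB  {name} (pid {pid})")
```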
That means instead of having a little under 33GB of cache, you would have a little over 29GB (assuming the same set of programs running/idle as what you showed earlier). The difference in performance from that tiny change in cache should not be anything you could feel as a user, probably not even anything you could measure with a carefully controlled comparison.
Three major categories of memory use that compete with each other for physical ram are mapped memory, anonymous memory and cache.
The mapped memory includes the executable binaries of all the processes that are running. It might or might not include large amounts of other stuff, depending on what you are running.
The anonymous memory includes the stack and heap (or similar things for some programming languages that don’t structure it that way) of all your running processes.
The cache is recently read or written chunks of files.
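All three buckets (plus swap totals) are visible in `/proc/meminfo`; roughly, `AnonPages` is the anonymous memory, `Mapped` the file-backed mappings, and `Cached` the page cache. A quick sketch for pulling them out:

```python
#!/usr/bin/env python3
"""Print the memory buckets discussed above from /proc/meminfo (Linux-only)."""

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a {field: value-in-kB} dict."""
    out = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            out[key] = int(parts[0])
    return out

if __name__ == "__main__":
    with open("/proc/meminfo") as f:
        info = parse_meminfo(f.read())
    for key in ("AnonPages", "Mapped", "Cached", "SwapTotal", "SwapFree"):
        print(f"{key:>10}: {info.get(key, 0)} kB")
```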
When something must be bumped to make room for something else, the central concept is “least recently used”. Whatever hasn’t been used for the longest time is expected to be least likely to be needed again soon. But you don’t want those three types to compete equally. It is intentionally skewed (with exact rules that seem to be documented nowhere) such that cache is more likely to be bumped than mapped which is more likely to be bumped than anonymous, while a big difference in LRU can make some anonymous get bumped while still holding cache. The intent is: if the oldest anonymous is a little older than the oldest cache, bump cache. But if the oldest anonymous is a lot older than the oldest cache, bump anonymous.
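To make that skew concrete, here is a toy model (not the kernel’s actual reclaim algorithm; the bias numbers are invented for illustration): each memory type gets a protection bonus added to its recency, and whatever has the lowest combined score gets bumped.

```python
from dataclasses import dataclass

# Invented bias values, purely for illustration: a larger bias means the
# type is more protected from eviction.
BIAS = {"cache": 0, "mapped": 100, "anonymous": 200}

@dataclass
class Page:
    kind: str       # "cache", "mapped", or "anonymous"
    last_used: int  # tick of most recent access; higher = more recent

def pick_victim(pages):
    """Bump the page whose (recency + type bias) score is lowest."""
    return min(pages, key=lambda p: p.last_used + BIAS[p.kind])

# Anonymous a little older than cache: cache still gets bumped (90 < 50+200).
print(pick_victim([Page("cache", 90), Page("anonymous", 50)]).kind)    # cache
# Anonymous much older than cache: anonymous gets bumped (-200+200 < 90).
print(pick_victim([Page("cache", 90), Page("anonymous", -200)]).kind)  # anonymous
```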
When memory use is bumped, only anonymous needs swap space, cache and mapped can be bumped without taking any extra space in the fs, because they are already in the fs.
When you disable swap, all the anonymous memory must stay in physical RAM, so mapped memory and cache memory take the hit. You have so much more RAM than you really need that the hit should fall almost entirely on cache and not really change the amount of mapped memory that is bumped. If you had less excess RAM, then disabling swap would actually slow the recovery of previously idle processes, because it is slower for them to recover their mapped memory.