I’m setting up a system that will be used for quite a lot of heavy computation and data analysis. It has 64 GB of RAM, but the data we usually analyse is much larger than that, so we try to process it in chunks, etc.
For this type of system, where we expect to exhaust RAM frequently, what is the recommended swap setup?
Should one keep the default zram swap (and if so, what happens to it when we run out of RAM)? Or should one also add a disk-backed swap area at a lower priority, so that it gets used once the zram swap is full (does it work that way?)?
Hi, keeping zram and adding additional swap alongside it is possible, and the system will fall back to the physical-drive swap when needed. I can’t speak to the performance, though.
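Roughly, the setup could look like the sketch below. The `/swapfile` path, the 16 GiB size, and the priority value 10 are just examples; the idea is to give the disk swap a lower priority than the zram device (Fedora’s zram swap defaults to a high priority, which you can confirm with `swapon --show`), so the kernel fills zram first and only spills to disk afterwards:

```shell
# Create and enable a disk-backed swap file (requires root).
# On btrfs (the Fedora default), a plain fallocate'd file won't work as swap;
# newer btrfs-progs provide: btrfs filesystem mkswapfile --size 16g /swapfile
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile

# Activate it with a priority below the zram device's,
# so zram is exhausted before the disk swap is touched.
sudo swapon --priority 10 /swapfile

# Make it persistent across reboots.
echo '/swapfile none swap defaults,pri=10 0 0' | sudo tee -a /etc/fstab

# Verify both swap areas and their priorities.
swapon --show
```

The kernel always uses higher-priority swap areas first, so with this ordering zram absorbs the common case and the disk swap only acts as overflow.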
By the way, you may also want to check the max zram size in /usr/lib/systemd/zram-generator.conf and consider increasing it (please refer to zram-generator.conf.example).
I’m not sure why, but on a couple of Fedora Linux installs this was set to 8192 even though one of my systems only has 4 GB of RAM and the other 6 GB. People usually recommend half of physical RAM, but the Fedora Wiki says you can use all of it.
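To override the packaged defaults rather than editing the file under /usr/lib, you can drop a config into /etc. A minimal sketch (the `zstd` choice and the `ram` sizing are examples, not the shipped defaults; `zram-size` also accepts expressions like `min(ram / 2, 4096)`):

```
# /etc/systemd/zram-generator.conf — takes precedence over
# /usr/lib/systemd/zram-generator.conf
[zram0]
# Allow the zram swap device to grow up to the full RAM size.
zram-size = ram
compression-algorithm = zstd
```

After editing, `systemctl restart systemd-zram-setup@zram0.service` (or a reboot) applies the change, and `zramctl` shows the resulting device size and algorithm.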