How to create a swap file with dynamic/unbounded size?

Query

I’d like to know whether it’s possible to create a swap file that isn’t confined to a specific size - rather, one which can dynamically grow to fill a drive’s free space, and shrink to 0 B as necessary.

Rationale

If you’re interested in my rationale, 493321 – At boot, the browser integrator crashed so many times that it crashed Fedora twice. serves as the most recent incident to make me ask.

I’m thankful for any responses, but if you happen to think this question has fallen prey to the X/Y problem, please don’t be dissuaded from actually answering the posed query too - it’s interesting to me, irrespective of what rationale I provide.

Where there is a will there might be a way. :slightly_smiling_face: However, last I checked, the space allocated to a swap file must be on contiguous sectors. Your swap file cannot be fragmented.

Edit: You’d probably have to incrementally allocate swap files with mkswap and swapon as space is needed and incrementally deallocate swap files with swapoff and rm as you are able.
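A minimal sketch of that incremental approach, assuming fixed-size chunk files (the helper names, path, and sizes are illustrative, and root is required to actually run the swap commands):

```shell
#!/bin/sh
# Sketch only: incrementally allocate and deallocate fixed-size swap files
# as memory pressure demands, per the suggestion above.

add_swapfile() {                      # add_swapfile <path> <size-MiB>
    # Use dd (not fallocate/truncate) so the blocks are really written out;
    # swapon refuses files with holes.
    dd if=/dev/zero of="$1" bs=1M count="$2" status=none
    chmod 600 "$1"                    # swapon warns about looser permissions
    mkswap "$1"
    swapon "$1"
}

remove_swapfile() {                   # remove_swapfile <path>
    swapoff "$1" && rm -f "$1"
}

# Example (as root): add 4 GiB when needed, drop it again later.
# add_swapfile /var/swap-extra.img 4096
# remove_swapfile /var/swap-extra.img
```

`swapon --show` can confirm which files are active at any point.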

1 Like

As noted by @glb the swap space must be contiguous.
This is not like Windows, which can (or at least could) have a swap file that grows as needed and becomes fragmented.

Either allocate a swap space that is large enough to work and leave it alone, or as noted create new swap files and add or remove them at will.

I had one thing I was doing and even though I have 32G of RAM with 8G of zram swap, I was forced to allocate an additional 256G of swap space so that particular job was able to complete without being killed by the oomd process. Normally 8G of swap is more than adequate for what I do.

1 Like

@computersavvy, do you know why? I see https://bbs.archlinux.org/viewtopic.php?pid=2010501#p2010501 about this, but I dare say that it’s not particularly illuminating to me. That’s a surprising deficiency.

@glb, I imagine you mention that because you envision a “dynamic” swap to be comprised of multiple separate files?

No, I say that because I was envisioning someone trying to use a sparse file or something similar (maybe something where the dynamism was implemented with fuse). I think it should work if you do it using multiple files. I never tried creating a fragmented swap file to see what would happen, but decades ago, I used to create them and I remember reading that, due to some code deep in the kernel, the swap file had to be one continuous file. I used to use dd to create the files and I had to read from /dev/zero and write out the whole file. I couldn’t use any “shortcuts” that just reserved the space in the filesystem.
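To illustrate the “no shortcuts” point above, here’s a hedged comparison of a sparse file against a fully written one (paths are illustrative; the swapon line is left commented because it needs root and would be rejected for the sparse file):

```shell
# A sparse "shortcut" file reserves the size without allocating blocks;
# a dd-written file of the same apparent size actually occupies the space.
truncate -s 64M /tmp/sparse.img       # sparse: no blocks allocated yet
dd if=/dev/zero of=/tmp/full.img bs=1M count=64 status=none

du -h --apparent-size /tmp/sparse.img /tmp/full.img   # same apparent size
du -h /tmp/sparse.img /tmp/full.img                   # very different disk usage

# As root, swapon rejects the sparse file, reporting that it has holes:
# mkswap /tmp/sparse.img && swapon /tmp/sparse.img

rm -f /tmp/sparse.img /tmp/full.img
```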

1 Like

Windows and Linux are different animals and do not do the same in the kernel area. The details are unknown to me, but I do remember dynamic (and often fragmented) swap files in Windows up through the early 2000s. I have not used Windows regularly for over 20 years now, though.

From the bug report I see you have 32GiB memory.
It’s a big surprise that you run out of memory.
Fedora has moved to use zram swap and no disk swap.
This is because disk swap is rarely a fix on modern desktops.
You could tune zram swap to use more memory.

I think you were getting the OOM killer reacting to memory running out.
What I would be interested in seeing is what the OOM killer reported about the state of the system just before it started killing big processes.
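On Fedora, the zram device mentioned above is configured through systemd’s zram-generator; a hedged sketch of enlarging it (the size expression and algorithm are illustrative examples, not recommendations):

```ini
# /etc/systemd/zram-generator.conf — example values only.
[zram0]
# Size expression for the zram device, in MiB; "ram" is the machine's RAM.
# Fedora's shipped default caps this well below total RAM.
zram-size = min(ram, 16384)
# Compression trades CPU for effective capacity; zstd is a common choice.
compression-algorithm = zstd
```

Rebooting (or restarting `systemd-zram-setup@zram0.service`) should apply the change; `zramctl` and `swapon --show` confirm the result.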

1 Like

The fix would be a patch to the software that fills up your RAM, right? Adding more RAM or swap seems a questionable workaround.
Any amount of RAM or swap will be filled - it’s just a matter of time… I guess you don’t want to end up with 1TB of swap files on your disk.

The bug you are mentioning seems to be fixed, right? 488653 – plasma-browser-integration-host crashes in Firefox 127.0 after upgrade to Plasma 6.1

1 Like

@barryascott, likewise. Unfortunately, because Revision - Super User is as yet unanswered, I can’t provide a satisfactory diagnosis for how it occurred.

zram - ArchWiki appears to demonstrate that ZRAM swap exists to increase the speed at which frequently utilized information can be accessed - a form of secondary caching which utilizes currently otherwise unutilized RAM - rather than a method of increasing the amount of available logical RAM exposed to the OS. Consequently, your proposal would merely cause my SDRAM to be consumed more quickly. Surely that would be antithetical? Is this incorrect?

How do you propose I access this information? (Does journalctl store it?)


@augenauf, 1 TiB isn’t entirely problematic if it prevents the system crashing. I currently have about 1.5 TiB spare, and at least 2 TiB available to purchase if need be. I would be seriously surprised if DrKonqi or ABRT could fill that.

Excepting the maliciously designed, there shall always be badly designed software. If I mitigate its effects by temporarily sacrificing some storage, that’s acceptable to me. This latest example is not my first encounter with this issue, despite having a damn 32 GiB of RAM!


@computersavvy, indeed, but this is my first time hearing of something that Linux cannot do which the NT kernel can. I expect that most people, like myself, generally think rather lowly of NT (I’ve had more BSODs due to malformed kernel-space drivers than I like to think of).

Ankur has recommended psi-notify for similar problems: Is there a way to be notified of oomd getting ready to kill things before it does it? - #2 by ankursinha

3 Likes

The reasoning works like this.

When the system is under memory pressure it will do a number of things to make RAM available. It can drop cached data, and drop pages that map to code that can be paged back in.
As a last resort it will page out data to a swap file (a bad historic name - it’s really a page file).
But because disk-based paging, even to an SSD, is very slow, the memory pressure may not be relieved fast enough.

Using zram is very fast and prevents a system from being unresponsive.

But at the end of the day, if your workload is much bigger than the RAM available to run it, your system will get into trouble.

There was a blog from a researcher that went into more detail than I remember, but I cannot track down the reference at the moment.

But your case is a memory leak and nothing you can do will prevent the system getting into a mess. The leak will need fixing.

Yes you should find the info in the journal. Look for OOM.
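As a hedged shortcut for that search, journalctl’s built-in pattern match can pull those lines out directly (this assumes a journalctl built with pattern-matching support, as Fedora’s is; `-b -1` selects the previous boot):

```shell
# Search the previous boot's journal for OOM-related messages.
# An all-lowercase pattern makes --grep case-insensitive, so this also
# matches "OOM" and "Out Of Memory". "|| true" keeps the exit status
# clean even when nothing matches.
journalctl -b -1 --grep 'oom|out of memory' --no-pager || true
```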

1 Like

@barryascott, this Reddit comment certainly corroborates that it’s slower, but the exact speed difference doesn’t appear substantial enough for me to understand how you concluded that the system wouldn’t be able to move data into the pagefile quickly enough to prevent itself hanging.

To explicitly confirm: not even a pagefile on a (PCIe Gen. 4) M.2 SSD would even be able to alleviate this? I ask because it seemed to work on Windows on my Dell Optiplex 3010 with an HDD and only 4 GiB of DDR3 SDRAM, back when I used Windows 10 with a pagefile. I can’t think of why else it would be offered as a feature.

@barryascott, there are certainly records about it:

PS /home/RokeJulianLockhart> journalctl -b -2 | Select-String -AllMatches -SimpleMatch -Pattern 'OOM' | Select-Object -Property @('LineNumber', 'Line')          

LineNumber Line
---------- ----
      1979 Sep 19 14:06:02 sayw4i systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
      2375 Sep 19 14:06:03 sayw4i systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
      2384 Sep 19 14:06:03 sayw4i systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
      2385 Sep 19 14:06:03 sayw4i audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
     21544 Sep 19 16:00:28 sayw4i systemd[1]: Stopping systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
     21553 Sep 19 16:00:28 sayw4i systemd[1]: systemd-oomd.service: Deactivated successfully.
     21554 Sep 19 16:00:28 sayw4i systemd[1]: Stopped systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
     21555 Sep 19 16:00:28 sayw4i systemd[1]: systemd-oomd.service: Consumed 3.254s CPU time, 4.4M memory peak, 0B memory swap peak.
     21556 Sep 19 16:00:28 sayw4i audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
     21671 Sep 19 16:00:30 sayw4i systemd[1]: systemd-oomd.socket: Deactivated successfully.
     21672 Sep 19 16:00:30 sayw4i systemd[1]: Closed systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.

PS /home/RokeJulianLockhart>

Though, I’ve had to remove all duplicates for brevity:

PS /home/RokeJulianLockhart> . $args[0] journalctl -b -2 | Select-String -AllMatches -SimpleMatch -Pattern 'Crash' | 
    Select-Object -Property @('LineNumber', 'Line') | 
    Group-Object -Property Line | 
    ForEach-Object { $_.Group | Select-Object -First 1 }
    # https://poe.com/s/4xVRaxYhE1iNVMpgtS9E

LineNumber Line
---------- ----
      4514                                                #2  0x00007f2899bfa7b2 _ZN6KCrash19defaultCrashHandlerEi (libKF6Crash.so.6 + 0x77b2)
      4501                                                Module libKF6Crash.so.6 from rpm kf6-kcrash-6.5.0-1.fc40.x86_64
      2783 Sep 19 14:06:04 sayw4i systemd[1]: abrt-vmcore.service - ABRT kernel panic detection was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/var/crash).
      3067 Sep 19 14:06:06 sayw4i abrt-server[1501]: Lock file '.lock' was locked by process 1511, but it crashed?
      3086 Sep 19 14:06:07 sayw4i abrt-server[1520]: Lock file '.lock' was locked by process 1530, but it crashed?
      3239 Sep 19 14:06:11 sayw4i systemd[1638]: drkonqi-sentry-postman.timer - Submitting pending crash events was skipped because of an unmet condition check (ConditionPathExistsGlob=/home/RokeJulianLockhart/.cache/drkonqi/sentry-envelopes/*).
      3249 Sep 19 14:06:11 sayw4i systemd[1638]: Listening on drkonqi-coredump-launcher.socket - Socket to launch DrKonqi for a systemd-coredump crash.
      3259 Sep 19 14:06:11 sayw4i systemd[1638]: Started drkonqi-coredump-cleanup.service - Cleanup lingering KCrash metadata.
      3237 Sep 19 14:06:11 sayw4i systemd[1638]: Started drkonqi-coredump-cleanup.timer - Cleanup lingering KCrash metadata.
      3235 Sep 19 14:06:11 sayw4i systemd[1638]: Started drkonqi-sentry-postman.path - Submitting pending crash events (file monitor).
      3362 Sep 19 14:06:14 sayw4i systemd[1638]: Started drkonqi-coredump-pickup.service - Consume pending crashes using DrKonqi.
      4182 Sep 19 14:06:55 sayw4i steam[3957]: assert_20240919140653_30.dmp[429]: file ''/tmp/dumps/assert_20240919140653_30.dmp'', upload yes: ''CrashID=bp-305b07d9-0bd7-41f7-aff8-375492240919''
      4181 Sep 19 14:06:55 sayw4i steam[3957]: assert_20240919140653_30.dmp[429]: response: CrashID=bp-305b07d9-0bd7-41f7-aff8-375492240919
      4982 Sep 19 14:07:11 sayw4i abrt-notification[7588]: Process 7242 (plasma-browser-integration-host) crashed in KCrash::defaultCrashHandler(int)()
      4849 Sep 19 14:07:11 sayw4i drkonqi-coredump-launcher[7492]:                 #2  0x00007f2899bfa7b2 _ZN6KCrash19defaultCrashHandlerEi (libKF6Crash.so.6 + 0x77b2)
      4836 Sep 19 14:07:11 sayw4i drkonqi-coredump-launcher[7492]:                 Module libKF6Crash.so.6 from rpm kf6-kcrash-6.5.0-1.fc40.x86_64
      4650 Sep 19 14:07:11 sayw4i systemd[1638]: Started drkonqi-coredump-launcher@0-7314-0.service - Launch DrKonqi for a systemd-coredump crash (PID 7314/UID 0).
      4990 Sep 19 14:07:14 sayw4i drkonqi-coredump-launcher[7783]: Unable to find file for pid 557402 expected at "kcrash-metadata/kioworker.bb929bbffea34995a8c9e46f5411c96f.557402.ini"
      4994 Sep 19 14:07:14 sayw4i drkonqi-coredump-launcher[7785]: Unable to find file for pid 536090 expected at "kcrash-metadata/code-insiders.bb929bbffea34995a8c9e46f5411c96f.536090.ini"
      5046 Sep 19 14:07:14 sayw4i drkonqi-coredump-launcher[7889]: Unable to find file for pid 379455 expected at "kcrash-metadata/luajit-2.1.1720049189.a2ce1d3f99d34b9eabbbd0c50181939f.379455.ini"
      5058 Sep 19 14:07:14 sayw4i drkonqi-coredump-launcher[7907]: Unable to find file for pid 998205 expected at "kcrash-metadata/soffice.bin.841bd02060594305ab67c23592a276e8.998205.ini"
      5102 Sep 19 14:07:14 sayw4i drkonqi-coredump-launcher[8026]: Unable to find file for pid 8362 expected at "kcrash-metadata/krunner.31bcd8e103064c6aa5269026a0b691c5.8362.ini"
      5104 Sep 19 14:07:14 sayw4i drkonqi-coredump-launcher[8031]: Unable to find file for pid 10607 expected at "kcrash-metadata/plasma-browser-integration-host.31bcd8e103064c6aa5269026a0b691c5.10607.ini"
      4989 Sep 19 14:07:14 sayw4i systemd[1638]: Started drkonqi-coredump-launcher@1-2114-1000.service - Launch DrKonqi for a systemd-coredump crash (PID 2114/UID 1000).
      5112 Sep 19 14:07:15 sayw4i drkonqi-coredump-launcher[8043]: Unable to find file for pid 11780 expected at "kcrash-metadata/plasma-browser-integration-host.31bcd8e103064c6aa5269026a0b691c5.11780.ini"
      5153 Sep 19 14:07:15 sayw4i drkonqi-coredump-launcher[8112]: Unable to find file for pid 61636 expected at "kcrash-metadata/dolphin.b0fe282b8df4441c97b6089019cde3fe.61636.ini"
      5113 Sep 19 14:07:15 sayw4i systemd[1638]: Started drkonqi-coredump-launcher@63-2114-1000.service - Launch DrKonqi for a systemd-coredump crash (PID 2114/UID 1000).
     19156 Sep 19 15:48:22 sayw4i packagekitd[158821]: Failed to get cache filename for kf6-kcrash
     19461 Sep 19 15:48:47 sayw4i PackageKit[158821]: in /5633_ccbbdcdb for update-packages package kf6-kcrash;6.6.0-1.fc40;x86_64;updates-testing was updating for uid 1000
     21300 Sep 19 16:00:27 sayw4i sddm[1603]: Auth: sddm-helper (--socket /tmp/sddm-auth-c1cfe672-2b4b-42c0-b2a3-5c4b2251b80e --id 1 --start /usr/libexec/plasma-dbus-run-session-if-needed /usr/bin/startplasma-wayland --user RokeJulianLockhart --autologin) crashed (exit code 1)
     21299 Sep 19 16:00:27 sayw4i sddm[1603]: Authentication error: SDDM::Auth::ERROR_INTERNAL "Process crashed"
     21270 Sep 19 16:00:27 sayw4i systemd[1638]: Closed drkonqi-coredump-launcher.socket - Socket to launch DrKonqi for a systemd-coredump crash.
     21520 Sep 19 16:00:28 sayw4i systemd[1638]: Stopped drkonqi-coredump-cleanup.timer - Cleanup lingering KCrash metadata.
     21517 Sep 19 16:00:28 sayw4i systemd[1638]: Stopped drkonqi-sentry-postman.path - Submitting pending crash events (file monitor).

PS /home/RokeJulianLockhart>

Thanks.

I found the blog I was thinking of. Better you read these references than have me work from memory (the details have been swapped out of my brain!).

  1. The Fedora change proposal: Changes/SwapOnZRAM - Fedora Project Wiki
  2. And this research: In defence of swap: common misconceptions
2 Likes

I don’t think the actual speed of paging in/out changes what happens, other than the processing time. The kernel waits until the page has been swapped before using that memory space, and the overall processing speed is slowed down.

It does, however, certainly impact the overall speed of the operation in that writing to physical storage may be much slower than using zram so any operation that requires swapping to disk will be slowed in overall completion times.

Memory leaks are different in that with a memory leak the RAM is filled up and the app loses control of that RAM space. Since the data is no longer controlled by the app, it just stays there and wastes space which can no longer be used by anything. Less RAM available to use means more swapping may be required, and more problems develop. Not directly related to swapping, but it does affect it.

@computersavvy, indeed, but surely with enough disk space, that would prevent a crash for some time (even if the PC became notably slower until rebooted)? I only want this functionality so that I can acquire useful traces before I reboot - GNOME ABRT can’t get a useful trace after a reboot, and although KDE’s “Crashed Processes Viewer” can:

[Thread debugging using libthread_db enabled]                                                                                                                                                                                                                                  
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/bin/plasma-browser-integration-host /usr/lib64/mozilla/native-messaging-ho'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=11, no_tid=no_tid@entry=0) at pthread_kill.c:44
44            return INTERNAL_SYSCALL_ERROR_P (ret) ? INTERNAL_SYSCALL_ERRNO (ret) : 0;
[Current thread is 1 (Thread 0x7f2890e42b00 (LWP 7242))]
(gdb) bt
#0  __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=11, no_tid=no_tid@entry=0) at pthread_kill.c:44
#1  0x00007f28978a86d3 in __pthread_kill_internal (threadid=<optimized out>, signo=11) at pthread_kill.c:78
#2  0x00007f289784fc4e in __GI_raise (sig=11) at ../sysdeps/posix/raise.c:26
#3  0x00007f2899bfa7b2 in QDebug::operator<< (this=0x1c4a, t=0x1c4a <error: Cannot access memory at address 0x1c4a>) at /usr/include/qt6/QtCore/qdebug.h:123
#4  0x00007f289784fd00 in <signal handler called> () at /lib64/libc.so.6
#5  std::__atomic_base<QObjectPrivate::SignalVector*>::load (this=0x9, __m=std::memory_order_relaxed) at /usr/include/c++/14/bits/atomic_base.h:831
#6  std::atomic<QObjectPrivate::SignalVector*>::load (this=0x9, __m=std::memory_order_relaxed) at /usr/include/c++/14/atomic:582
#7  QAtomicOps<QObjectPrivate::SignalVector*>::loadRelaxed<QObjectPrivate::SignalVector*> (_q_value=<error reading variable: Cannot access memory at address 0x9>) at /usr/src/debug/qt6-qtbase-6.7.2-6.fc40.x86_64/src/corelib/thread/qatomic_cxx11.h:202
#8  QBasicAtomicPointer<QObjectPrivate::SignalVector>::loadRelaxed (this=0x9) at /usr/src/debug/qt6-qtbase-6.7.2-6.fc40.x86_64/src/corelib/thread/qbasicatomic.h:170
#9  QObjectPrivate::maybeSignalConnected (this=this@entry=0x55bf01bc1180, signalIndex=signalIndex@entry=6) at /usr/src/debug/qt6-qtbase-6.7.2-6.fc40.x86_64/src/corelib/kernel/qobject.cpp:485
#10 0x00007f2897ffc456 in doActivate<false> (sender=0x55bf01bbea00, signal_index=6, argv=0x7ffe2a35fa70) at /usr/src/debug/qt6-qtbase-6.7.2-6.fc40.x86_64/src/corelib/kernel/qobject.cpp:3986
#11 0x00007f2897ff2bc7 in QMetaObject::activate (sender=<optimized out>, m=m@entry=0x7f2899ee2560 <TaskManager::PlasmaWindowManagement::staticMetaObject>, local_signal_index=local_signal_index@entry=1, argv=argv@entry=0x7ffe2a35fa70)
    at /usr/src/debug/qt6-qtbase-6.7.2-6.fc40.x86_64/src/corelib/kernel/qobject.cpp:4146
#12 0x00007f2899e8f56a in TaskManager::PlasmaWindowManagement::stackingOrderChanged (this=<optimized out>, _t1=...) at /usr/src/debug/plasma-workspace-6.1.4-2.fc40.x86_64/redhat-linux-build/libtaskmanager/taskmanager_autogen/include/waylandtasksmodel.moc:857
#13 TaskManager::PlasmaStackingOrder::org_kde_plasma_stacking_order_done (this=0x55bf01bd8760) at /usr/src/debug/plasma-workspace-6.1.4-2.fc40.x86_64/libtaskmanager/waylandtasksmodel.cpp:397
#14 0x00007f28956d0056 in ffi_call_unix64 () at ../src/x86/unix64.S:104
#15 0x00007f28956cc6a0 in ffi_call_int (cif=cif@entry=0x7ffe2a35fc80, fn=<optimized out>, rvalue=<optimized out>, avalue=<optimized out>, closure=closure@entry=0x0) at ../src/x86/ffi64.c:673
#16 0x00007f28956cf4ee in ffi_call (cif=cif@entry=0x7ffe2a35fc80, fn=<optimized out>, rvalue=rvalue@entry=0x0, avalue=avalue@entry=0x7ffe2a35fd50) at ../src/x86/ffi64.c:710
#17 0x00007f2898fa310e in wl_closure_invoke (closure=closure@entry=0x7f2874002dc0, target=<optimized out>, target@entry=0x55bf01bd5770, opcode=opcode@entry=1, data=<optimized out>, flags=1) at ../src/connection.c:1228
#18 0x00007f2898fa3979 in dispatch_event (display=display@entry=0x55bf018ddf80, queue=queue@entry=0x55bf018de078) at ../src/wayland-client.c:1670
#19 0x00007f2898fa3d73 in dispatch_queue (display=0x55bf018ddf80, queue=0x55bf018de078) at ../src/wayland-client.c:1816
#20 wl_display_dispatch_queue_pending (display=0x55bf018ddf80, queue=0x55bf018de078) at ../src/wayland-client.c:2058
#21 0x00007f2897d75c52 in QtWaylandClient::QWaylandDisplay::flushRequests (this=<optimized out>) at /usr/src/debug/qt6-qtwayland-6.7.2-4.fc40.x86_64/src/client/qwaylanddisplay.cpp:227
#22 0x00007f2897ffcc60 in doActivate<false> (sender=0x55bf018fc680, signal_index=4, argv=0x7ffe2a35ffa8) at /usr/src/debug/qt6-qtbase-6.7.2-6.fc40.x86_64/src/corelib/kernel/qobject.cpp:4098
#23 0x00007f2897ff2bc7 in QMetaObject::activate (sender=sender@entry=0x55bf018fc680, m=m@entry=0x7f2898488a60 <QAbstractEventDispatcher::staticMetaObject>, local_signal_index=local_signal_index@entry=1, argv=argv@entry=0x0)
    at /usr/src/debug/qt6-qtbase-6.7.2-6.fc40.x86_64/src/corelib/kernel/qobject.cpp:4146
#24 0x00007f2897f934e7 in QAbstractEventDispatcher::awake (this=this@entry=0x55bf018fc680) at /usr/src/debug/qt6-qtbase-6.7.2-6.fc40.x86_64/redhat-linux-build/src/corelib/Core_autogen/include/moc_qabstracteventdispatcher.cpp:158
#25 0x00007f28982851db in QEventDispatcherGlib::processEvents (this=0x55bf018fc680, flags=...) at /usr/src/debug/qt6-qtbase-6.7.2-6.fc40.x86_64/src/corelib/kernel/qeventdispatcher_glib.cpp:401
#26 0x00007f2897fa3bc3 in QEventLoop::exec (this=this@entry=0x7ffe2a3600f0, flags=..., flags@entry=...) at /usr/src/debug/qt6-qtbase-6.7.2-6.fc40.x86_64/src/corelib/global/qflags.h:34
#27 0x00007f2897f9fa7c in QCoreApplication::exec () at /usr/src/debug/qt6-qtbase-6.7.2-6.fc40.x86_64/src/corelib/global/qflags.h:74
--Type <RET> for more, q to quit, c to continue without paging--c
#28 0x00007f28987d66ed in QGuiApplication::exec () at /usr/src/debug/qt6-qtbase-6.7.2-6.fc40.x86_64/src/gui/kernel/qguiapplication.cpp:1926
#29 0x00007f289958b189 in QApplication::exec () at /usr/src/debug/qt6-qtbase-6.7.2-6.fc40.x86_64/src/widgets/kernel/qapplication.cpp:2555
#30 0x000055beeb66bf3f in main (argc=<optimized out>, argv=<optimized out>) at /usr/src/debug/plasma-browser-integration-6.1.5-1.fc40.x86_64/host/main.cpp:113
(gdb)

…it doesn’t catch all crashes, in my experience.

This is why creating a larger than required swap space is suggested.
In the situation I noted in post 3 above I actually tried 3 different times to create adequate swap. I tried 64G, 128G and finally succeeded with 256G.

I still have that swap partition available but not used.

1 Like

@computersavvy, indeed, although I’d rather solely utilize it as a fallback, because it seems like a shame to allocate so much storage if it might never be utilized.

Irrespective, that’s significantly more than I expected would be necessary for most people. Why do you need 256 GiB? For reference, I don’t think Windows allocated that much, when last I utilized it (although the total capacity of my system storage device might have been merely 256 GiB at that time).

In that one instance I just kept trying until the amount of swap allowed the app to succeed. It was getting killed by oomd with smaller amounts of swap.

I have never needed extra swap beyond the default 8G of zram before nor since. I also have used the same app for similar processes on different files before and since without needing extra swap.

What I was doing was using ffmpeg to convert an mkv video that had a 7.1 audio track to mp3 stereo and that particular file was problematic. Other similar files did not exhibit the same issues.

It was a rare situation but since I have the storage available I have not repurposed that swap partition for other use.

1 Like