Correct location for temporary files to save write endurance on NVMe

Hi all,

My first post here, I hope it is in the correct location.

What would be the correct location for holding ~150 files totaling ~300 MB in a “RAM drive” that should never hit physical storage? The files are temporary .ts / .m3u8 files for a simple “live camera system”. The video streams are not recorded; the files live for about 70 seconds and then get removed. There is no desire for any persistent storage.

The above setup chewed up the first SSD I had laying around in about 2 months. Given its age I did not think much about it, replaced it with a new SSD and called it a day. Just shy of a year passed, and it went down again with broken storage. So I sent the drive in for repair, and got a no-warranty reply. Reason: it had used all its write endurance?!

Looking back it made sense quickly: the above writes about 300 MB/min to storage, which in a month makes ~13 TB. The 250 GB Samsung 980 drive has a warranty for 5 years OR 150 TBW. Oops… I did not think of that.
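A quick back-of-envelope check of that figure (plain shell arithmetic, using the 300 MB/min rate from above):

$ echo "scale=2; 300 * 60 * 24 * 30 / 1000 / 1000" | bc
12.96

So roughly 13 TB/month against a 150 TBW rating is about 150 / 13 ≈ 11.5 months of endurance budget, which lines up with the second drive failing just shy of a year in.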

A quick Google search with keywords like “tmpfs / ramdrive / best practice” resulted in a lot of confusing pointers from sites like Stack Exchange and the like. What I have found (or found advised against) so far:

  • /tmp → This is just a folder on the OS drive that serves as a common place for any temporary files.
  • /var/tmp → Same as /tmp, just with a longer retention time?
  • /dev/shm → Many sites seem to hold different views; some claim trouble, as it is intended for the POSIX C library only, others say it can be used safely as it is just a tmpfs mountpoint?
  • Any “tmpfs” implementation → Should not be used, because it may still cause swapping to disk, even if enough free RAM exists?
  • Any “ramfs” implementation → Should not be used, because it may steal memory from the kernel in certain conditions, apparently potentially causing the same swapping as a result?

Most sites offering the advice above linked to other sites to “empower” the statements made, but on the whole there seems to be no clear path for my use case. Most reasoning for using RAM drives is about performance, not about saving storage write endurance. I have also seen suggestions / hints that “direct storage in RAM” is potentially unsafe and could open up security holes… all in all, what seemed to be a simple question turned confusing.

So what would be the correct “RAM drive” to mount in the web root in a way that is safe, gives me a filesystem-like mount / bind / loop device so I can avoid needless (in)direct writes to physical storage, and lets ffmpeg write and nginx read as they do right now?

Thanks for any pointers / solid advice that leads to a clear choice on the “correct way” to do this seemingly simple thing.

Steven

The kernel docs (Ramfs, rootfs and initramfs — The Linux Kernel documentation) say that ramfs has no limit on the files it will store.
That is the risk: each file written will take memory from the kernel.
It is not stealing randomly, from my reading.
If you know your application uses bounded file space, then it will use bounded memory, it seems.
Should be easy to test.


On Fedora, /tmp lives in RAM. You could simply save your data there and it won’t hit your storage drive.

Is your device short of “free” RAM, or can it handle the additional 300 MB? You want to avoid swapping. (Check swap usage after changing the file destination; you may also want to play with the swappiness setting.)
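For reference, checking both is straightforward with standard tools (nothing Fedora-specific here):

$ swapon --show                    # list active swap devices (zram on a default Fedora)
$ free -h                          # overall RAM and swap usage
$ cat /proc/sys/vm/swappiness      # current swappiness value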

/var/tmp doesn’t live in RAM; I would not recommend using it for your use case.


Thank you for the feedback.

My “application” is literally a shell script running an instance of ffmpeg for each of the cameras we have. ffmpeg is set to use 6 files, each 10 seconds in length, for every resolution transcoded. Given I have one hi-res and one low-res stream, this effectively results in 18 “looping” files, 9 per “stream”:

  • 1 playlist specific to a resolution
  • 6 active video .ts files in that playlist
  • 1 temp .ts file that gets included in the next update of the playlist
  • 1 playlist.tmp file that gets renamed, to make sure the playlist is always a complete file when read by nginx

All of this is done using standard options on ffmpeg, nothing magic outside of that. So I assume it should not use more than the ~300 MiB of space.

I will try to see if I can graph the folder sizes over some time, to check for unexpected fluctuations outside of the 24 to 30 MiB per camera I have seen while randomly watching the folders. Something like the sketch below should do.
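(A minimal sketch only; the streams path, the one-minute interval, and the log destination are assumptions.)

#!/bin/bash
# log a timestamped size (in MiB) for each stream folder, once a minute
while sleep 60; do
    du -sm /home/streams/public_html/streams/*/ |
        while read -r size dir; do
            echo "$(date +%FT%T) $dir ${size} MiB"
        done
done >> /var/log/stream-sizes.log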

$ df -h /tmp
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           3.9G   12K  3.9G   1% /tmp

$ mount | grep /tmp
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,seclabel,nr_inodes=1048576,inode64)

From the kernel documentation link provided by @barryascott, I read that tmpfs can roll over to using swap. Following the link to the tmpfs-specific documentation (Tmpfs — The Linux Kernel documentation), I read:

noswap: Disables swap. Remounts must respect the original settings. By default swap is enabled.

Given the mount output, /tmp does not seem to have the “noswap” option set, so I would expect I still risk wearing out my drive’s write endurance. I am not 100% sure, as I could be missing more specific details of Fedora’s implementation.

This leads me to the idea that a dedicated tmpfs mount with the noswap option set would give me the best of both worlds (a quick test mount is sketched after this list):

  • Purely in RAM
  • Isolated from the general /tmp
  • The bonus options tmpfs provides over ramfs
  • Given the use of tmpfs → the option of setting limits / uid / gid, preventing overfilling RAM if ffmpeg for some reason acts in unexpected ways
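For the record, this is the kind of one-off test I have in mind before committing anything to fstab (mount point and size are placeholders; note that noswap needs a fairly recent kernel, it was added around Linux 6.4):

$ sudo mkdir -p /mnt/noswap-test
$ sudo mount -t tmpfs -o size=512m,noswap,mode=0775 tmpfs /mnt/noswap-test
$ mount | grep noswap-test    # confirm the noswap option actually took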

Anything I may have missed?


You may also consider that for writes, reads, and updates alike there are extra writes to the SSD, as the system updates the access time for the file.

I have used the noatime option in fstab for my SSDs to avoid the write involved in updating the atime when reading a file. This is especially critical when running a server where numerous files are accessed repeatedly (each access counting as a write to the SSD).
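As an illustration, such an entry might look like this (the UUID and filesystem are placeholders for your own):

# /etc/fstab — root filesystem with access-time updates disabled
UUID=<your-fs-uuid>  /  ext4  defaults,noatime  0  1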

Fedora does not create disk swap by default, only zram.

Hi all,

Sorry for the slow reply with a follow-up on the question and the solution I have chosen to use. The reason for the slow update is at the end.

First I created a tmpfs mount in fstab:

tmpfs /home/streams/public_html/streams tmpfs size=4G,mode=0775,noswap,uid=streams,gid=streams,fscontext=unconfined_u:object_r:httpd_user_content_t:s0 0 0

In it I set:

  • The noswap option, to prevent this mount from using swap
  • uid= and gid=, set to the user intended to use this mount
  • An fscontext, to have SELinux play nice with the mount point
  • size=, limiting the mount to 4 GiB to prevent committing too much RAM in unforeseen events

The ffmpeg instances are launched from a shell script, which takes care of creating the folder structure and sets the permissions on those folders. Roughly, it is shaped like the sketch below.
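(A minimal sketch only; the camera names, RTSP URLs, and the single-rendition ffmpeg line are illustrative assumptions, not the exact command used.)

#!/bin/bash
# create a folder per camera on the tmpfs mount, then start one
# ffmpeg HLS writer per stream: 6 segments of 10 s, temp-file renames
ROOT=/home/streams/public_html/streams

for cam in cam1 cam2; do
    mkdir -p "$ROOT/$cam"
    chown streams:streams "$ROOT/$cam"
    ffmpeg -loglevel error -i "rtsp://$cam.local/stream" \
           -c copy -f hls \
           -hls_time 10 -hls_list_size 6 \
           -hls_flags delete_segments+temp_file \
           "$ROOT/$cam/playlist.m3u8" &
done
wait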

The nginx side of things was easy: I set up a location block for the stream files and excluded those from the logs using access_log off; (roughly as sketched below).
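(Paths and extra headers here are illustrative, from memory, not the exact config.)

# serve the HLS files straight from the tmpfs mount, with logging off
location /streams/ {
    root /home/streams/public_html;
    access_log off;
    types {
        application/vnd.apple.mpegurl m3u8;
        video/mp2t ts;
    }
    add_header Cache-Control no-cache;
}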

After doing this, things worked again, and writes to the drive went down, as was the goal. However, for some reason the system became less stable, with streams randomly stopping, ffmpeg reporting errors like “max delay reached”, and more.

It turns out this was not a result of the changes made to the setup, but of upgrading from Fedora 39 to 40. I have multiple internet connections, and the upgrade to F40 messed with the multiple-routing-table setup I have going for this. Since the system is multi-homed, the routing table change caused timeouts on the system as a whole. For this I will open a new topic.

