Silverblue and SSD

Excuse my English.

I’m thinking about trying Silverblue on an SSD. Taking into account the limited write cycles of an SSD and Silverblue running in read-only mode, how long will the drive last?

Hi mx1:

I don’t have any actual measurements, but in theory, all the containerization and layering in Silverblue actually amounts to more (a lot more) data being written/updated on the disk than in a traditional flat, single-OS install. Each container is, in essence, an entire OS of its own: every time you install an application, it is as if you were installing an entire OS along with the application. Only the installed software packages (/usr) are read-only. The write-heavy parts of the OS (/var and /home) are still quite writable and quite often written to in Silverblue.

Thank you very much for the answer.

Is this really true? I thought that the images are effectively hardlinks to a software repository, so you are not actually rewriting all the files when making a new image, only the newly downloaded ones. This should lead to a similar amount of file writing as on a regular Fedora installation.

If someone with more knowledge of libostree internals can let me know if I misunderstood something, that would be appreciated.


Sorry Greg, but that is not the case for either the OS, which is a hybrid image/package system, or for flatpaks, which are containerized apps.
To quote the libostree docs:

The core OSTree model is like git in that it checksums individual files and has a content-addressed-object store. It’s unlike git in that it “checks out” the files via hardlinks, and they thus need to be immutable to prevent corruption. Therefore, another way to think of OSTree is that it’s just a more polished version of Linux VServer hardlinks.
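The “checks out via hardlinks” model from that quote can be illustrated with a short, purely hypothetical Python sketch (this is a toy, not real libostree code): content is stored once under its checksum, and each checkout is just another hardlink to the same inode, so identical content is never written to disk twice.

```python
import hashlib
import os
import tempfile

def store_object(repo, data):
    """Store file content under its SHA-256 checksum (content-addressed)."""
    digest = hashlib.sha256(data).hexdigest()
    path = os.path.join(repo, digest[:2], digest[2:])
    if not os.path.exists(path):  # identical content is stored only once
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)
    return path

with tempfile.TemporaryDirectory() as root:
    repo = os.path.join(root, "objects")
    obj = store_object(repo, b"#!/bin/sh\necho hello\n")

    # "Check out" the same object into two deployments via hardlinks.
    for deploy in ("deploy-1", "deploy-2"):
        os.makedirs(os.path.join(root, deploy))
        os.link(obj, os.path.join(root, deploy, "hello.sh"))

    # All three names point at one inode: the content exists once on disk,
    # which is also why the checked-out files must stay immutable.
    nlink = os.stat(obj).st_nlink
    print(nlink)  # 3
```

This is also why the quote stresses immutability: editing any one of those names in place would corrupt the shared object for every deployment.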

And another quote, about its features:


  • Transactional upgrades and rollback for the system
  • Replicating content incrementally over HTTP via GPG signatures and “pinned TLS” support
  • Support for parallel installing more than just 2 bootable roots
  • Binary history on the server side (and client)
  • Introspectable shared library API for build and deployment systems
  • Flexible support for multiple branches and repositories, supporting projects like flatpak which use libostree for applications, rather than hosts.

And finally, from the docs:

rpm-ostree is a next-generation hybrid package/image system for Fedora and CentOS, used by the Atomic Host project. By default it uses libostree to atomically replicate a base OS (all dependency resolution is done on the server), but it supports “package layering”, where additional RPMs can be layered on top of the base. This brings a “best of both worlds” model for image and package systems.

flatpak uses libostree for desktop application containers. Unlike most of the other systems here, flatpak does not use the “libostree host system” aspects (e.g. bootloader management), just the “git-like hardlink dedup”. For example, flatpak supports a per-user OSTree repository.

So, to try to answer the original question: I would say it will not mean any more read/write cycles, and may mean fewer, since the base image is immutable, provided no or only minimal package layering is used. Both rpm-ostree and flatpak use the deduplicating feature of libostree.
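To make the deduplication point concrete, here is a toy Python sketch (the object store is just a dict standing in for libostree’s repository; the file counts and names are made up): replicating an updated base image only writes the objects whose content actually changed.

```python
import hashlib

objects = {}  # checksum -> content, standing in for the object store

def commit(tree):
    """Commit an image; return how many *new* objects it adds to the store."""
    new = 0
    for name, data in tree.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest not in objects:  # unchanged content dedupes to existing objects
            objects[digest] = data
            new += 1
    return new

# A hypothetical base image with 1000 distinct files.
base = {f"/usr/bin/tool{i}": b"binary %d" % i for i in range(1000)}

# An "update" that changes exactly one file.
update = dict(base)
update["/usr/bin/tool0"] = b"binary 0, patched"

wrote_base = commit(base)      # first image: everything is new
wrote_update = commit(update)  # second image: only the changed file
print(wrote_base, wrote_update)  # 1000 1
```

The second image costs one new object’s worth of content writes, not a full OS copy; the remaining 999 files are shared via hardlinks as described above.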

In my case I run two 240GB SSDs and a 1TB spinning disk in an LVM. The SSDs are both over five years old, and I’ve been using Silverblue since F28.

I guess I still don’t see how it can be less. Even if it just uses hard links, “replicating a base OS” still has to amount to more I/O, not less. A typical /usr directory might have around 300,000 files, so that’s an additional 300,000 hard links per replicated OS, if nothing else (I’m sure there is at least some configuration-file overhead as well, but that may not amount to much). How much I/O 300,000 hard links amounts to is probably filesystem dependent, but in some cases it might amount to an entire 4K sector write per hard link. So every replicated OS would be performing an additional 300,000 × 4K ≈ 1.2GB of writes. In some cases that is being done per application!
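That back-of-envelope estimate checks out arithmetically, keeping in mind that both inputs are assumptions from the post (a 300,000-file /usr and a worst-case 4 KiB metadata write per hard link), not measured values:

```python
files = 300_000        # assumed number of files under /usr
write_per_link = 4096  # assumed worst case: one 4 KiB sector write per hard link

total_bytes = files * write_per_link
print(total_bytes / 10**9)  # ~1.23 GB of metadata writes per replicated tree
```

On a real filesystem many of those link updates would land in the same directory and inode blocks, so the true figure should be well below this worst case.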

In a traditional OS, when I install a 5M application, it requires an additional 5M.

/usr being read-only at run time isn’t really saving any I/O. Few applications ever did much writing there anyway.

You are mixing flatpaks and the base+layered image into one thing. Here is a good place for you to get more info on the internals of flatpaks. I will do some more digging for info for you, but I will likely message you with it, since we are technically hijacking this thread.

Hello @mx1,
From my experience, my Kingston SSD and my Samsung SSD have been performing fine with Silverblue on them for the past two years. The filesystem chosen (likely ext4 + LVM) will determine how much thrashing your drives take.

Flatpaks share common libs and runtimes, and their technology is based on libostree (like the OS, which uses rpm-ostree for image/package management). In theory this means that with applications installed as flatpaks, you should be sharing libs and runtimes across multiple applications. In practice a lot of this will depend on the flatpak creators, but there are only a finite number of runtimes out there, and I think all are based on Freedesktop’s, barring an application whose devs have chosen to bundle all dependencies into it.

Your home directory does accumulate a lot of directories and files from flatpaks; mine has 13,614 directories and 103,502 files, and running tree from the CLI scrolls by for a while.