Rancher/LinuxKit style systems


This is a random discussion - does anyone think it would be useful if the operating system itself were made out of multiple containers, as e.g. Rancher and LinuxKit do?

I think about this on and off. It also came up alongside the idea floated a while ago that a workstation like Silverblue could actually just be a container on top of e.g. Fedora CoreOS.

The thing I struggle with is it’s a fairly fundamental departure from how “classic” systems work; we simply could not make it transparent. For example, you would now need to configure TLS trust roots in two places.
In many cases, it seems to me, you end up wanting to “bind together” these different containers and ship them as a single update. While in theory someone could update the chrony/time-sync container separately, when would you ever really want that?

Are there any people who use Rancher or LinuxKit-derived systems who have found the technology useful?

So far it’s been a relatively nice alignment point for merging Container Linux and Atomic Host, in that both are basically “image-based derivatives” of their upstream distributions (Gentoo and Fedora, respectively). In my view they’re “spins”, but something like LinuxKit is a far more fundamental departure.


Not answering your question directly since I’ve never used Rancher or LinuxKit style systems, but I’ve often thought about a “host contexts” idea, where you essentially have different overlays of the host filesystem you can play around with. This isn’t necessarily for Fedora CoreOS (maybe more for Silverblue). Here is the idea:

# ls /usr/bin/hostcontext
# alias hc=hostcontext
# hc enter
Entering 'default' host context
hc-default# dnf install htop
hc-default# exit
# hc new foo
# hc enter foo
Entering 'foo' host context
hc-foo# dnf install emacs
hc-foo# exit
# hc list
# hc delete foo

The host contexts would build on top of the host filesystem and would be upgraded when the host updates, OR you could disconnect them from the host so they don’t upgrade by default. They could probably be implemented via containers or some snazzy chrooting.
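As a thought experiment, the transcript above could be approximated with overlayfs plus a chroot. This is only a rough sketch, not an actual implementation: the paths, the `hc_enter` name, and the `HC_DRY_RUN` knob are all made up for illustration.

```shell
#!/bin/sh
# Rough sketch of "hc enter NAME" on top of overlayfs + chroot.
# Everything here (paths, the HC_DRY_RUN knob) is hypothetical; set
# HC_DRY_RUN=1 to print the privileged commands instead of running them.
set -eu

run() { if [ -n "${HC_DRY_RUN:-}" ]; then echo "$@"; else "$@"; fi; }

hc_enter() {
    ctx="${1:-default}"
    base="/var/lib/hostcontext/$ctx"
    run mkdir -p "$base/upper" "$base/work" "$base/merged"

    # Host root stays read-only; dnf installs etc. land in the upper layer
    run mount -t overlay overlay \
        -o "lowerdir=/,upperdir=$base/upper,workdir=$base/work" \
        "$base/merged"

    echo "Entering '$ctx' host context"
    # Private mount namespace so the overlay isn't visible outside the context
    run unshare --mount chroot "$base/merged" /bin/sh
}

# Usage (needs root for real):  hc_enter foo
```

The nice property is that deleting a context is just removing its upper layer, and “disconnecting” from host upgrades is just keeping the overlay mounted against a pinned lower tree.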

This host contexts idea could allow us to provide a “default” host context that has some stuff in it and would allow us to make the base smaller maybe??


That’s what I thought was the end game for Fedora Atomic Workstation / Silverblue - a minimal core (kernel plus container hosting plus git) with everything else (display manager, desktop apps, services) running in containers.


When we looked at this before, “desktop as container” never made much sense. There is a big interface between the desktop and the plumbing layers. Not a natural place to put a container boundary.


My desktop dream, similar to what @znmeb said, was to have a minimal system where I can add the things I want in a config file (see @dustymabe’s comment) or even via GUI and then I’d just have my perfect desktop environment with exactly the file explorer, window manager, and other stuff I wanted without all the bloat.

One can dream.


I’m not saying that something like that can’t be engineered.

Just that it would probably not look much like a container.


Don’t get me wrong - the GNOME 3 desktop is my interface of choice. My second choice would be a ChromeBook but with Firefox as the browser. :wink:


I built a minimal container-based system with Xorg, Fluxbox, Chrome, … on top of

  1. RancherOS
  2. Linuxkit
  3. custom alpine linux initrd

RancherOS is nice and runs stably, but newer versions boot really slowly (30-60 seconds, as I remember?) on my Dell notebook from SSD.

A minimal custom base is easy to build with LinuxKit, or from scratch with the Alpine filesystem and apk (minimal rootfs + Docker), but some service management is needed too…

And how do you “install” apps (download a compose file? add a shortcut to a PATH directory?) or add widgets and apps to autostart (compose / stack)?

I stopped working on a custom Docker-container-based desktop Linux because it’s too much to do for a single person…

I tested Fedora Atomic Workstation yesterday, but it isn’t the solution I was searching for.
The best option at the moment would be RancherOS with custom containers, but I won’t use it given the long boot times.


Perhaps slightly tangential, but Qubes OS does this sort of thing using virtualization boundaries. Would be interesting to compare a containerized implementation of those ideas.


I like Dusty’s idea of contexts. I don’t think you’d get too far with everything being a container; making the boundary fuzzy, depending on how much you can bend something to fit, would definitely be better as a migration path.

Just some ideas being thrown out here for the desktop, pick the ones that make sense and throw away the rest =).

I started doing this after trying out Plan 9 for a while. The idea there was that everything essentially has its own view of the system, so disabling network access meant taking away /net from the process (since everything had a filesystem interface), and so on. Mounting another system’s /net meant tunneling all your traffic through it, so running a session or process over a VPN meant doing exactly that. Linux’s clone and namespaces come from Plan 9, so I think the same logical extension fits here too (except that not everything is in the filesystem, so we have more resources to share; the idea remains the same).
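To make the analogy concrete: every Linux process carries a set of namespaces, visible as symlinks under /proc/PID/ns, and the Plan 9 “take away /net” trick maps onto giving a command an empty network namespace. A small illustration (the user-namespace guard is there because some hosts disable unprivileged user namespaces):

```shell
#!/bin/sh
# The Plan 9 analogy in Linux terms: Plan 9 removes /net from a process's
# view of the system; Linux factors the same idea into namespaces, one
# symlink per type under /proc/PID/ns.
set -eu

ls -l /proc/self/ns     # net, mnt, pid, uts, ipc, user, ...

# Taking away network access = giving a command an empty network namespace.
# (-r maps us to root in a new user namespace so no privileges are needed;
#  some hosts disable unprivileged user namespaces, hence the guard.)
if command -v ip >/dev/null 2>&1 && unshare -r -n true 2>/dev/null; then
    unshare -r -n ip -o link    # only a loopback interface, and it's down
else
    echo "unprivileged user namespaces not available here"
fi
```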

More here on the history of things: https://lists.gnu.org/archive/html/coreutils/2015-07/msg00037.html

I think modularity really helps in this regard, and helps me do some things I am outlining below:

Run every session with a mixed root image of its own: I have a root filesystem with the bare essentials, and then a root with more stuff added on top, depending on what I want to use in that session (so I can have the latest GNOME from F29 while still not upgrading from F28). When the versions are the same, I merge the two roots with overlayfs and hope for the best; otherwise I use the full-blown GNOME chroot as the basis for user@.service. That service also has options to put the session in its own PID namespace, its own user namespace, and its own mount namespace (so that USB drives I mount in it aren’t visible to the root context), and so on. This effectively lets you isolate the user, potentially even running their own full-blown distribution, something different from Fedora. Now, one could argue that if the GNOME stuff were its own systemd service, all of it could use the newer “runtime” and run the latest version while my shell and so on still pointed at my user runtime (this is hard when gnome-terminal is a service spawned by systemd --user running in a context I don’t want, so I just shell out to systemd-run).
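The mixed-root part of the above could be sketched roughly like this. All paths and the particular systemd-run unit properties are illustrative guesses, not the poster's actual setup; DRY_RUN=1 prints the privileged commands instead of running them.

```shell
#!/bin/sh
# Sketch of a mixed-root session: stack a newer desktop tree over a stable
# base with overlayfs, then launch the session via systemd-run inside a
# private mount (and user) namespace. Paths are hypothetical.
set -eu

BASE=/sysroot/f28-base          # hypothetical stable root
DESKTOP=/sysroot/f29-gnome      # hypothetical newer GNOME tree
MERGED=/run/session/root

run() { if [ -n "${DRY_RUN:-}" ]; then echo "$@"; else "$@"; fi; }

start_session() {
    run mkdir -p /run/session/upper /run/session/work "$MERGED"

    # Same-version case: merge the two roots (the desktop tree wins on conflicts)
    run mount -t overlay overlay \
        -o "lowerdir=$DESKTOP:$BASE,upperdir=/run/session/upper,workdir=/run/session/work" \
        "$MERGED"

    # Private mount namespace: USB drives mounted inside the session stay
    # invisible to the host context; PrivateUsers= adds a user namespace
    run systemd-run -t \
        -p RootDirectory="$MERGED" \
        -p PrivateMounts=yes \
        -p PrivateUsers=yes \
        /bin/sh
}

# Usage (needs root for real):  start_session
```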

I’ve also been thinking of adding a restricted version of this, using the login to directly launch a remote screen to my other desktop (so consider entering user=dom/user) and making that session effectively a guest session (with dynamic UIDs), so you can’t break out of it and do things to the real machine. Wrapping this up has been quite hard, though (so far I’ve been trying a console DM called ly, because making GDM do this doesn’t really work).

The result is that different components are dynamically updated and adjusted: a stable system underneath, but a bleeding-edge GNOME and Flatpak apps, with the user locked down in that session, shelling out to systemd-run and machinectl shell to run in the host context (since PID 1 forks these off and is unaffected by our changes), and logging directly in to remote machines as if they were local (the key is making the user feel that it is a native machine).


I feel like the issue you’d start running into there is that different apps that need to interact with each other could end up with different views of the host system…

That being said, wouldn’t systemd-nspawn -b cover a good chunk of this?
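For reference, what `systemd-nspawn -b` buys you is booting a complete OS tree as a container, with the tree’s own init running as PID 1 inside private namespaces, which does cover much of the per-session isolation discussed above. The image path here is hypothetical, and actually booting it needs root:

```shell
#!/bin/sh
# Sketch of the systemd-nspawn route. The image path is hypothetical;
# populate it first with e.g. machinectl pull-tar or dnf --installroot.
set -eu

IMG=/var/lib/machines/f29

boot_cmd() { echo "systemd-nspawn -b -D $1"; }

if [ -d "$IMG" ]; then
    $(boot_cmd "$IMG")      # -b boots the tree's own init as PID 1;
                            # -D names the directory to use as the root
else
    echo "no image at $IMG; the invocation would be: $(boot_cmd "$IMG")"
fi
```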