So yesterday and today, I have been participating in a small Ansible hackfest in Paris (hosted by Scaleway.com, sponsored by Red Hat), and I decided to do as much as possible with my Silverblue laptop, rather than take the easy road of using my regular work laptop.
So I had plenty of time during the morning presentations to wonder about pet containers, workflow, etc, and I kept pondering today, until an interesting idea came to me in the shower, an idea that I want to share with people here.
My main concern with the pet container workflow is how to keep the containers updated. For example, if I start using a Dockerfile to create them, I need to download a new version of the base image, then rebuild manually, make sure I remember the tag I used, etc. I also wondered about running cron jobs inside them, since I have a rather complicated mail setup for offline mail reading that I want to move to containers and would like to automate (offlineimap + dovecot + ssh over tor, but explaining it and the why is for another post).
And then I remembered that part of those problems is already solved by OpenShift/OKD (I will use OKD, since that's the name of the upstream project now). As I guess non-sysadmins are not familiar with OKD (previously named OpenShift Origin), let me quickly explain what the software does.
So OKD is a version of Kubernetes that adds a few features on top of the existing system. Most of those extra features eventually find their way into Kubernetes itself. In turn, Kubernetes is a system to manage container deployments, based around the idea of declarative configuration for the state of your cluster.
For example, you declare how to get from a git repository to a container, and then OKD/Kubernetes does the build and the deployment (replacing the containers, dealing with the network, routing, etc). It also deals with automated restarts in case of a crash, and with linking related containers together, but that's more for production hosting than what we need here. This is mostly used for hosting server applications, bringing automation, best practices, and a common language for portability across different providers.
And so, a system to rebuild and deploy containers without human intervention is IMHO something we need. So what about having OKD installed by default, with a few base images, and using oc rsh to connect to the pet containers?
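As a sketch of what that daily workflow could look like (the container name, image, and resource kind are hypothetical examples, not a finished design):

```shell
# Create a "pet" from a Fedora base image; OKD tracks the image in an
# ImageStream so it can react to updates ("fedora-pet" is a made-up name).
oc new-app --name fedora-pet registry.fedoraproject.org/fedora:latest

# Open an interactive shell inside the running container,
# the pet-container equivalent of "toolbox enter".
oc rsh dc/fedora-pet
```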
This would bring several benefits.
First, it would solve some of the issues from "Openshift Origin on Silverblue, story of one thousand cuts". If the system is preinstalled, then people can use it right away.
Second, it solves the pet container rebuild issue. We can tell people to use a build to create their container, and if the ImageStreams are configured right, everything should rebuild automatically and be usable with oc rsh. It also guides people toward the "right" approach, by using a file to describe the containers rather than building them manually.
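A minimal sketch of that build setup, assuming a hypothetical git repository holding the container description:

```shell
# Define a build from a git repository; OKD stores the result in an
# ImageStream (the repository URL and name here are made up).
oc new-build https://example.com/me/pet-image.git --name pet-image

# With an ImageChange trigger on the base image, the build re-runs by
# itself when the base is updated; it can also be kicked off by hand:
oc start-build pet-image --follow
```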
Third, it solves the problem of persisting state. OKD/Kubernetes manages volumes, since container content is removed on restart, and in the pet container use case we would have to solve that somehow too. So this would give us a framework that is available and ready to use, especially since it integrates with SELinux, and so deals with the same problem we would have to solve anyway.
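For example, attaching a persistent volume to a pet container could look like this (the names, size, and mount path are invented for illustration):

```shell
# Add a persistent volume claim so state survives container restarts;
# OKD creates the claim and mounts it into the pet container.
oc set volume dc/fedora-pet --add --type=pvc --claim-size=1Gi \
    --claim-name=pet-data --mount-path=/home/user
```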
Fourth, it solves the issue of running cron jobs in an image, since that's also a feature of OKD (now moved into Kubernetes). That is my exact use case of "I need to run stuff in a container with cron". Of course, I could also run cron inside the container, but then we need a process manager inside the container, and I am not sure systemd supports that. I could also run cron outside, but that would require layering crond (and I want to avoid layering). Finally, I could use systemd timers, but systemd needs special configuration to run a timer-scheduled service when the session is not open.
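With a recent oc, my mail use case could be sketched as a Kubernetes CronJob, so no crond is needed inside or outside the container (the image name and command are hypothetical):

```shell
# Run offlineimap every 15 minutes in its own container, scheduled by
# Kubernetes instead of crond; Kubernetes restarts it on failure.
oc create cronjob mail-sync --image=quay.io/example/mail:latest \
    --schedule='*/15 * * * *' -- offlineimap -o
```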
Fifth, there would be an opportunity for a compelling story regarding deployment. For example, Fedora could provide 3 containers for Rust: one for the SDK (the interactive pet container), one to build (the s2i container, in OKD lingo), and one to deploy to production. Even better, since the system is automated, a user could customize the SDK pet container just by creating a new container that depends on the first one, and so get the benefit of a maintained SDK while still adding software on top. The user story could then be to work in the SDK and push the built container directly to a registry, into production.
And as one of the targets of Fedora Workstation is developers ( https://fedoraproject.org/wiki/Workstation/Workstation_PRD ), and since it seems that container-based deployment is where the industry is moving (based on the attendance of various meetups and events I have been to over the last 4 years), I think that offering easy access to such workflows would be aligned with the goals of Fedora Workstation. And since Kubernetes seems to be the big winner in that space, I think it should be the system used.
Now, Kubernetes does not yet seem to offer a workflow to build containers, even if I heard that is supposed to happen. But that's one of the features OKD provides, and the one we would need, hence the proposal to use it.
The biggest issue I see with this approach is that OKD is not laptop friendly. I did deploy it, and it took 10% CPU just because the various servers were chatting with each other to check they were ok. While I am all in favor of self-care and being social, even for Go software, this is a no-go as far as my laptop is concerned.
Another issue is that "preinstalled" currently means "shipped as a rpm", and that would require a lot of work. Or we would need a way to start it from a container, and that's also some work.
But in the end, I think we could solve multiple problems without reinventing the wheel, by reusing industry standards.
So, what do people think of the idea?