Feature for custom rpm-ostree container-native builds?

I started playing around with the container-native version of Kinoite and am very excited about the possibilities of this change to rpm-ostree. I am particularly excited about the prospect of easily creating custom images, layering in custom configurations and packages on top of the official OS container image.

As a simple example, I’ve written a custom Containerfile that uses the base Quay.io Kinoite 37 image, RUNs rpm-ostree install to add some extra packages, and ADDs some configuration files to the image. I’ve pushed this custom image to my own quay.io repository and have successfully rebased to it.
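Roughly, the Containerfile looks like this sketch (the base image path, package names, and config file are illustrative placeholders, not my exact ones):

```dockerfile
# Sketch of a derived Kinoite image; names are placeholders.
FROM quay.io/fedora-ostree-desktops/kinoite:37

# Layer extra packages on top of the base image
RUN rpm-ostree install distrobox fish && \
    ostree container commit

# Ship a custom configuration file in the image
ADD custom.conf /etc/custom.conf
```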

The trouble I’m finding is how to trigger a new custom build when a new official image is published. I’m wondering if this kind of use case is going to be considered, and if so will there be (or is there already) a way to trigger these kinds of custom builds?

I suppose this will be most (or only) beneficial in circumstances where Silverblue/Kinoite is going to be installed on a large number of machines that need the same configuration. I can see how this would be a pretty niche use case.

2 Likes

Hello

You can just write a script that pulls the latest image, builds your Containerfile, and pushes the resulting image to your repository, then put it in cron to run, e.g., once a day or once a week, depending on how often you prefer to update your system. The infrastructure that builds the base image works in the same way.
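For example, something like this sketch (image names are placeholders for your own repositories; requires podman):

```shell
#!/usr/bin/env bash
# rebuild-kinoite.sh -- sketch of the pull/build/push loop described above.
set -eu

podman pull quay.io/fedora-ostree-desktops/kinoite:37   # latest base image
podman build -t quay.io/youruser/custom-kinoite:37 .    # your Containerfile
podman push quay.io/youruser/custom-kinoite:37          # publish the result

# Then schedule it, e.g. in crontab:
#   0 6 * * 0   /usr/local/bin/rebuild-kinoite.sh   # weekly, Sunday 06:00
```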

These images are interesting, but I prefer to customize my system using package layering; it still works and is less complex.

1 Like

You’d probably want to set up something like a Jenkins server somewhere to monitor the base image, and then rebuild your image and push it to Quay whenever the base image updates. You could also look at GitLab CI/CD or GitHub Actions to do this for you.

1 Like

Yes indeed, and I’ve written the script to do it already. I was hoping there would be a way to trigger it though without resorting to a cron/timer and instead base it off of whenever the base image is updated. I just have the feeling that schedules are less optimal/efficient, though I suppose it would work fine.
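The “rebuild only when the base image changed” check could be sketched with skopeo, something like this (image name and state file are placeholders):

```shell
#!/bin/bash
# Sketch: rebuild only when the base image digest changes.
set -eu

# Fetch the current manifest digest of an image (requires skopeo and jq).
digest_of() {
    skopeo inspect "docker://$1" | jq -r .Digest
}

# Compare against the digest recorded at the last build.
needs_rebuild() {
    local current="$1" recorded="$2"
    [ "$current" != "$recorded" ]
}

# Usage (placeholders):
#   base=quay.io/fedora-ostree-desktops/kinoite:37
#   if needs_rebuild "$(digest_of "$base")" "$(cat /var/tmp/base.digest)"; then
#       podman build ... && podman push ...
#   fi
```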

Package layering works, of course. But I have three Kinoite machines currently, they all have pretty much the same configuration, and I like to keep them more or less in sync. I was thinking that with custom images I could make a change in one place and have it propagate automatically to all machines with the next rpm-ostree update.

I can also see the usefulness of this if Silverblue/Kinoite becomes popular for institutional/educational deployments where systems need to be synchronized. Having to run an rpm-ostree command on every machine would be a big undertaking when there are potentially hundreds of machines involved.

2 Likes

Thanks for sharing your experience. Do you mind sharing your Containerfile somewhere? I’m planning on doing this myself with a GitHub Action to see how it goes!

1 Like

I have the containerfile available in a GitHub repo (link below). It also has a mechanism for triggering new builds in quay.io. Although it works, it’s a real hack job and would really benefit from an “official” build trigger.

1 Like

@jlebon blazed a trail with a “pet” toolbox container that is updated every week via GitHub Actions - GitHub - jlebon/pet: Pet container for hacking on CoreOS

I took that idea of updating the container regularly and made a custom FCOS build that uses ostree native containers to layer an Image Builder install on top. The image is updated weekly and pushed to quay.io.

1 Like

The more I play with this, the more I am thinking in the opposite direction: it commoditizes custom builds!

I had been struggling with getting ostree-pitti-workstation going because I had never used anything like that before. Now I can reuse all my existing cloud knowledge (and existing documentation!) all the way to my client. My image built on the first try and I haven’t written a dockerfile in years. I’m looking forward to seeing how people customize stuff.

2 Likes

Follow on question for the folks working on this (If I should split this into a new topic please let me know):

I’m thinking if someone did a:

FROM silverblue:37 
RUN "install rpmfusion and nvidia drivers"

And then just published the image, would that work the way I expect? Am I correct in assuming that any skew between the nvidia driver and the kernel would just result in a failed CI build, so none of it would ever hit my client, which would simply stay on the last working build?

Also are there any limitations to what I can put in there? What happens if I slurp in a random ansible “set up my desktop” playbook? (Sorry been on the road with spotty internet so haven’t had a chance to go as deep as I’d like just yet!)
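Concretely, I imagine something like this sketch (the base image path, RPM Fusion release URLs, and package names are guesses on my part, untested):

```dockerfile
# Hypothetical Silverblue derivative with RPM Fusion and NVIDIA drivers.
FROM quay.io/fedora-ostree-desktops/silverblue:37

RUN rpm-ostree install \
      https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-37.noarch.rpm \
      https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-37.noarch.rpm && \
    rpm-ostree install akmod-nvidia xorg-x11-drv-nvidia && \
    ostree container commit
```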

The official images produced by the Fedora infra are time based: they are composed every day.

The temporary CI that I have set up is also time based.

1 Like

That should work. You would also probably want to include the fedora archive repo to get older kernels to keep NVIDIA drivers working.

That would get you an updated system, except for the kernel.

1 Like

See also the examples in GitHub - coreos/layering-examples, which should apply here as well.

1 Like

But not every input (mostly RPMs) to those images changes every day. rpm-ostree compose image (what we are/should be using to generate the base images) has correct change detection.

1 Like

Yes! That’s really a big part of the idea. See https://github.com/coreos/layering-examples/tree/main/build-zfs-module which is an example of building custom kernel drivers. Or, one could directly install pre-built drivers too!

If you can dream of it, we’ll try to support it :wink:

1 Like

Worth passing along that @walters is maintaining a Silverblue container image via this repo GitHub - cgwalters/sync-fedora-ostree-containers

Anyone interested in deriving from Silverblue can do FROM ghcr.io/cgwalters/fedora-silverblue:37

I hope that we start publishing more “official” Silverblue container images on the Fedora registry or the like.

1 Like

Right, that’s Issue #11047: generate ostree containers for silverblue, iot - releng - Pagure.io and I did PR#11120: Preparatory cleanup bits for work on https://pagure.io/releng/issue/11047 - releng - Pagure.io to try to get some momentum here, but…no review after a month :cry:

1 Like

Thanks everyone for the links, here’s my workflow file if it helps someone else, I was looking to just keep everything in github:
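A minimal version of such a workflow might look like this sketch (registry, image, and secret names are placeholders; podman is preinstalled on GitHub’s ubuntu-latest runners):

```yaml
# Hypothetical GitHub Actions workflow; adjust names to your own repos.
name: build-custom-kinoite
on:
  schedule:
    - cron: "0 6 * * *"   # daily; adjust to taste
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Log in to quay.io
        run: podman login -u "${{ secrets.QUAY_USER }}" -p "${{ secrets.QUAY_TOKEN }}" quay.io
      - name: Build
        run: podman build -t quay.io/youruser/custom-kinoite:37 .
      - name: Push
        run: podman push quay.io/youruser/custom-kinoite:37
```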

3 Likes

Inspired by Jorge’s stuff, I also tried this and it works nicely (and this from a guy who has never even touched Docker).

One thing I am having problems with is using a specific “custom kernel” for my Asus laptop.

For this specific model, to get all the “bells and whistles” working I need to use this kernel: the lukenukem/asus-kernel Copr, until 6.1 lands in the near future.

I have been looking at the layering example for replacing the kernel. I can get the image to build fine on GitHub, but after an ostree update I get stuck at a black screen during boot.

I have used these commands previously to replace the kernel in “normal” Silverblue:

sudo wget https://copr.fedorainfracloud.org/coprs/lukenukem/asus-kernel/repo/fedora-37/lukenukem-asus-kernel-fedora-37.repo
 
sudo rpm-ostree override replace --experimental --from repo=copr:copr.fedorainfracloud.org:lukenukem:asus-kernel kernel kernel-core kernel-modules kernel-modules-extra
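Translated into a Containerfile build step, the same replacement might be attempted like this sketch (base image path is an assumption; this is the step that ends in the black screen for me):

```dockerfile
# Sketch: attempt the same kernel replacement inside a container build.
FROM quay.io/fedora-ostree-desktops/silverblue:37

RUN curl -L -o /etc/yum.repos.d/asus-kernel.repo \
      https://copr.fedorainfracloud.org/coprs/lukenukem/asus-kernel/repo/fedora-37/lukenukem-asus-kernel-fedora-37.repo && \
    rpm-ostree override replace --experimental \
      --from repo=copr:copr.fedorainfracloud.org:lukenukem:asus-kernel \
      kernel kernel-core kernel-modules kernel-modules-extra && \
    ostree container commit
```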
1 Like

Looks like it’s due to this issue:
https://github.com/coreos/rpm-ostree/issues/4190

2 Likes

akdev posted this in the Fedora Discord for doing nvidia drivers: