Hi All,
I’ve been working on a host provisioning strategy based on OCI artifacts that I refer to as OHP, which I’ve begun documenting here: GitHub - afflom/ohp: OCI Host Provisioning
The biggest difference between these referenced topics, as far as I can see, is that the OSTree Native Container is an OCI image, which would require extraction from its OCI format prior to provisioning/installation. That might be a benefit, because traditional container image workflows/tooling already support that strategy. OHP, on the other hand, leverages OCI artifacts that store the layered contents in their original (raw) format, directly consumable by UEFI chainloading, which makes OHP reliant on an ORAS-based builder/client.
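To make that concrete, here is a minimal sketch of the kind of manifest an ORAS-based client could push for this: an ordinary OCI image manifest whose layers are the raw boot payloads themselves rather than filesystem tarballs. The media types, artifact type, file names, and digests below are hypothetical illustrations, not the actual OHP schema.

```go
// Illustrative only: an OCI manifest whose layers are raw boot payloads,
// so a chainloader can fetch them from the registry without unpacking.
package main

import (
	"encoding/json"
	"fmt"
)

// Descriptor mirrors the shape of an OCI content descriptor.
type Descriptor struct {
	MediaType   string            `json:"mediaType"`
	Digest      string            `json:"digest"`
	Size        int64             `json:"size"`
	Annotations map[string]string `json:"annotations,omitempty"`
}

// Manifest mirrors the shape of an OCI image manifest.
type Manifest struct {
	SchemaVersion int          `json:"schemaVersion"`
	MediaType     string       `json:"mediaType"`
	ArtifactType  string       `json:"artifactType,omitempty"`
	Config        Descriptor   `json:"config"`
	Layers        []Descriptor `json:"layers"`
}

func main() {
	m := Manifest{
		SchemaVersion: 2,
		MediaType:     "application/vnd.oci.image.manifest.v1+json",
		ArtifactType:  "application/vnd.example.ohp.v1", // hypothetical
		Config: Descriptor{
			MediaType: "application/vnd.example.ohp.config.v1+json", // hypothetical
			Digest:    "sha256:<config-digest>",                     // placeholder
			Size:      0,
		},
		Layers: []Descriptor{
			{
				// The kernel is stored as-is in the registry blob store.
				MediaType:   "application/vnd.example.kernel", // hypothetical
				Digest:      "sha256:<kernel-digest>",         // placeholder
				Size:        12345678,
				Annotations: map[string]string{"org.opencontainers.image.title": "vmlinuz"},
			},
			{
				MediaType:   "application/vnd.example.initramfs", // hypothetical
				Digest:      "sha256:<initramfs-digest>",         // placeholder
				Size:        87654321,
				Annotations: map[string]string{"org.opencontainers.image.title": "initramfs.img"},
			},
		},
	}
	out, _ := json.MarshalIndent(m, "", "  ")
	fmt.Println(string(out))
}
```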
Any discourse/feedback on the subject is welcome and much appreciated.
Offhand, it seems like the way these projects could be chained together is that one would follow something like the steps in Applying a new package using CoreOS layering - coreos/rpm-ostree to make a new CoreOS-derived container image with custom in-OS content. You’d push that to a registry, then craft an Ignition config (or write it via Butane sugar) that switches to your custom OS container image on firstboot. To be hermetic, that image should perhaps be pulled via @sha256 digest?
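For the digest pin, here is a minimal sketch of resolving a tag to the immutable @sha256 reference that the firstboot switch would use. It assumes a registry reachable without a bearer-token handshake and a hypothetical repository name; real registries usually require an auth round trip first.

```go
// Sketch: resolve a tag to its manifest digest via the OCI distribution API
// (HEAD /v2/<name>/manifests/<tag>, reading the Docker-Content-Digest header).
package main

import (
	"fmt"
	"net/http"
)

func resolveDigest(registry, repo, tag string) (string, error) {
	url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repo, tag)
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		return "", err
	}
	// Accept both OCI and Docker manifest types so the registry answers
	// regardless of which format the image was pushed in.
	req.Header.Set("Accept",
		"application/vnd.oci.image.manifest.v1+json, "+
			"application/vnd.docker.distribution.manifest.v2+json, "+
			"application/vnd.oci.image.index.v1+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status %s", resp.Status)
	}
	return resp.Header.Get("Docker-Content-Digest"), nil
}

func main() {
	// Hypothetical custom layered image built per the CoreOS layering docs.
	digest, err := resolveDigest("registry.example.com", "example/my-custom-fcos", "stable")
	if err != nil {
		panic(err)
	}
	// This pinned reference is what the firstboot switch would point at.
	fmt.Printf("registry.example.com/example/my-custom-fcos@%s\n", digest)
}
```

The resulting `repo@sha256:…` string is what the Butane/Ignition config would reference, so the provisioned host always gets exactly the image that was tested.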
That Ignition config then gets embedded as currently proposed in OHP. These things are orthogonal; what we gain by combining them is that one can now customize the OS, and the customized OS state is captured along with its “provisioning prerequisites” as containers (or really as OCI artifacts, as you propose).
Does this all seem right? BTW, how are you seeing support for OCI artifacts today? I guess we can probably design new things today that require it, but I worry we’d at least need fallbacks to ship via OCI containers, not artifacts in some cases.
The intersection between these two gets stronger if we support taking an FCOS-derived container image and wrapping it in an ISO, AMI, etc. We have all that code in coreos-assembler today. If we do that, then the user has created fully custom disk images, and their OHP configs reference them.
A random question…is this mainly focused on PXE? And/or is this project mainly trying to define a standard schema for the provisioning chain, with a separate tool handling the “rendering” of it for a given infrastructure? Would it ever be in scope to e.g. have an AMI/Azure/OpenStack/etc. image reference in this chain?
Thank you for your thoughtful responses and feedback. I deliberately delayed my reply so that I could review your references and absorb the content of your posts.
I hadn’t seen the CoreOS rehydrator prior to this, but its goal of deduplicating content is an implicit goal of OHP, and it looks like the mechanism to achieve that optimization would be similar in both implementations. I can see an OHP-based rehydrator that leverages OCI artifacts instead of OCI images. The downside is that it would require a client other than an OCI runtime (podman), but such a client is already present in the rehydrator image, so it would simply replace podman in that scenario. The advantage is that the referenced blobs would be stored in the registry in their raw/original format, enabling network chainloading without any pre/post-processing. The rehydration would be queued via OCI manifest annotations.

The beauty of this fusion, as you have alluded to, is that a user can author a custom Ignition config and then push an OCI artifact that contains the boot instruction and that custom Ignition config, referencing the “officially” released CoreOS OCI artifact.
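As a rough illustration of that annotation-driven rehydration step, here is a sketch that walks an OCI manifest’s layers and stages any blobs marked for network boot. The annotation key, struct shapes, and file handling are hypothetical, not the actual OHP or rehydrator schema.

```go
// Sketch: queue rehydration work from OCI manifest annotations. Layers
// carrying the (hypothetical) boot-role annotation would be pulled as-is
// and placed under the HTTP/TFTP root that the chainloader fetches from.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type descriptor struct {
	MediaType   string            `json:"mediaType"`
	Digest      string            `json:"digest"`
	Size        int64             `json:"size"`
	Annotations map[string]string `json:"annotations,omitempty"`
}

type manifest struct {
	Layers      []descriptor      `json:"layers"`
	Annotations map[string]string `json:"annotations,omitempty"`
}

// Hypothetical annotation marking a layer as a raw boot payload to stage.
const bootRoleKey = "org.example.ohp.boot-role"

func main() {
	raw, err := os.ReadFile("manifest.json") // fetched from the registry beforehand
	if err != nil {
		panic(err)
	}
	var m manifest
	if err := json.Unmarshal(raw, &m); err != nil {
		panic(err)
	}
	for _, layer := range m.Layers {
		role, ok := layer.Annotations[bootRoleKey]
		if !ok {
			continue // not a boot payload; nothing to stage
		}
		// A real rehydrator would pull the blob here (e.g. with an ORAS-based
		// client) and write it, unmodified, under the boot-server root.
		fmt.Printf("stage %s (%s, %d bytes) as %s\n",
			layer.Digest, layer.MediaType, layer.Size, role)
	}
}
```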
You are correct in assuming that I wrote OHP with network booting in mind, but my goal was to describe a general OS provisioning standard based on OCI artifacts that is portable between baremetal and cloud/virt use cases.
I agree with you that fallbacks will be required, especially early in adoption. But I see this as an opportunity to position CoreOS to mature its approach to leveraging OCI artifacts, which will converge with the mass adoption of OCI artifacts by the (cloud-native) community.