OCI-based host provisioning (baremetal/virt)

Hi All,
I’ve been working on a host provisioning strategy based on OCI artifacts that I refer to as OHP, which I’ve begun documenting here: GitHub - afflom/ohp: OCI Host Provisioning

I’ve also recently become aware of Changes/OstreeNativeContainer - Fedora Project Wiki and I’m curious if/how these concepts can intersect.

The biggest difference I can see between these two approaches is that the OSTree Native Container is an OCI image, which would require extraction from its OCI format prior to provisioning/installation. That might be a benefit, because traditional container image workflows/tooling would support the strategy. OHP, by contrast, leverages OCI artifacts that store the layered contents in their original (raw) format (consumable by UEFI chainloading), which makes OHP reliant on an ORAS-based builder/client.
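For illustration only, pushing raw boot assets as an OCI artifact with the ORAS CLI might look something like this (the registry, repository, tag, file names, and media types are made up for the example):

```
# Hypothetical example: push a kernel and initramfs as raw blobs of an
# OCI artifact. The registry stores them byte-for-byte, so a network
# bootloader could fetch and chainload them without unpacking an image.
oras push registry.example.com/ohp/fcos-boot:36 \
  vmlinuz:application/vnd.example.kernel.v1 \
  initramfs.img:application/vnd.example.initramfs.v1

# Retrieving them later needs only an ORAS-style client, not podman:
oras pull registry.example.com/ohp/fcos-boot:36
```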

Any discourse/feedback on the subject is welcome and much appreciated.

This is really cool! Thanks so much for posting this here.

There’s another intersection with GitHub - cgwalters/coreos-diskimage-rehydrator: Part of implementing https://github.com/openshift/enhancements/pull/201 - did you see that? I’d set that project aside in favor of the ostree-native-container work, but they are definitely all related.

Offhand, it seems like the way these projects could be chained together is that one would follow something like the steps in Applying a new package using CoreOS layering - coreos/rpm-ostree to make a new CoreOS-derived container image with custom in-OS content. You’d push that to a registry, then craft an Ignition config (or write it via Butane sugar) that switches to your custom OS container image on firstboot. To be hermetic, that image should perhaps be pulled via @sha256 digest?
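To make that concrete, here is a minimal Butane sketch of what that firstboot switch could look like, assuming a systemd unit that rebases via rpm-ostree; the unit name, image reference, and digest are placeholders, not anything that exists today:

```yaml
# Butane config (variant fcos); convert to an Ignition file with the
# `butane` tool. Sketch only - names and references are placeholders.
variant: fcos
version: 1.4.0
systemd:
  units:
    # Hypothetical unit: rebase to a custom image pinned by digest on
    # first boot, then reboot into it.
    - name: rebase-to-custom-os.service
      enabled: true
      contents: |
        [Unit]
        Description=Rebase to digest-pinned custom OS image on first boot
        ConditionFirstBoot=yes
        After=network-online.target
        Wants=network-online.target

        [Service]
        Type=oneshot
        # Placeholder image reference and digest
        ExecStart=/usr/bin/rpm-ostree rebase ostree-unverified-registry:quay.io/example/custom-fcos@sha256:<digest>
        ExecStart=/usr/bin/systemctl --no-block reboot

        [Install]
        WantedBy=multi-user.target
```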

That Ignition config then gets embedded in the provisioning chain, as OHP proposes today. These things are orthogonal - what we gain by combining them is that one can now customize the OS, and the customized OS state is captured along with its “provisioning prerequisites” as containers (or really OCI artifacts, as you propose).

There’s also a strong intersection with a discussion around shipping the FCOS stream metadata as a container/OCI itself. That came up in container-native CoreOS release engineering · Issue #828 · coreos/fedora-coreos-tracker · GitHub I think.

Does this all seem right? BTW, how are you seeing support for OCI artifacts today? I guess we can probably design new things today that require it, but I worry we’d at least need fallbacks to ship via OCI containers, not artifacts in some cases.

The intersection between these two gets stronger if we support taking an FCOS-derived container image and wrapping it in an ISO, AMI, etc. We have all that code in coreos-assembler today. If we do that, then the user has created fully custom disk images, and their OHP configs reference them.

A random question…is this mainly focused on PXE? And/or is this project mainly trying to define a standard schema for the provisioning chain, and then a separate tool handles “rendering” it for a given infrastructure? Would it ever be in scope to e.g. have an AMI/Azure/OpenStack/etc image reference in this chain?

Thank you for your thoughtful responses and feedback. I deliberately delayed my response so that I could review your references and absorb the content of your posts.

I hadn’t seen the CoreOS rehydrator before this, but its goal of deduplicating content is an implicit goal of OHP, and it looks like the mechanism for achieving that optimization would be similar in both implementations. I can see an OHP-based rehydrator that leverages OCI artifacts instead of OCI images. The downside is that it would require a client other than an OCI runtime (podman), but that client is already present in the rehydrator image, so it would simply replace podman in that scenario. The advantage is that the referenced blobs would be stored in the registry in their raw/original format, enabling network chainloading without any pre/post-processing. The rehydration would be queued via OCI manifest annotations.

The beauty of this fusion, as you have alluded to, is that a user can author a custom Ignition config and then push an OCI artifact that contains the boot instructions and that custom Ignition config, referencing the “officially” released CoreOS OCI artifact.
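Purely as an illustration of that last point, an OHP-style artifact manifest could carry its blobs in raw form and describe them via annotations along these lines. The media types, annotation keys, digests, and sizes below are all invented for the example; only org.opencontainers.image.title is a standard annotation key:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.example.ohp.config.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 233
  },
  "layers": [
    {
      "mediaType": "application/vnd.example.kernel.v1",
      "digest": "sha256:<kernel-digest>",
      "size": 11968512,
      "annotations": {
        "org.opencontainers.image.title": "vmlinuz",
        "com.example.ohp.role": "kernel"
      }
    },
    {
      "mediaType": "application/vnd.example.ignition.v1+json",
      "digest": "sha256:<ignition-digest>",
      "size": 1024,
      "annotations": {
        "org.opencontainers.image.title": "config.ign",
        "com.example.ohp.role": "ignition"
      }
    }
  ]
}
```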

You are correct in assuming that I wrote OHP with network booting in mind, but my goal was to describe a general OS provisioning standard based on OCI artifacts that is portable between baremetal and cloud/virt use cases.


I just realized that I didn’t answer your question pertaining to OCI Artifacts:

The idea of using OCI artifacts is fairly old (New OCI Artifacts Project - Open Container Initiative), but a formal standard has yet to be published (GitHub - opencontainers/artifacts: OCI Artifacts). It sounds like OCI is willing to adopt the spec as published by ORAS once the oras/artifacts-spec comes out of draft: Release 1.0.0-draft.1.1 · oras-project/artifacts-spec · GitHub. In the absence of an official spec, several prominent projects are leveraging OCI artifacts now (Network Dependents · oras-project/oras · GitHub - note: don’t let that list fool you; most of those projects are listed because they import Helm packages). Those projects include Helm (Helm | Registries), WASM (GitHub - engineerd/wasm-to-oci: Use OCI registries to distribute Wasm modules), Tinkerbell (GitHub - tinkerbell/hub: Hub contains reusable actions. It generates the manifest file used by ArtifactHub.), and OPA (GitHub - open-policy-agent/conftest: Write tests against structured configuration data using the Open Policy Agent Rego query language). There is even precedent for supporting OCI artifacts in Quay (Quay OCI Artifact Support for Helm Charts).

I agree with you that fallbacks will be required, especially early in adoption. But I see this as an opportunity to position CoreOS to mature its approach to leveraging OCI artifacts, which will converge with the broader (cloud-native) community’s mass adoption of them.
