CoreOS configuration with Ignition is very interesting to me. I love the idea of a single file declaring the state of the machine. I’ve used Ansible for that in the past and it works, but I prefer having the state in one declarative file. I also love the idea of using different OCI images (like ublue) that can be automatically built on Quay or GitHub. My use case is a single home NAS that I want a declarative config for. It mostly just serves SMB and NFS shares and runs a few podman containers.
What I would love is the ability to combine the Ignition file with auto-updates so CoreOS could reinstall itself with the Ignition file. AFAICT, there isn’t a way to do that on a single machine. If I want to update my Ignition file and apply the changes, I have to reinstall completely. That means either reinstalling from an ISO, which is a pain and negates some of the benefits of the CoreOS model, or setting up a PXE server that can hold my configuration, which is not trivial.
Ideally, I could run coreos-installer reinstall --ignition-url https://github.com/some/config.ign and re-apply the updated ignition file.
I can think of a few options that would let me keep using Ignition as a declarative configuration:
Set up a PXE server to use for booting.
This seems like overkill for a home NAS. I’d need a second machine (a Pi would work), I’d have to modify my DHCP server, run a PXE server and a webserver, and set up the PXE server so it automatically pulls down the most recent CoreOS images at a regular cadence. That adds a lot of failure modes and complexity, which seems counter to the entire goal of Ignition. And of course I’d want that Pi to run CoreOS itself, which puts me back at square one.
Rebase onto my own custom OCI image and use GitHub Actions to keep it up to date.
This seems pretty wasteful if all I’m adding are a few quadlet files, updating network config, etc. It makes sense if I’m making significant modifications to the system (like adding ZFS or Nvidia drivers, as ublue does), but for adding a 10-line quadlet file it’s severe overkill (see the Butane sketch after this list for the scale of change I mean).
Continue using Ansible for most modifications, only reinstalling if required (new HW, etc).
This will obviously work, but it also feels somewhat silly to reproduce what Ignition is already designed to do. It also means my Ignition file can drift out of date, and by the time I have a HW failure and need to reinstall, I’ll realize I forgot to fold some of my modifications back into my Ignition file. Then I’m back to manually troubleshooting strange config, which is exactly what CoreOS is trying to avoid.
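For a sense of scale, the kind of change I mean in the rebase option is just a small Butane files entry dropping a quadlet into place. A rough sketch of what I have in mind (the unit name, image, and port are made up):

```yaml
variant: fcos
version: 1.5.0
storage:
  files:
    # Quadlet container units live under /etc/containers/systemd/
    - path: /etc/containers/systemd/webapp.container
      mode: 0644
      contents:
        inline: |
          [Unit]
          Description=Example web container (placeholder)

          [Container]
          Image=docker.io/library/nginx:latest
          PublishPort=8080:80

          [Install]
          WantedBy=multi-user.target
```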
None of those options feel great. It feels like I’m missing something, so I’m hoping someone can shed some light on this for me. It’s possible that CoreOS just isn’t designed for a tiny home user and I’m stuck with one of these options, which is completely fair, albeit a bit disappointing.
Is anybody able to point out where I’m wrong, or a way to make this simpler?
Related links:
This GitHub issue, which talks about exactly this, but there’s no discussion of how to enable what the original poster wanted.
Matchbox - This helps with the PXE server bit, but it doesn’t handle the DHCP or TFTP servers, and it doesn’t seem to handle downloading the latest CoreOS image to boot from (but I might be wrong on that).
I’m not sure I understand all the questions, but I’ll try to be helpful with what I think I understand.
Re-provisioning from an ISO (a USB flash drive, with the Ignition file stored in a GitHub repo or on a local HTTP server) is not that difficult, at least in my experience, but why is it even necessary for the use case you’ve described? If you layer the python3 package via Ignition, you can start configuring and managing CoreOS with Ansible right after the initial boot. I’m not sure this is the recommended approach, but at least it works.
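In case it helps, one common way to layer a package from Ignition is a small first-boot systemd unit that calls rpm-ostree. A minimal Butane sketch (the unit name here is just illustrative):

```yaml
variant: fcos
version: 1.5.0
systemd:
  units:
    - name: layer-python3.service
      enabled: true
      contents: |
        [Unit]
        Description=Layer python3 with rpm-ostree (sketch)
        Wants=network-online.target
        After=network-online.target
        # Skip once python3 is already present
        ConditionPathExists=!/usr/bin/python3

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/rpm-ostree install --apply-live --allow-inactive python3

        [Install]
        WantedBy=multi-user.target
```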
It’s not necessary to use Ignition, it’s just my preferred approach. Layering python3 and using Ansible will work fine, but I’d be doing what Ignition is already (mostly) able to do. You’re right that updating a USB flash drive with a new ISO isn’t difficult, but it’s somewhat annoying. If I’m using Ansible anyway, then perhaps it doesn’t make sense to use Ignition for anything other than setting up a user and installing Python.
I don’t know if CoreOS + Ansible is the recommended approach either, but it doesn’t feel like it would be. It’s possible that there isn’t a recommended approach for small users like me because CoreOS isn’t designed for my use case (very small number of servers without supporting infrastructure).
I’m not sure what happened there, it certainly wasn’t intentional. Maybe it’s because I was editing the post while its category was being changed?
Fedora CoreOS is container-focused, designed primarily for clusters and optimized for Kubernetes nodes. Ignition runs very early in the boot process, and one of its main purposes is to reduce the boot time. However, FCOS is perfectly capable of operating standalone, regardless of scale or workload.
There is a recently introduced framework, very aptly named Pyromaniac, built around Fedora CoreOS, Butane and Jinja, which may be of interest to you and perhaps useful for your purpose.
Thanks for the link Hristo, that looks like a neat project I hadn’t seen before. At first glance it seems to help with a few of my concerns.
That’s what I’ve gathered, which is the reason for my topic. On the desktop side, Silverblue ~= Workstation but immutable. On the server side however, there doesn’t seem to be any immutable equivalent to Fedora Server. CoreOS will clearly work, but it’s optimized for clusters and doesn’t seem like it’s intended to be administered manually. IoT exists and seems maybe closer (??) to “Fedora Server” but has different tradeoffs and also doesn’t seem optimized for “normal” servers. It’s not even clear if there’s any official documentation describing the difference between CoreOS and IoT; I’ve only seen this.
I can make either CoreOS or IoT work, but considering Fedora’s push for immutable (which is a great push!), it seems surprising to me that there is no recommended workflow between the two extremes of Kubernetes cluster and IoT device.
Thought I’d share my experience, since I use CoreOS as a home server. It may not be applicable to you, but it may be useful context.
I use Ignition to bootstrap disk/storage configuration and the hostname. If I had network requirements, I would likely also put those in Ignition. Lastly, Ignition triggers a rebase-and-reboot onto my custom OCI image.
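A rough Butane sketch of how such a first-boot rebase-and-reboot can be wired up (the registry path and marker file are placeholders, not copied from my config):

```yaml
variant: fcos
version: 1.5.0
systemd:
  units:
    - name: rebase-to-custom-image.service
      enabled: true
      contents: |
        [Unit]
        Description=Rebase to custom OCI image on first boot (sketch)
        Wants=network-online.target
        After=network-online.target
        # Only run until the rebase has happened once
        ConditionPathExists=!/var/lib/rebase-done

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/rpm-ostree rebase ostree-unverified-registry:ghcr.io/example/my-nas:stable
        ExecStart=/usr/bin/touch /var/lib/rebase-done
        ExecStart=/usr/bin/systemctl reboot

        [Install]
        WantedBy=multi-user.target
```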
The OCI image includes all my application configuration, including:
container configs via podman-quadlets
system packages, e.g. VPN packages
system configs, e.g. package configs
The custom OCI-image essentially replaces any need for Ansible. After first-boot/ignition, I use the OCI-image to manage my home-server. In the future, if I wanted to add an additional service/application to my home-server, I would:
create the configs
add configs to my Containerfile
let pipeline rebuild my Container
pull the update and reboot, in one of two ways:
wait for my CoreOS box to pull the update and reboot (set on a weekly basis) or,
SSH to my CoreOS box, and run rpm-ostree upgrade --reboot
As I already alluded to, I have a pipeline which builds my OCI image and hosts it. It runs on a weekly basis so that any new updates from the CoreOS stable stream are inherited. Then my CoreOS box checks once a week for updates from my registry, and if there is an update, it performs the upgrade on Tuesdays at 2am. Essentially, the box is auto-updating and relatively low maintenance.
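A weekly check like that can be expressed as a plain systemd timer pair; a rough Butane sketch of the idea (unit names here are just illustrative, not taken from my repo):

```yaml
variant: fcos
version: 1.5.0
systemd:
  units:
    - name: weekly-upgrade.service
      contents: |
        [Unit]
        Description=Pull the latest deployment and reboot into it (sketch)

        [Service]
        Type=oneshot
        # Fetch any new deployment from the registry and reboot to apply it
        ExecStart=/usr/bin/rpm-ostree upgrade --reboot
    - name: weekly-upgrade.timer
      enabled: true
      contents: |
        [Unit]
        Description=Weekly update check (sketch)

        [Timer]
        OnCalendar=Tue *-*-* 02:00:00
        Persistent=true

        [Install]
        WantedBy=timers.target
```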
As a reference, the configuration for my home-server is here:
The optimizations in FCOS basically amount to being able to provision as quickly as possible in cloud infrastructure, which essentially means some kind of virtual machine. Once provisioned, FCOS isn’t much different from a traditional server edition, except that it’s obviously OSTree-based. How it is managed or configured after that is up to the administrator. The same is true for IoT.
If you wish to have Python right out of the box, you’ll probably choose IoT. If you need Docker and containerd in the base image, you will most likely want FCOS. It’s more a matter of preference in my opinion.
@barn 's approach is great and I also use a similar one, including for Silverblue.
Thanks @barn and @hricky, that’s what I do with my Silverblue install as well, and I can do it for my server too. I’ll stick with that method unless something better comes along.