Install application from ignition

I read in this discussion that it is not recommended to install any binaries into the base system; but then what is the best way to install an orchestration service such as Docker Swarm? Obviously that cannot be run from a container.

And if the best way to install it is using rpm-ostree, then how is that achieved via ignition? Can someone provide a sample ignition excerpt that would install docker-compose or something?

One more thing: what is the best way to modify existing files via Ignition? For instance, if one wants to use Docker Swarm, one would have to disable --live-user in the file /etc/sysconfig/docker. What is the best way to do that with Ignition? Just replace the entire file?

As for your specific questions, I am of no help, but in general here is my experience with customization.

Ignition is nice, but it isn’t a full-blown orchestration tool. It is really just for automating basic admin tasks. Getting too fancy with Ignition is a real headache: when a config fails during testing, it produces cryptic error messages and can prevent your system from booting.

My advice is to stick to the basics with Ignition, such as adding/overwriting/appending configs, enabling or disabling services, and adding users and filesystems.
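To illustrate those basics, here is a minimal fcct sketch covering a user, a config file, and a service. The SSH key, file path, and service name are all placeholders; the unit being enabled is assumed to already exist on the host.

```yaml
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...   # placeholder public key
storage:
  files:
    - path: /etc/example.conf   # hypothetical config file
      mode: 0644
      contents:
        inline: |
          EXAMPLE_OPTION=1
systemd:
  units:
    - name: example.service     # hypothetical unit shipped by a package
      enabled: true
```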

I deploy FCOS in VMware and use a combination of Ignition and Packer to configure, customize, and then create a new OVA with our modifications. Packer can do the heavy lifting that Ignition cannot. You can probably use the same process to launch, customize, and save a new image of the OS in your environment.

If that is not feasible, then you will have to turn to a tool like Ansible to customize your OS post-launch.

In general we do recommend running things in containers. However, if you do have a statically compiled binary, that is something you can easily deliver via Ignition. You’ll just have to remember to update it when you want it updated.
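As a sketch of delivering a binary via Ignition, an fcct config can fetch it from a remote URL into /usr/local/bin. The binary name, URL, and hash below are placeholders, not real artifacts:

```yaml
variant: fcos
version: 1.0.0
storage:
  files:
    - path: /usr/local/bin/mytool       # hypothetical binary name
      mode: 0755
      contents:
        source: https://example.com/mytool   # placeholder URL
        verification:
          hash: sha512-0123...               # placeholder digest; optional but recommended
```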

IMHO this is generally fine for statically compiled binaries, but not advised for dynamically linked binaries that depend on shared libraries from the host. The reason is that you are then depending on shared libraries delivered by the OS, which could easily change in an automatic update. We want the updates to be reliable and not break your application/use cases.

So what if you have something that’s not a static binary and not easy to containerize? This is currently a problem we’re facing in the community. There is an open RFE for “reliable extensions”. Please join the discussion there.

You can use package layering (you’d have to write a config that creates a systemd unit that calls rpm-ostree install xyz --reboot), but this can make your updates less reliable (see coreos/fedora-coreos-tracker#400).
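A sketch of what such a config could look like, using docker-compose as the example package. The unit name and the ConditionPathExists guard (to avoid re-running once the package is layered) are my own choices, not a prescribed pattern:

```yaml
variant: fcos
version: 1.0.0
systemd:
  units:
    - name: rpm-ostree-install-docker-compose.service  # hypothetical unit name
      enabled: true
      contents: |
        [Unit]
        Description=Layer docker-compose with rpm-ostree
        Wants=network-online.target
        After=network-online.target
        # Skip if the package has already been layered
        ConditionPathExists=!/usr/bin/docker-compose

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/rpm-ostree install --reboot docker-compose

        [Install]
        WantedBy=multi-user.target
```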

Do you mean --live-restore, not --live-user ? I assume so.

From looking at the systemd unit for docker.service, it does look like the only way is to replace /etc/sysconfig/docker. You can replace the file by using a storage section snippet in your fcct config like:

variant: fcos
version: 1.0.0
storage:
  files:
    - path: /etc/sysconfig/docker
      mode: 0644
      overwrite: true
      contents:
        inline: |
          OPTIONS="--selinux-enabled \
            --log-driver=journald \
            --default-ulimit nofile=1024:1024 \
            --init-path /usr/libexec/docker/docker-init \
            --userland-proxy-path /usr/libexec/docker/docker-proxy"

Once you bring the system up you can see Live Restore Enabled: false:

[core@localhost ~]$ sudo docker info | tail -n 2
Live Restore Enabled: false

NOTE: This means you now own the docker daemon startup configuration, so you’ll have to watch for changes to the docker daemon defaults to see if you need to update your options. If you’d like to make this more configurable, you could open a PR against the moby-engine package in Fedora: