Setup for running apps in containers on a virtual machine hosted on a home physical server

Hello everyone! :wave: :wave:

I divided the post into two parts:

  1. overall context (totally optional, but I feel it may be interesting/inspiring for some people),
  2. the specific question that I have at the moment (feel free to jump straight to this section).

1. The context:

I am planning to run a small server at home with a few independent applications for my personal usage.

I believe in keeping strict security boundaries in the software as an extra layer of defense in case something is compromised. This is why:

  • I would like to keep the applications isolated from each other. I think a Type-2 hypervisor would provide a satisfactory level of isolation here.
  • in case an application consists of several components (such as an HTTP server, a database, etc.), I would also like to isolate them from each other. Here I think container-based virtualization will be satisfactory.

As for specific software, I would like to install Proxmox VE on the server. It will be used only for:

  • the management of virtual machines,
  • and their backups.

Proxmox VE is based on Debian stable, which I perceive as a good (good enough for me) base for a secure OS due to its relatively old (so very well tested) packages. This is especially important in the case of the “host” operating system (as compromising it would effectively compromise all the apps).

I am planning to run the specific applications inside Proxmox VE’s QEMU/KVM virtual machines.

Here is the same on a diagram for visual learners:

[diagram: physical server → Proxmox VE → QEMU/KVM virtual machines → containers with the application components]

I would like to choose a guest operating system (an operating system for virtual machines) on which I will run containerized applications. I would also like to choose a method by which I will manage these containers.

Requirements relevant in the context of this post:

  • I’d like to keep the operating system and the containers with their components up to date (to fix security vulnerabilities and bugs as soon as possible). In practice, it means the upgrades have to happen automatically.
    Note: I accept rare failures caused by automatic upgrades (as I believe I will be able to maintain easily recoverable backups of the VMs at the Proxmox VE level),
  • I would also like to keep “infrastructure as code”, meaning I would like to keep track of the whole configuration. In practice, I want to store everything that is needed to provision the guest OS (with containers) from scratch in a version control system.

So far, I was considering using Ansible scripts to provision Debian with containers managed by Docker Compose, plus a small custom script automating regular system and container updates.
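
For illustration, the “small custom script” part could be roughly as simple as the sketch below (assuming Docker Compose v2 and a compose project living under /opt/myapp - both placeholders of mine), triggered daily by cron or a systemd timer:

#!/bin/bash
# Sketch only: keep Debian packages and Compose-managed containers up to date.
# Paths and the compose project location are placeholders.
set -euo pipefail

# OS updates (unattended-upgrades could handle this part instead)
apt-get update -qq
DEBIAN_FRONTEND=noninteractive apt-get -y upgrade

# Pull newer images and recreate only the containers whose image changed
cd /opt/myapp
docker compose pull
docker compose up -d --remove-orphans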

However, while doing my research, I came across Fedora CoreOS (stable stream). It seems to be a potentially great fit for my needs, as it seems to cover the requirements above.

Beyond my requirements, I like that it seems to be a minimal operating system (I perceive fewer packages as a smaller attack surface and lower resource consumption at the same time). One (acceptable) downside is that it has newer (potentially less tested) packages than Debian.

It all leads me to the preliminary conclusion that Fedora CoreOS may be a good choice for the guest OS on my server. Now I would like to understand which method of provisioning is best for me.

2. Question

I can see that Fedora CoreOS uses partially immutable storage.

At the moment, I think I only need mutable storage for:

  • volumes of the custom containers that I will run,
  • logs (alternatively, I could stream the logs somewhere else over the network - a small sketch of that follows below).
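
For the streaming alternative, I was thinking of something like systemd’s journal upload mechanism - a minimal sketch, assuming the systemd-journal-upload tooling is available on the guest (it may need to be added) and that a receiver listens at logs.example.lan (both are assumptions on my part):

# /etc/systemd/journal-upload.conf (sketch; the receiver URL is a placeholder)
[Upload]
URL=https://logs.example.lan:19532

# enabled on the guest with: systemctl enable --now systemd-journal-upload.service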

I am wondering which option I should choose:

a) Running Fedora CoreOS directly from RAM, from a boot image built using the --live-ignition flag, with:

{
  "ignition": {
    "config": {
      "replace": {
        "source": "https://raw.githubusercontent.com/.../coreos.ign"
      }
    },
    "version": "3.3.0"
  }
}

I believe it would let me re-provision the machine simply by changing the Ignition file on GitHub and rebooting the virtual machine - which seems very nice :wink:

The actual Ignition file stored on GitHub would need to define a partition for mutable storage (shared between reboots/upgrades) for the volumes associated with the custom containers I am going to run (and maybe for the logs as well).
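
As a sketch of what I have in mind, in Butane (the YAML format that the butane tool compiles into Ignition JSON); the device name /dev/vdb and the mount point are assumptions on my part - the idea being to attach a dedicated second virtual disk to the VM for the persistent data:

variant: fcos
version: 1.4.0
storage:
  disks:
    - device: /dev/vdb            # assumed second (data) virtio disk attached to the VM
      wipe_table: false
      partitions:
        - number: 1
          label: containers
  filesystems:
    - path: /var/lib/containers
      device: /dev/disk/by-partlabel/containers
      format: xfs
      wipe_filesystem: false      # keep the data across reboots/re-provisioning
      with_mount_unit: true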

Downside:

  • My understanding is that this solution does not provide automatic upgrades of the OS and containers. So, I would need to take care of automation (a script executed on Proxmox VE) that would regularly rebuild the image and restart the virtual machine - a rough sketch of what I have in mind follows below. I am not sure how often I would need to run it to keep the software fresh (once a day?).
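
Here is a rough sketch of what I imagine that automation could look like on the Proxmox VE host (untested; it assumes coreos-installer is installed there, and the VM id, pointer config path and storage name are placeholders):

#!/bin/bash
# Sketch: rebuild the customized live ISO from the latest stable FCOS release
# and reboot the VM so it picks it up. All names/paths below are placeholders.
set -euo pipefail

VMID=101                                  # placeholder Proxmox VM id
POINTER_IGN=/root/pointer.ign             # the small "replace" config shown above
ISO_DIR=/var/lib/vz/template/iso          # default path of the 'local' ISO storage

workdir=$(mktemp -d)
cd "$workdir"

# Download the latest stable live ISO and embed the pointer Ignition config
coreos-installer download -s stable -p metal -f iso
coreos-installer iso customize --live-ignition "$POINTER_IGN" \
    -o "$ISO_DIR/fcos-live-custom.iso" fedora-coreos-*-live.x86_64.iso

# Attach the rebuilt ISO and reboot the VM
qm set "$VMID" --ide2 local:iso/fcos-live-custom.iso,media=cdrom
qm reboot "$VMID"

cd / && rm -rf "$workdir"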

b) Installing Fedora CoreOS on disk using an Ignition file containing the full config

If I understand correctly, this solution comes with automated upgrades of the OS and all the containers.
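
For reference, the installation step itself would roughly be a matter of booting the live ISO once inside the VM and running something like this (a sketch; the target device and the config location are assumptions on my part):

# Run from the booted FCOS live environment inside the VM.
# /dev/vda is the VM's (assumed) main virtual disk; the URL would point at the full config.
sudo coreos-installer install /dev/vda \
    --ignition-url https://raw.githubusercontent.com/.../coreos.ign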

Downside:

  • Won’t re-provisioning the machine be more troublesome? I.e., every time I changed the Ignition config, I would have to reinstall the OS. However, I assume I could leave the disk/partition with the volumes associated with my custom containers untouched and keep it between re-provisionings.
  • It seems that the whole /var/**/* will be mutable. On one hand, it’s not a problem, but on the other hand I am not sure if I need it to be mutable.

Question:

Which of these two options would you recommend? I am not very familiar with Fedora CoreOS yet, and I don’t know if either of the options is more used in practice, etc.

My intuition is that option b) is the way to go, but I am wondering what you will say.

Thank you :heart:

Correct. Every boot would provision from scratch.

Right. In this case it would be good to have a place you can persist things so you don’t have to re-download it all on every boot/provision.

If you reboot once a week you’d probably be good.

Yes and No. By default automatic upgrades of the OS are enabled, but your containers are managed by you so you’ll have to keep those up to date.

Yes. If you change the Ignition config ideally you’d re-provision to validate that the change was good. Luckily you can automate the provisioning so in my opinion this isn’t much more heavyweight than if you were doing the “Live” approach from a).

Also, you can configure things in such a way that even a re-install will leave data in certain places intact.

I think either approach will work :). Most people do use FCOS through option b), though.

Hi,
I’d like to share my own infra to help you make some choices.
I have 3 FCOS machines on bare metal (and a PXE / HTTP server)

  • 1 live PXE boot
  • 1 installed on “stable” stream
  • 1 installed on “next” stream

In the future I’m planning to move every server to live mode and to update them with this script, which I run manually for the moment:

#!/bin/bash

if [ -n "$1" ]
then
	stream="$1"
else
	stream="stable"
fi

arch="x86_64"
artifact="metal"
format="pxe"
scp_to="user@pxeserver:/path/to/share/"
downloadedfiles=()
failed=false

echo "Checking updates from $stream stream at : $data"
data="https://builds.coreos.fedoraproject.org/streams/$stream.json"

echo "Looking for $artifact $arch $stream release"
data=$(curl $data | jq .architectures.$arch.artifacts.$artifact) #filtering arch / artifact $data informations

if [[ -z "$data" ]] #if $data is empty then exit, that probably mean that $stream doesn't exists
then
	exit
fi

source /etc/os-release #getting os current version

if $(jq -n "$data" | jq --raw-output --arg version $OSTREE_VERSION '.release > $version') || [ $1 ] #comparing versions or force a stream
then 

	if [ -n "$1" ]
	then
		echo "Getting $(jq -n "$data" | jq --raw-output .release) $stream release"
	else
		echo "Update available from $OSTREE_VERSION to $(jq -n "$data" | jq --raw-output .release)"
	fi

	files=$(jq -n "$data" | jq .formats.$format) #filtering $format files version
	for file in $(jq -n "$files" | jq --raw-output 'keys[]') #downloading all files
	do

		filename="$file.$stream"
		fileinfo=$(jq -n "$files" | jq .$file) #filtering each file informations

		for try in {1..2} #let try 2 times downloading with correct checksum
		do
			echo "Downloading $(jq -n "$fileinfo" | jq --raw-output .location)"
			curl -C - -o $filename $(jq -n "$fileinfo" | jq --raw-output .location) #Downloading fileinfo.location
			if echo "$(jq -n "$fileinfo" | jq --raw-output .sha256) $filename" | sha256sum --check 
			then
				downloadedfiles+=("$filename")
				break
			fi
			failed=true
		done
	done

	if ! $failed
	then
		scp ${downloadedfiles[@]} $scp_to
		if [ $? -eq 0 ]
		then
			rm ${downloadedfiles[@]}
		else
			echo "scp failed, re-run the script"
		fi
	fi
else
	echo "System is up to date, release : $(jq -n "$data" | jq --raw-output .release), nothing to do"
fi

You can run script.sh stable to “force” downloading the stable stream, or script.sh to update the current stable stream (this is a WIP … :wink: )
This will push the files as kernel.stable, initramfs.stable and so on to your PXE/HTTP server via SSH.

So, based on this, you can script some auto-updates for your live FCOS.
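
For example, something like this pair of systemd units on the PXE server could run the script on a schedule (just a sketch; the paths and unit names are placeholders):

# /etc/systemd/system/fcos-mirror-update.service (sketch, placeholder paths)
[Unit]
Description=Refresh FCOS live PXE artifacts

[Service]
Type=oneshot
ExecStart=/usr/local/bin/fcos-update.sh

# /etc/systemd/system/fcos-mirror-update.timer
[Unit]
Description=Weekly refresh of FCOS live PXE artifacts

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target

Then enable it with: systemctl enable --now fcos-mirror-update.timer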

About containers now: if you want some auto-update, you can just enable --now podman-auto-update.service
In the following example, this will check whether a new (e.g. “latest”) image is on your registry, download it, stop the service and restart it with the new image.

This works great with some adjustments:

  • add Environment=PODMAN_SYSTEMD_UNIT=%n in the [Service] section of your systemd unit file
  • add --label="io.containers.autoupdate=registry" in your podman run argument list
...
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
Type=notify
ExecStart=/usr/bin/podman run \
  --label="io.containers.autoupdate=registry" \
  --hostname=%N \
  --name=%N \
  --replace \
  --detach \
...

On your PXE server, create the needed file with the MAC of your VM:
/tftproot/pxelinux.cfg/01-xx-xx-xx-xx-xx-xx, with the following content:

DEFAULT FedoraCoreOS 
TIMEOUT 5 
PROMPT 0
LABEL FedoraCoreOS 
    KERNEL http://server:port/path/to/share/kernel.stable 
    APPEND initrd=http://server:port/path/to/share/initramfs.stable coreos.live.rootfs_url=http://server:port/path/to/share/rootfs.stable ignition.firstboot ignition.platform.id=metal ignition.config.url=http://server:port/path/to/ignitions/config.ign 
IPAPPEND 2

Show must go on ! :wink:
Let us know
