The 'right way' to deploy Kubernetes on CoreOS virtual machines

There seems to be significant variation in opinion on how to deploy k8s to CoreOS-based VMs. I am new to CoreOS and am struggling to find a good tutorial, or even a high-level thought piece, on the process.

My intuition is that k8s should be installed via packages with rpm-ostree (not least because they would then form part of the automatic updates for the OS?) and specified in the Ignition (ign) file for a template VM; that template would then be cloned and booted with additional node-specific ign configurations for control-plane, infrastructure, and application nodes.

In the first instance I would be grateful if someone could validate this general approach. I would be super excited if someone could outline the process in whatever detail they have the inclination to offer, or signpost resources that cover it.

I would be VERY interested in seeing a sample ign file that installs k8s components.

As an aside, I use Terraform to provision infrastructure, Ansible to configure applications, and vCenter for virtualisation. I am currently in the process of trying to deploy a k8s cluster to a single ESXi host on a workstation.

You can take a look at what Typhoon is doing: https://typhoon.psdn.io/

How to deploy Kubernetes on Fedora CoreOS depends on a lot of factors:

  • Do you want the Kubernetes version to be updated at the same time as the OS or on its own schedule?
  • Do you rely on a specific version of Kubernetes?
  • Which container runtime do you want to use? containerd or cri-o?
  • etc.

so depending on your goals, the way to deploy it may vary widely.

Thanks Timothée,

For testing purposes I am certainly content with the most recent version of Kubernetes available in the package manager, and I do not have any dependencies on specific versions in my current projects. In fact, one benefit of CoreOS that attracts me is that services can be updated automatically to the latest version within the OS.

Whilst the runtime isn’t so important to me at this stage, I would generally lean towards cri-o, but this preference is based mainly on third-party commentary about stability.

Based on this, what, in your opinion, would be the best way to provision Kubernetes at the Ignition stage? Could you point me to a sample ign file? I am happy to configure with Ansible post-deployment: joining workers to control planes, extracting the kubectl config, etc.

My goal in the short term is merely to provision the Kubernetes components at the point of initial boot with ign files, make them operational shortly afterwards with playbooks, and then round out the approach as I explore the possibilities of the OS and identify specific configurations for projects once I have some infrastructure to experiment on.

Thanks again for your replies.

Ben

I’m working/playing on a solution that is really easy to install; it is based on cri-o, Calico, and a containerized kubelet as in Typhoon (I use the same container image :wink: )
I will publish this on GitHub in a few days/weeks.

The idea is to install FCOS with some Ignition files (or all in one) that:

  • download cri-o, kubeadm, and kubectl
  • install cri-o
  • run kubeadm init with some settings from the Ignition files
  • start the kubelet
  • install Calico

That covers the controller ign files; you can do the same with worker ign files and you will have workers.
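
For illustration, a minimal Butane sketch of the download step could look like this (the release version in the URLs and the exact paths are just assumptions, not necessarily what KoreOS does):

variant: fcos
version: 1.5.0
storage:
  files:
    # Fetch the release binaries at provisioning time (v1.30.0 is an example version)
    - path: /usr/local/bin/kubeadm
      mode: 0755
      contents:
        source: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm
    - path: /usr/local/bin/kubectl
      mode: 0755
      contents:
        source: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl

cri-o itself still needs to be installed separately, as described above.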

I can’t find an elegant way to auto-join workers, so you will have to do that manually.

stay tuned

Any update on the tool’s progress?

My progress:

Using Terraform to provision FCOS and layering kubelet and cri-o with rpm-ostree.
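
A minimal sketch of that layering step, written as a Butane systemd unit; the unit name, the skip condition, and the package names are assumptions and vary across Fedora releases:

variant: fcos
version: 1.5.0
systemd:
  units:
    - name: layer-kube.service
      enabled: true
      contents: |
        [Unit]
        Description=Layer cri-o and the Kubernetes packages with rpm-ostree
        Wants=network-online.target
        After=network-online.target
        # Skip once the layered deployment has been booted and kubeadm is present
        ConditionPathExists=!/usr/bin/kubeadm

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        # Illustrative package names; adjust to the Fedora release in use
        ExecStart=/usr/bin/rpm-ostree install --idempotent --allow-inactive --reboot cri-o kubernetes-kubeadm kubernetes-node kubernetes-client

        [Install]
        WantedBy=multi-user.target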

Using Ansible to configure. I had some problems with read/write permissions for folders in FCOS when creating the default kube files under the /lib/… folder, so I have changed to using /etc/kubernetes/ subfolders for everything in the clusterconfig template file, which I use with the kubeadm --config option.
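
A rough sketch of that clusterconfig template (file name and API version are illustrative), keeping paths on writable filesystems since /usr is read-only on rpm-ostree hosts:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# Certificates stay under /etc/kubernetes, which is writable on FCOS
certificatesDir: /etc/kubernetes/pki
controllerManager:
  extraArgs:
    # Point the flex-volume plugin dir away from the read-only /usr tree
    flex-volume-plugin-dir: /var/lib/kubelet/volumeplugins

which is then consumed with something like kubeadm init --config /etc/kubernetes/clusterconfig.yaml.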

I generate a new kubeadm join command and token after init rather than trying to extract them from the kubeadm init output. Trying to do that seems silly…
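
Generating a fresh one is a single command on the control plane node (placeholders shown, not real values):

# Prints a ready-to-run join command with a new token
kubeadm token create --print-join-command
# kubeadm join <api-endpoint>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>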

I’m now trying to set up HA, and the next step, I think, is to copy the certificates from the pki folder on the original control plane node to subsequent control plane nodes. I’m hoping that once that is done, setting up the workers will be relatively straightforward.
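
An alternative to copying the pki folder by hand is to let kubeadm distribute the control-plane certificates through the cluster; a sketch with placeholders:

# On the existing control plane: (re-)upload the certificates and note the printed key
kubeadm init phase upload-certs --upload-certs

# On each additional control plane node:
kubeadm join <api-endpoint>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key-from-upload-certs>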

My Ansible code is pretty crude and I doubt that it will work well for modifying the cluster… it will take some refactoring once it’s working.

Not much time at present but will update when I make any further progress.

Sorry for being late, summer is summer :wink:
I’ll try to publish something soon …

The show must go on, @gaurav09kumar: GitHub - Relativ-IT/KoreOS

Looks great! I have been looking for something like this!

I saw that Kubernetes now has real RPM repos; I might switch to those instead, as upgrades might be easier that way?
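
For reference, the upstream repo (pkgs.k8s.io) can be added as an ordinary yum repo and the packages layered from it; the version in the URLs is only an example:

# /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key

after which something like rpm-ostree install kubelet kubeadm kubectl cri-tools should pick them up.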

thanks ^^
I don’t know; this is a very “candid” installation process and a PoC that just works.
I’m used to booting my CoreOS via PXE without installing to disk, so updates are made via Butane/Ignition files at each boot.

I need to update some default manifests that have moved to v1beta4, like ClusterConfiguration and InitConfiguration, but for now my cluster is up, and I need to create a testing lab so I can work on the installer.

I tried this yesterday and I got it working with iPXE.

Changing to v1beta4 was easy; here is the only thing that needed to change in the kubeadm template (in v1beta4, extraArgs is a list of name/value pairs rather than a map):

controllerManager:  
  extraArgs: 
    - name: flex-volume-plugin-dir    
      value: /var/lib/kubelet/volumeplugins
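
For context, that fragment sits inside the v1beta4 ClusterConfiguration document, roughly:

apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    - name: flex-volume-plugin-dir
      value: /var/lib/kubelet/volumeplugins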

I’m still figuring out how to add a second control plane, but I don’t really know how, since kubeadm init runs on every boot.

I’m thinking about creating a file /var/.installed and only running KoreOS-installer.service if that file is not present, or something.

Nice to see that you are interested in this project :wink:
I added ConditionDirectoryNotEmpty=!/var/lib/etcd to my own config in KoreOS-installer.bu, giving:

variant: fcos
version: 1.5.0

systemd:
  units:
    - name: koreos-installer.service
      enabled: true
      contents: |
        [Unit]
        Description=Install KoreOS
        RequiresMountsFor=/etc/kubernetes
        After=network-online.target crio-installer.service crio.service koreos-template.service
        Wants=crio.service koreos-template.service
        ConditionPathExists=!/etc/kubernetes/manifests/kube-apiserver.yaml
        ConditionPathExists=!/etc/kubernetes/manifests/kube-controller-manager.yaml
        ConditionPathExists=!/etc/kubernetes/manifests/kube-scheduler.yaml
        ConditionPathExists=!/etc/kubernetes/manifests/etcd.yaml
        ConditionDirectoryNotEmpty=!/var/lib/etcd

        [Service]
        TimeoutSec=300
        Type=oneshot
        RemainAfterExit=true
        ExecStartPre=/usr/local/bin/kubeadm config images pull --config /opt/koreos/kubeadm.config.yaml
        ExecStart=/usr/local/bin/kubeadm init --v=5 --config /opt/koreos/kubeadm.config.yaml
        ExecStartPost=/usr/bin/install -D -o root -g root /etc/kubernetes/super-admin.conf /root/.kube/config
        ExecStartPost=/usr/bin/install -d -o core -g core /home/core/.kube
        ExecStartPost=/usr/bin/install -o core -g core /etc/kubernetes/admin.conf /home/core/.kube/config
        ExecStartPost=/usr/local/bin/kubectl --kubeconfig /root/.kube/config cluster-info
        ExecStartPost=/usr/local/bin/kubectl wait --kubeconfig /root/.kube/config nodes --all --for condition=Ready
        ExecStartPost=/usr/local/bin/kubectl wait --kubeconfig /root/.kube/config --all-namespaces deployments --all --for condition=Available --timeout=60s
        ExecStartPost=/usr/local/bin/kubectl wait --kubeconfig /root/.kube/config --all-namespaces pods --all --for condition=Ready
        ExecStartPost=/usr/local/bin/kubectl create --kubeconfig /root/.kube/config -f /opt/koreos/tigera-operator.yaml
        ExecStartPost=/usr/local/bin/kubectl wait --kubeconfig /root/.kube/config --namespace tigera-operator deployments --all --for condition=Available --timeout=60s
        ExecStartPost=/usr/local/bin/kubectl create --kubeconfig /root/.kube/config -f /opt/koreos/calico.config.yaml

        [Install]
        WantedBy=multi-user.target