There seems to be significant variation in opinion on how to deploy k8s to CoreOS-based VMs. I am new to CoreOS and am struggling to find a good tutorial, or even a high-level thought piece, on the process.
My intuition is that k8s should be installed via packages from rpm-ostree (not least because they would then form part of the automatic updates for the OS?) and specified in the ign file for a template VM, and that the template should then be cloned and booted with additional node-specific ign configurations for control-plane, infrastructure, and application nodes.
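To make that concrete, this is the kind of thing I imagine for the template: an untested Butane sketch (Butane compiles this YAML into the .ign file), based on the documented FCOS pattern of layering packages from a one-shot systemd unit on first boot. The package names are my guesses rather than verified Fedora package names.

```yaml
variant: fcos
version: 1.4.0
systemd:
  units:
    # One-shot unit that layers the Kubernetes packages on first boot,
    # then reboots into the new deployment so they track OS updates.
    - name: layer-kubernetes.service
      enabled: true
      contents: |
        [Unit]
        Description=Layer kubeadm, kubelet and cri-o with rpm-ostree
        Wants=network-online.target
        After=network-online.target
        # Skip once the packages are layered (marker file written below).
        ConditionPathExists=!/var/lib/layer-kubernetes.done

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        # Package names are placeholders; check what your Fedora release ships.
        ExecStart=/usr/bin/rpm-ostree install --allow-inactive kubeadm kubelet kubectl cri-o
        ExecStart=/bin/touch /var/lib/layer-kubernetes.done
        ExecStart=/usr/bin/systemctl --no-block reboot

        [Install]
        WantedBy=multi-user.target
```

Is that roughly the right shape, or is there a better-supported way to get the packages onto the node at first boot?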
In the first instance I would be grateful if someone could validate this general approach. I would be super excited if someone could outline the process in whatever detail they have the inclination to offer, or signpost resources that cover it.
I would be VERY interested in seeing a sample ign file that installs k8s components.
As an aside, I use Terraform to provision infrastructure, Ansible to configure applications, and vCenter for virtualisation. I am currently trying to deploy a k8s cluster to a single ESXi host on a workstation.
Certainly for testing purposes I am content with the most recent version of Kubernetes available in the package manager, and I have no dependencies on specific versions in my current projects. In fact, one benefit of CoreOS that attracts me is that services can be updated automatically to the latest version within the OS.
Whilst the runtime isn't so important to me at this stage, I would generally lean towards CRI-O, though this preference is based mainly on third-party commentary about stability.
Based on this, what, in your opinion, would be the best way to provision Kubernetes at the Ignition stage? Could you point me to a sample ign file? I am happy to do the rest in Ansible post-deployment: joining workers to control planes, extracting the kubectl config, etc.
My short-term goal is simply to provision the Kubernetes components at initial boot with ign files, make them operational shortly afterwards with playbooks, and then round out the approach as I explore the possibilities of the OS and identify project-specific configurations once I have some infrastructure to experiment on.
I'm working/playing on a solution that is really easy to install; it is based on CRI-O, Calico, and a containerized kubelet as in Typhoon (I use the same container image).
I will publish it on GitHub in a few days/weeks.
The idea is to install FCOS with a few Ignition files (or one all-in-one) that download cri-o, kubeadm, and kubectl. Then, roughly (a sketch follows at the end of this post):

1. CRI-O is installed
2. kubeadm init is started with settings supplied in the Ignition files
3. the kubelet is started
4. Calico is installed

Those were the controller Ignition files; do the same with worker Ignition files and you will have workers.
I can't find an elegant way to auto-join workers, so you will have to do that manually.
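To give an idea of the controller side, here is a rough Butane sketch (Butane compiles it to an .ign file). It assumes the cri-o, kubeadm, and kubelet binaries are already in place; the version and pod CIDR are placeholders, and Calico is still applied with kubectl afterwards.

```yaml
variant: fcos
version: 1.4.0
storage:
  files:
    # kubeadm settings delivered via Ignition; values are placeholders.
    - path: /etc/kubernetes/kubeadm-config.yaml
      mode: 0600
      contents:
        inline: |
          apiVersion: kubeadm.k8s.io/v1beta3
          kind: ClusterConfiguration
          kubernetesVersion: v1.26.0
          networking:
            podSubnet: 192.168.0.0/16   # Calico's default pod CIDR
systemd:
  units:
    - name: kubeadm-init.service
      enabled: true
      contents: |
        [Unit]
        Description=Bootstrap the control plane once on first boot
        Wants=network-online.target
        After=network-online.target crio.service
        # admin.conf only exists after a successful init, so this runs once.
        ConditionPathExists=!/etc/kubernetes/admin.conf

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/kubeadm init --config /etc/kubernetes/kubeadm-config.yaml

        [Install]
        WantedBy=multi-user.target
```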
Using Terraform to provision FCOS and layering kubelet and cri-o with rpm-ostree.
Using Ansible to configure. I had some problems with read/write permissions for folders in FCOS when creating the default kube files in the /lib/… folder, so I have changed to using /etc/kubernetes/ subfolders for everything in the ClusterConfiguration template file, which I use with kubeadm's --config option.
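For illustration, this is the shape of the override I mean (a simplified sketch rather than my actual template; the field names are from kubeadm's v1beta3 config API, and kube_version is just whatever Ansible variable you use):

```yaml
# kubeadm-config.yaml.j2 -- rendered by Ansible, passed to kubeadm --config
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "{{ kube_version }}"
certificatesDir: /etc/kubernetes/pki        # kubeadm's default, stated explicitly
etcd:
  local:
    dataDir: /etc/kubernetes/etcd           # moved from the default /var/lib/etcd
```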
I generate a new kubeadm join command and token after init rather than trying to extract it from the kubeadm init output. Trying to do that seems silly…
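In Ansible that comes down to something like this (a sketch; the 'controllers' and 'workers' group names are whatever your inventory uses):

```yaml
# Generate the join command on the first control plane, run it on workers.
- name: Create a fresh bootstrap token and join command
  command: kubeadm token create --print-join-command
  register: join_cmd
  run_once: true
  delegate_to: "{{ groups['controllers'][0] }}"

- name: Join the workers with the generated command
  command: "{{ join_cmd.stdout }}"
  when: inventory_hostname in groups['workers']
```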
I'm now trying to set up HA, and the next step, I think, is to copy the certificates from the pki folder on the original control-plane node to the subsequent control-plane nodes. I'm hoping that once that is done, setting up the workers will be relatively straightforward.
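For anyone following along, the kubeadm HA docs list the files that have to be shared: the cluster CA, service-account, front-proxy, and etcd CA key pairs. A sketch of shuffling them through the Ansible controller, assuming the play targets the new control-plane nodes and a 'controllers' inventory group:

```yaml
# Play-level variable: the files kubeadm's HA docs say control planes share.
vars:
  shared_pki_files:
    - ca.crt
    - ca.key
    - sa.key
    - sa.pub
    - front-proxy-ca.crt
    - front-proxy-ca.key
    - etcd/ca.crt
    - etcd/ca.key

tasks:
  - name: Fetch shared certs from the first control plane
    fetch:
      src: "/etc/kubernetes/pki/{{ item }}"
      dest: "fetched-certs/{{ item }}"
      flat: true
    loop: "{{ shared_pki_files }}"
    delegate_to: "{{ groups['controllers'][0] }}"
    run_once: true

  - name: Ensure the pki directories exist on the new control planes
    file:
      path: /etc/kubernetes/pki/etcd
      state: directory
      mode: "0700"

  - name: Copy the certs onto the new control planes
    copy:
      src: "fetched-certs/{{ item }}"
      dest: "/etc/kubernetes/pki/{{ item }}"
      mode: "0600"
    loop: "{{ shared_pki_files }}"
```

kubeadm init --upload-certs plus kubeadm join --control-plane --certificate-key is the built-in alternative if the manual copy proves fragile.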
My Ansible code is pretty crude, and I doubt it will work well for modifying the cluster… it will take some refactoring once it's working.
Not much time at present, but I will update when I make further progress.