Tooling to deploy & manage Kubernetes clusters atop CoreOS (updates, maint., etc)

Hello everyone! I’m fairly new to CoreOS and, after a bunch of reading, I’ve come away with a question I hope you all can help with: what tooling is available to help deploy and run a fleet of hundreds of bare-metal boxes whose purpose is to run Kubernetes as part of many small-to-medium-sized clusters? Being new to FCOS, I’m not sure how to approach this, since my background is with “traditional” Linux installs / distros rather than immutable ones.

I’ve read about what appears to be some pretty good tooling for this in the OKD project, but I haven’t found anything that talks about using that tooling outside of OKD / OpenShift. I’d love to know what you all think about this, and thanks in advance!


As mentioned in this post, it depends on the use case.

So far I’m deploying Kubernetes (K3s for now) via Ignition, and I think I can get all the initial bootstrapping done with a script that runs on first boot via systemd, roughly along the lines of the sketch below. 95% of my question is really about “what comes after that” with regard to maintaining the systems. I found Kured, which can orchestrate reboots, but it seems like a lot more than just that is needed long term. If I understand correctly, OKD uses the machine-config-operator and some other tooling to help here, but I don’t yet understand how that may or may not make sense outside of OKD / OpenShift, or whether it is even the right way to go when not on those platforms.
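To make the first-boot idea concrete, here is a minimal Butane sketch of the kind of unit I have in mind; the unit name, the stamp file, and piping the upstream K3s installer straight through a shell are just assumptions for illustration, not a vetted setup:

```yaml
# Minimal Butane sketch (assumptions, not a vetted config): a single systemd
# unit that runs the upstream K3s installer on first boot and drops a stamp
# file so it only runs once.
variant: fcos
version: 1.5.0
systemd:
  units:
    - name: k3s-bootstrap.service
      enabled: true
      contents: |
        [Unit]
        Description=Bootstrap K3s on first boot
        Wants=network-online.target
        After=network-online.target
        ConditionPathExists=!/var/lib/k3s-bootstrap.done

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        # Assumes the upstream installer works as-is on FCOS for this sketch.
        ExecStart=/bin/sh -c 'curl -sfL https://get.k3s.io | sh -'
        ExecStart=/usr/bin/touch /var/lib/k3s-bootstrap.done

        [Install]
        WantedBy=multi-user.target
```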

For Kubernetes day-2 operations, you should probably ask in their community communication channels.

I thought I’d start here as I’m focusing on the CoreOS day-2 side of things. My mention of the MCO and the like is simply that I’ve seen some k8s tooling mentioned in the OKD docs that seems to help with managing the host OS, its config, and its updates, but, again, I have no idea whether that even makes sense with regular old FCOS.

That’s an interesting and valid question… When you mention multiple k8s clusters, the first thing that comes to my mind is “multi-tenancy”. You can do it with soft solutions like cluster-tenancy operators and namespaces (or projects, as they’re called in OpenShift), where you try to somehow isolate the workloads of various teams inside the same cluster; a rough example of that is sketched below. The surest way to isolate workloads is with hard solutions like hypervisors, separate VMs, etc., if you have that kind of resources. Maybe there are other solutions. When you find something interesting, do let us know.
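For the soft approach, the usual starting point is a namespace per team plus quotas; here is a minimal sketch, where the tenant name and the limits are made up for illustration:

```yaml
# Sketch of "soft" multi-tenancy: one namespace per tenant plus a ResourceQuota.
# The tenant name and the limits below are illustrative only.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    tenant: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    pods: "50"
```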

FWIW @romangherta, when I mentioned multiple clusters, I meant physically separate bare metal clusters, some of which will be in geographically different locations.

It could be that in some cases this is too much effort. Your control plane is supposed to orchestrate containers on various nodes, and each time you create a new cluster you create a new control plane / data plane. You could instead use the same control plane, in HA mode, and separate workloads based on node labels and namespaces. You could create special runtime classes that isolate workloads through virtualization (Kata Containers or the more recent crun-vm), and you can isolate network traffic via NetworkPolicies; see the sketch after this paragraph.
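Roughly, those pieces look like this; the handler name “kata”, the namespace, and the image are assumptions, and the VM-backed runtime itself still has to be installed and registered on the nodes:

```yaml
# Sketch of the isolation pieces mentioned above. Assumes a Kata (or similar)
# runtime is already installed on the nodes and registered with the container
# runtime under the handler name "kata".
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
# Default-deny traffic inside the tenant namespace; allow rules would be
# added per workload.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# A workload pinned to labelled nodes and run under the VM-backed runtime.
apiVersion: v1
kind: Pod
metadata:
  name: isolated-workload
  namespace: team-a
spec:
  runtimeClassName: kata
  nodeSelector:
    tenant: team-a
  containers:
    - name: app
      image: quay.io/fedora/fedora:latest
```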

Otherwise, if you really want a separate cluster, you will have to use a project that does the following: it has access to a hypervisor on your bare-metal nodes (via libvirt, for example) and creates libvirt networks before creating VMs. I suppose libvirt will have to be accessed in privileged mode by this project, unless you do it manually. So via libvirt you would create two networks, one for the control plane and one for the data plane, with a route for communication between them, then create VMs (based on CoreOS, I suppose) and some scripts that install whatever version of Kubernetes you need. If you do this manually you will quickly run into issues and will want to simplify things, maybe by deploying just a single-node Kubernetes cluster, which is fine but is a special case rather than a general approach. Still, you might learn a lot of interesting things if you try this manually. The Fedora quick-docs for libvirt are brilliant.

In any case, this is a real challenge with Kubernetes. Various projects approach it in different ways, and although they claim big things, believe me, there is no general solution to this problem yet. Read this page from the Kubernetes docs just to be sure. Some of the projects you mention are already listed there, and you must understand exactly how each one approaches multi-tenancy beyond the marketing language and sparse documentation. Good luck.

I’m not aware of any current solution / tool for managing multiple clusters. Usually a company has 1 or 2 clusters (Dev and Production), or even allows each team to manage its own cluster. The oc tool is specific to OpenShift / OKD, so it only works with clusters running those. One of the benefits of using OpenShift / OKD is that it also manages the CoreOS nodes for you: it can update and reboot them with a command or from the UI. Red Hat has a cheatsheet you can use. You can find it here:

I’m sorry I didn’t explain better. What I’m planning for is prod clusters in each of our data centers and, potentially, some edge deployments with single-digit node counts. The fewer control planes the better for the data center side. As for the edge side, if each edge site can be managed from a central hub and be totally isolated from every other edge deployment, then I’d be super happy.

On the OKD / OpenShift front, my only worry is having to use a different distribution on the edge… I haven’t seen anything indicating it can run on smaller / lower-performance hardware. If a central hub lets me actually use low-end hardware on the edge, then I’m all for exploring OKD… that would make things much simpler for me on the administrative side.

Also, thanks for the link. I’m going to read that tonight.