Where to begin: docker, Atomic, Container Linux, Fedora/RedHat/CentOS CoreOS?


#1

I am a long-time *NIX operator/sysadmin with many years of keyboard time on RH, Yellow Dog, Fedora (I remember the Core wars), RHEL, CentOS and many others. I have a heavy focus on virtualization, both VMware and KVM/libvirt, and I believe containers and OpenShift-style infrastructure are the logical next step in my education. Right?

But how do I get started with containers right now, without spending a lot of time on tech that may become irrelevant (hello, Docker Swarm)?

  • Install Docker on my OS of choice

  • Deploy an Atomic VM

  • Deploy a Container Linux VM/iron machine

  • Patiently wait for FCOS to drop a beta

I surely can’t wait as long as that last option requires, but I’d rather not spend a lot of time figuring out Red Hat Atomic and OSTree if that is going to change drastically.

I know I’m asking about a lot of things that are currently in flux. I don’t really expect definitive answers, I’m hoping the folks here who know more and have walked these paths can provide some suggestions.


#2

I would start by doing an `oc cluster up` on Fedora/CentOS, or Minishift, and running some basic containers; play with s2i. You likely do not want to do bare containers right away, because that’s kind of too low-level and a lot of pieces are missing. Kubernetes/OpenShift is the way the industry is going, and the OS (FCOS, etc.) is not the most important detail when starting, IMHO.
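In case it helps, a first session with that tooling can look roughly like this. The sample app is just the upstream s2i Django example from the sclorg repos; swap in whatever builder image and repo you like:

```shell
# Local all-in-one OpenShift cluster (assumes the `oc` client and a running docker daemon):
oc cluster up

# Or with Minishift, which runs the same thing inside a local VM:
minishift start
eval $(minishift oc-env)    # put the bundled `oc` on your PATH

# A first s2i build: build and deploy an app straight from a source repo.
oc new-app centos/python-36-centos7~https://github.com/sclorg/django-ex
oc logs -f bc/django-ex     # watch the source-to-image build run
```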


#3

Thanks misc, that is exactly the sort of feedback I’m looking for. I’ll do some reading and give a few things a try.


#4

Hello @zeroxsheepdog,

From your message I couldn’t tell whether you already have experience with containers or not. If you don’t, I think the best place to start would be docker itself: creating containers, building images, changing network properties, and granting special permissions in cases where they are needed.
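To make those exercises concrete, a first pass might look like this (the image tags and names are just placeholders; a running docker daemon is assumed):

```shell
# Run a throwaway container and explore inside it:
docker run --rm -it alpine:3.8 sh

# Build your own image from a Dockerfile in the current directory:
docker build -t myapp:latest .

# Networking: containers on a user-defined bridge can resolve each other by name:
docker network create mynet
docker run -d --name web --network mynet nginx:alpine

# Special permissions: grant one capability instead of full --privileged:
docker run --rm --cap-add NET_ADMIN alpine:3.8 ip link set lo up
```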

If you already have Linux/KVM/OpenStack experience, this will help a lot in mastering Kubernetes, since many of the ideas are similar. It does, however, depend on how deep you want to go. If you run Kubernetes on a public cloud, you don’t have to worry about what runs beneath the surface. If you run your own private Kubernetes cluster, you will need some other skills too, just as in the case of OpenStack (for example: nginx, keepalived, VXLAN, Ceph/GlusterFS, DB clusters and so on).

If you want to try Kubernetes out and you already have some docker images ready, the easiest way is to create a K8s cluster on GCP (or AWS or DigitalOcean or whatever). It can also be automated using GitLab, and there is an interface in GitLab that lets you create a free-trial K8s cluster on GCP. This will give you a good feel for what a Kubernetes user (not admin) can do, and how using Kubernetes would benefit the development process of applications. By mastering Kubernetes’s features, you can also improve the high availability of your services and reduce hardware costs.
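As a rough sketch of that path with the gcloud CLI (the cluster name, zone and image path are placeholders; a GCP project with billing enabled is assumed):

```shell
# Create a small GKE cluster:
gcloud container clusters create demo-cluster \
    --zone europe-west1-b \
    --num-nodes 3

# Fetch credentials so kubectl talks to the new cluster:
gcloud container clusters get-credentials demo-cluster --zone europe-west1-b

# Deploy one of your existing docker images and expose it:
kubectl create deployment myapp --image=gcr.io/my-project/myapp:latest
kubectl expose deployment myapp --type=LoadBalancer --port=80 --target-port=8080
kubectl get svc myapp    # wait for an external IP to appear
```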

If you want to host your own, you can start with Kubespray, since it provides Ansible playbooks to deploy Kubernetes. OpenShift is of course the jewel of Red Hat, but without a license you can only get OpenShift Origin, which feels like a beta in a lot of cases. If you know what you are doing, OpenShift Origin can be used even in production, but it is really too painful for somebody who is new to containerisation. If your company needs an in-house deployment of Kubernetes, the best thing is to contact a company that offers deployment and training, since the first steps are usually the hardest. This is highly recommended in situations where you have some monster applications running on hardware that costs a few million, since a proper on-premise Kubernetes deployment will provide load balancing, DNS, high availability, monitoring, storage, a GUI, a CLI, automation, RBAC and so on.
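A Kubespray run is roughly the following (the node IPs are placeholders, and the exact inventory paths can differ between releases, so check the repo’s README for your version):

```shell
# Get Kubespray and its Python dependencies:
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt

# Copy the sample inventory and point it at your machines:
cp -r inventory/sample inventory/mycluster
declare -a IPS=(10.0.0.1 10.0.0.2 10.0.0.3)
CONFIG_FILE=inventory/mycluster/hosts.yml \
    python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Run the playbook against the inventory:
ansible-playbook -i inventory/mycluster/hosts.yml --become cluster.yml
```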

After you have covered these parts, it makes a lot of sense to have a look at network protocols like BGP or EVPN. BGP is also used by some network plugins within K8s (like Calico). EVPN is used for cases where you want VXLAN networks to be managed by the switches rather than on the servers, moving the data plane from the CPU to the ASIC. This is important since a proper network architecture will reduce costs a lot. DigitalOcean showed this year that a proper implementation of BGP alongside Kubernetes can create a huge network connecting containers located anywhere in the world, across 20 or more datacenters. It also reduces your costs a lot if you run Linux operating systems on the switches instead of Cisco or Juniper. Compared with a legacy Cisco/VMware infrastructure, setups like this can reduce costs at every level (hardware, software, support model) by using open-source software and proper automation. The trick is that in those situations you have to design all three elements (network, compute, storage) together for that specific use case, and the cost reduction is huge.
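For the Calico/BGP part, a hypothetical peering setup using Calico’s v3 resources might look like this — disabling the full node-to-node mesh and peering each node with a top-of-rack switch instead (the peer IP and AS number here are made up):

```shell
# Apply a BGP configuration and a peer via calicoctl (assumes calicoctl
# is installed and pointed at the cluster's datastore):
cat <<EOF | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: false   # let the ToR switch distribute routes
  asNumber: 64512
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: rack1-tor
spec:
  peerIP: 10.0.0.1
  asNumber: 64512
EOF
```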

Good Luck! :smiley: