I’ve ignored the Kubernetes movement for many years now. A couple of years ago I used to maintain Docker-based infrastructures over bare metal, mostly for work purposes. It was an interesting learning experience back then into the foundations of containers.
Still, I remain prudent about infrastructure in general and have long favored pure and simple bare-metal or bare-metal + VM solutions over containers for most critical data workloads (a.k.a. Big Data). Even in work deployments, we bypassed the usual performance penalties of containers by bind-mounting the disks or using IPVLAN for networking when pure performance was needed. My favoritism for bare metal is based on the fact that you can’t just ignore 50+ years of evolution and documentation (if we count the “birth” of the first operating system, UNIX, in 1969). I don’t want to go earlier than that …
As readers of the blog will know, I used to run my own Hetzner-based deployment with Ansible, FreeIPA and a few other tricks. “Used to” is the operative phrase, as I’m now past that, for mainly two reasons:
- there’s something new and enjoyable that I’m doing at work, which consumes my brain time and keeps me entertained even in the late hours of the night. I enjoy spending so much time reading about the work subject (which also happens to be a personal subject I want to develop) that I don’t feel an urge to keep maintaining infrastructure;
- management of the infrastructure itself, of the Let’s Encrypt certificates and everything else, was tedious and manual (automated with Ansible + Go.CD, but I either had to write too much code to link things together or had to add the necessary links or configuration by hand from time to time). I favored immutability and a GitOps flow, and wanted to keep passwords safe, preferably not in a KeePass but in source control (Git). In conclusion, it was taking way too much time, mostly from family.
At first I thought, let’s give Kubernetes (on Hetzner) a try. I found a little utility called xetys/hetzner-kube which looked interesting and worked (at least a fork of it did, on Ubuntu 20.04), but … I crashed and burned that cluster. The crash and burn was an interesting lesson in how a Kubernetes cluster is bootstrapped. For people with time on their hands: sure, play with it, use it, but probably don’t use it in production.
For me, however, I wanted to maintain a few of the services (some of which actually bring a little cash flow to the extended family) without pain and immense loss of time. As such, the decision to find a managed Kubernetes service came about (the cheapest being Scaleway, DigitalOcean and OVH, then the GCP/AWS offerings). I tried Scaleway (for the price), but after more than an hour of waiting, the nodes still weren’t attaching. Bad experience. Then I tried DigitalOcean. Good starting experience, good performance, fast cluster set-up (with three nodes, and it will probably grow).
Why the switch from cheap Hetzner (and I’ll give them that, they have good resources at a fraction of the price) to DigitalOcean (fewer resources, but a managed Kubernetes)? It has to do with reality checks:
- in reality, I enjoy development more than infrastructure work. I do understand infrastructure, and I feel a vibe and a sugar rush whenever I do infrastructure work (Ops), but that’s not my main passion;
- I mostly enjoy the “image” or “immutable” approach. If I make an application with some libraries, describing what it needs and how it works, I intend the machinery to bring it to an environment and actually maintain it there;
- I want things such as Let’s Encrypt certificates, DNS and ingress (reverse proxying) handled for me. Configuring databases (with a nice MySQL Operator from Presslabs, also used by Mattermost) is a breeze. Other operators on OperatorHub.io and various (stable or not) HELM charts from around the Internet make it interesting, fun and, for the most part, easy to update to the latest versions of everything;
- secret management is now much easier, as I combine Secrets Operations, a.k.a. “sops” (inspired by HELM secrets), with the value definitions in Git for every service I need;
- I still build and run things with Go.CD (with Elastic Agent Profiles) and still use Ansible to manage some parts of the infrastructure, but way less than I used to; they’re now used for gluing together whatever I don’t yet find easy to do in Kubernetes, and for when I need a clear “value stream” between code and production;
- I wanted time back for reading, writing and experimenting. Delegating the “nuts and bolts” of infrastructure and having best practices applied easily and automatically through a Kubernetes cluster frees me to look into other passions, while knowing that I have a standard way of interacting with my own infrastructure that doesn’t vendor-lock me into DigitalOcean or any other provider.
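
To give a flavor of the sops-in-Git workflow mentioned above, here is a minimal sketch of what the repository-side configuration can look like. The path pattern and PGP fingerprint below are hypothetical placeholders, not my actual setup:

```yaml
# .sops.yaml — committed at the repository root; tells sops which key(s)
# to encrypt with, matched by file path (fingerprint is hypothetical)
creation_rules:
  - path_regex: services/.*/secrets\.yaml$
    pgp: "FBC7B9E2A4F9289AC0C1D4843D16CEE4A27381B4"
```

With a rule like that in place, running `sops --encrypt --in-place services/blog/secrets.yaml` replaces every value in the file with an encrypted blob (the keys stay readable, so diffs remain reviewable), which makes the file safe to commit; `sops --decrypt` restores the plaintext at deploy time.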
This blog, by the way, runs on Kubernetes.