Over the years I have dabbled with Kubernetes on and off, mostly learning the theory around it. I have also deployed and managed a few EKS-based clusters on AWS at work, so I know certain operational things as well.
To be honest, I haven’t really done anything “cool and amazing” with it. Those were mostly bare Amazon EKS setups, with no ArgoCD or anything fancy like that. The fanciest things I ran on those clusters were the Horizontal Pod Autoscaler and the AWS ALB Ingress Controller.
There was always something preventing me from playing with ArgoCD, various Ingress controllers, various storage interfaces, operators, KEDA, Karpenter, etc. But now the time has come to start experimenting with Kubernetes locally and try out all those cool things I always wanted to, but never had the time, budget, or stakeholder buy-in for.
Since I’ll be doing this locally on a Debian 12 powered Intel NUC, there are certain things I probably won’t be able to try out (such as Karpenter, for example), but that will be a project for some other day.
There are various distributions of Kubernetes which may make your day-to-day life easier; the most prominent one recommended to me was Rancher (RKE). But having a reliable and simple K8s distribution that makes the hard decisions for you is boring. What I want to accomplish here is to learn more deeply how K8s works and operates, to understand its components, quirks, and the things that INSERTYOURDISTRIBUTIONSOFCHOICE hides from you. I don’t want to learn a framework, I want to learn the underlying thing, if that makes sense.
In any case, for this reason I have chosen to go the pure Kubernetes route. You know, the one you get on kubernetes.io. Deployment will be done using kubeadm and will focus on a single node where both the control plane and the pods reside. At least until I get a few more of those NUCs and expand the setup.
To be honest, I thought this process was going to be the most painful part, but with many great guides online, the official documentation, and an overall “I know this, this is Unix” attitude, I managed to get through the initial setup quite smoothly. Aside from changing the Pod network later, because I hadn’t paid attention to the fact that I’d need to deploy some sort of network to the cluster as well.
A great starting resource for me, which I stumbled upon completely randomly while browsing my RSS feeds, was the following guide:
The first thing you’ll need to install and enable is some sort of container runtime. There are a few to choose from, but like Ben in the article above, I opted for containerd.
Load necessary kernel modules and tweak system settings:
cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

cat <<EOF | tee /etc/sysctl.d/99-kubernetes-k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system
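If you want to double-check that the modules and settings actually took effect before moving on, something like this should do (plain lsmod/sysctl usage, not part of the guide I followed):

```shell
# Confirm the kernel modules are loaded
lsmod | grep -E 'overlay|br_netfilter'

# Confirm the sysctl values are applied
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables
```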
apt update
apt install containerd
Generate default containerd configuration:
containerd config default > /etc/containerd/config.toml
But watch out: if you already use containerd and have configured it before, make sure to follow the official guide and see which settings K8s requires. Since I hadn’t used it before, at least not on this machine, I just overwrote the config with the default one.
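If you do have an existing config and want to play it safer than I did, a simple backup before overwriting costs nothing (my own precaution, not from the official guide):

```shell
# Keep a copy of the existing config, if any, before generating the default one
[ -f /etc/containerd/config.toml ] && cp /etc/containerd/config.toml /etc/containerd/config.toml.bak
containerd config default > /etc/containerd/config.toml
```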
And if you’re using systemd with cgroup v2, it’s a good idea to use the systemd cgroup driver. So, in your /etc/containerd/config.toml, find the runc runtime options and ensure that the SystemdCgroup option is set to true:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
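Instead of editing the file by hand, you can flip the flag with sed; this assumes the freshly generated default config, where SystemdCgroup appears exactly once and is set to false:

```shell
# Switch the runc cgroup driver to systemd in the generated config
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Verify the change
grep SystemdCgroup /etc/containerd/config.toml
```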
Now you’re ready to restart containerd and enable it at boot. On Debian it gets enabled by default, but it never hurts to make sure:
systemctl enable containerd
systemctl restart containerd
The next step is to prepare the package repository. If you’re using some other distribution family, make sure to get the correct steps from the official documentation. Since I’m using Debian, I had to do the following:
apt update
apt install apt-transport-https ca-certificates curl
Trust their GPG key:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Install and pin the package versions:
apt update
apt install kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
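A quick sanity check that the tools landed and the hold took effect (plain kubeadm/kubectl/apt-mark invocations, nothing exotic):

```shell
# Confirm the installed versions are from the expected 1.28 line
kubeadm version
kubectl version --client

# Confirm all three packages are held back from upgrades
apt-mark showhold
```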
And at last, initialize the cluster:
kubeadm init --control-plane-endpoint=SOMEHOSTNAMEYOUWANTTOUSETOACCESSKUBEADM
Or, if you want to be smarter than me, add --pod-network-cidr=192.168.0.0/16 (or another CIDR of your choice) to that command. Otherwise, before installing a network provider, you’ll have to adjust that option in the cluster config afterwards.
If everything is OK, you should get output like this:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join MYURL:6443 --token MYTOKEN \
        --discovery-token-ca-cert-hash sha256:BLAH \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join MYURL:6443 --token MYTOKEN \
        --discovery-token-ca-cert-hash sha256:BLAH
In case you missed adding the pod-network-cidr option, here’s what I had to do in order to remediate it.
Edit the kube-proxy ConfigMap:
kubectl edit cm kube-proxy -n kube-system
And set clusterCIDR to the appropriate value:
data:
  config.conf: |-
    ...
    clusterCIDR: "192.168.0.0/16"
Edit the kubeadm-config ConfigMap:
kubectl edit cm -n kube-system kubeadm-config
And ensure you have podSubnet defined under the networking section:
data:
  ClusterConfiguration: |
    ...
    networking:
      ...
      podSubnet: 192.168.0.0/16
Restart kubelet afterwards and you’re done. Pods should now use the new network.
systemctl restart kubelet
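One caveat I’d add: kube-proxy reads its config from that ConfigMap when its pods start, so restarting kubelet alone may not be enough. If the old CIDR lingers, bouncing the kube-proxy DaemonSet should pick up the change (standard kubectl rollout usage, an assumption on my part rather than something I had to do):

```shell
# Recreate the kube-proxy pods so they re-read the edited ConfigMap
kubectl -n kube-system rollout restart daemonset kube-proxy
```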
For my network plugin I chose Calico, and followed their official docs for installing it into the cluster. Except that, since I don’t like piping things from the internet straight into the shell (and neither should you), I first downloaded the files, reviewed them (to the limit of my understanding at this point), and only then applied them:
$ curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
$ kubectl create -f tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
$ curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
$ kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
Then I checked the pod status until everything reached a stable state. Here’s example output from a few days later (obviously, this shouldn’t take longer than a minute or two, and it didn’t; I just forgot to capture the output for this post):
$ kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6975798cd4-v2f96   1/1     Running   0          2d6h
calico-node-l6gxs                          1/1     Running   0          2d6h
calico-typha-5f464c47f-8h7d8               1/1     Running   0          2d6h
csi-node-driver-95sfl                      2/2     Running   0          2d6h
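Instead of re-running kubectl get pods by hand, kubectl can block until the pods come up; something like this should work (the timeout value is an arbitrary choice of mine):

```shell
# Wait until all pods in the calico-system namespace report Ready
kubectl wait --namespace calico-system --for=condition=Ready pods --all --timeout=180s
```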
Alright, so this got me to the point where I have a working cluster with working networking, all on a single node. The next steps are to install an ingress controller and start deploying some pods/apps.
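One single-node gotcha worth mentioning: kubeadm taints the control-plane node so that regular workloads won’t schedule on it. Since my pods have to run on the same node, that taint has to go; this is the command the kubeadm docs give for Kubernetes 1.28 (older releases used the node-role.kubernetes.io/master key instead):

```shell
# Allow regular pods to be scheduled on the control-plane node
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```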
As this article is getting a bit long, I’ll split the ingress controller configuration into a separate one, but to spoil it for you: I have chosen the HAProxy Ingress Controller. As I mentioned in some of my previous articles, HAProxy will always have a special place in my heart, and from what I can see, it should be pretty straightforward to use.