Baseline Setup, Part 2 – Laying the Foundation with MicroK8s

Laying the foundation for my homelab rebuild with MicroK8s. This post covers why I chose it, how I configured it, and how my Ansible admin node (Overseer) keeps everything repeatable.


If the homelab is a house, Kubernetes is the slab I’m pouring this time around.

Over the years, I’ve bounced between raw Docker, Compose stacks, Portainer, and even full-blown K3s. They all worked… until they didn’t. This time, I wanted something lean, modular, and structured—without turning my homelab into a second job.

I landed on MicroK8s for the cluster, and I’m using a separate orchestration node called Overseer to keep things organized and repeatable. This post covers how I laid the foundation.


🧠 Why MicroK8s Over K3s (and Everything Else)?

I gave K3s another shot. It’s solid, but I ran into quirks with Helm CRDs and long-term config drift. I didn’t want to rely on Rancher for cluster management, and I really didn’t want to waste weekends debugging CSI drivers.

MicroK8s just worked:

  • One-line install via Snap with sane defaults
  • Built-in support for MetalLB, Ingress, DNS, and RBAC
  • Lightweight enough for bare metal but structured enough for real services
  • Actively maintained by Canonical (Ubuntu maintainers)

🧠 Enter Overseer

I also introduced a new piece this time: a dedicated Ansible orchestration server outside the cluster.

I call it Overseer. It:

  • Manages all provisioning and playbook execution
  • Has a full inventory of every node (current and future)
  • Stays separate from MicroK8s so I can rebootstrap or wipe cluster nodes without losing my playbooks or logs

Right now it manages two nodes, but I’m planning to scale up to four for redundancy and workload separation.
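
To give a sense of what that looks like, here's a minimal sketch of an Overseer-style inventory plus a connectivity check. The hostnames are the real ones below; the filename, group layout, and ansible_user are illustrative, not my actual config:

# Hypothetical sketch of Overseer's inventory (group name and vars are illustrative)
cat > inventory.ini <<'EOF'
[cluster]
sundari-core
sundari-node1

[cluster:vars]
# assumption: nodes run Ubuntu with this login user
ansible_user=ubuntu
EOF

# Sanity check: can Overseer reach every node?
ansible -i inventory.ini cluster -m ping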


🏗️ Current Node Setup

  • sundari-core: control plane + workloads
  • sundari-node1: secondary worker node
  • All nodes are bare metal, running Ubuntu 22.04 LTS
  • 4 cores each, 32 GB RAM, and fast 2 TB SSDs

These are racked and have static DHCP leases mapped in UniFi Network.


🔄 Cluster Install Process

On each node:

sudo snap install microk8s --classic
sudo usermod -a -G microk8s $USER
newgrp microk8s
microk8s status --wait-ready
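
Optionally, alias the bundled client to plain kubectl via snap's standard alias mechanism (I'll keep writing microk8s kubectl below for clarity):

# Optional: expose MicroK8s' bundled kubectl as plain `kubectl`
sudo snap alias microk8s.kubectl kubectl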

Then enable the key add-ons, including hostpath-storage (covered in the add-on rundown below):

microk8s enable dns ingress rbac metallb:192.168.1.240-192.168.1.250
microk8s enable hostpath-storage
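
A quick status check confirms which add-ons actually came up:

microk8s status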

🤝 Cluster Formation

On the primary node:

microk8s add-node
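
That prints a join command of roughly this shape (the IP and token here are placeholders, not values from my cluster):

microk8s join 192.168.1.50:25000/<token>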

Run that generated command on the secondary node, then confirm both nodes show up:

microk8s kubectl get nodes

🌐 Networking & Prereqs

Key things to check:

  • Open ports 16443 (API server), 10250 (kubelet), 25000 (cluster agent, used when joining nodes), and 8472/UDP (VXLAN) between nodes
  • Enable IP forwarding and the bridge netfilter setting, and persist them across reboots (see the sketch after this list):
    sudo sysctl net.ipv4.ip_forward=1
    sudo sysctl net.bridge.bridge-nf-call-iptables=1
  • Disable conflicting firewall rules (ufw, etc.)
  • Static IPs via DHCP reservations (essential)
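
Here's the persistence sketch from that list, assuming stock Ubuntu paths; the drop-in filenames are illustrative:

# Load the bridge netfilter module now and on every boot
sudo modprobe br_netfilter
echo 'br_netfilter' | sudo tee /etc/modules-load.d/microk8s.conf

# Persist the sysctls in a drop-in and reload all sysctl config
printf 'net.ipv4.ip_forward=1\nnet.bridge.bridge-nf-call-iptables=1\n' | sudo tee /etc/sysctl.d/99-microk8s.conf
sudo sysctl --system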

🧩 Enabled Add-ons

  • dns: internal service discovery
  • ingress: basic routing (Traefik takes over soon)
  • rbac: preps the cluster for SSO and user roles
  • metallb: assigns external IPs to services
  • hostpath-storage: for lightweight local PVCs (see the PVC sketch after this list)
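
And the PVC sketch referenced above. The claim name and size are invented for illustration; microk8s-hostpath is the storage class the add-on provides:

# Minimal PVC against the hostpath-storage class (name and size illustrative)
microk8s kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: microk8s-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF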

✅ Initial Test

To verify routing end to end:

microk8s kubectl run nginx --image=nginx
microk8s kubectl expose pod nginx --port=80 --type=LoadBalancer
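
Then pull the address MetalLB assigned and hit it; the URL below is a placeholder for whatever lands in the EXTERNAL-IP column:

microk8s kubectl get svc nginx
curl http://<external-ip>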

MetalLB assigned an external IP, and the NGINX welcome page came up on the first try. Always a good sign.


🧠 Lessons Learned

  • Keep MicroK8s logs handy:
    /var/snap/microk8s/common/var/log/
  • Keep Overseer outside the cluster so you can nuke and pave without collateral damage
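
When something does go sideways, MicroK8s bundles its own triage tool, which collects logs and flags common misconfigurations into a tarball:

microk8s inspect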

Snap installs are great, but unattended refreshes can restart MicroK8s at a bad moment, so pin the refresh window to off-hours:

sudo snap set system refresh.timer=03:00-04:00
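
Confirm the window took effect with:

snap refresh --time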

🔜 Up Next: Traefik

With the base cluster online and IP routing confirmed, next I’ll bring in Traefik as the router. I’ll walk through setting up entryPoints, automatic TLS with Let’s Encrypt, and how I use TOML files to keep every app route readable and portable.