Homelab Part 2: How I built a versatile and repeatable Homelab with Terraform

Sahu
6 min read · Mar 3, 2023


This is the second part of the HomeLab series, where I’m creating an on-premise (my Home) setup for all my personal needs. Check out the YouTube Video for Part 1 here, or you can read it here.

The HomeLab Setup

In this one I will go over the internals of how I built the home server’s infrastructure and also give a brief overview of what I eventually want this setup to look like. The best part is that you can even use the same setup to spin up a Local Kubernetes Playground; it’s versatile like that. 😏

As usual, there’s a YouTube video as well with a Live Demo.

You can find the entire source code on my GitHub (Homelab by mrsauravsahu). This repo contains the various parts (or molecules, as I call them), built as Terraform modules, to create the setup.

The few things I want to make sure this setup does:

  • The setup should be repeatable: I should be able to recreate the infrastructure anytime I want. Clearly, Infrastructure as Code (IaC) makes sense for this, and I’m using Terraform as my IaC tool.
  • The setup should scale to the number of servers you have. I currently have just one server, my Raspberry Pi, but ideally I want other hosts on my network to join and disconnect to give me more compute power for certain tasks: running a sandbox development environment, rendering a video from a queue when my video editing machine joins in (yes, a few nodes will have GPU support as well), and other ad-hoc activities that aren’t directly possible on the small Raspberry Pi.

Node Requirements

You need at least 8GB of RAM and some available storage. I’ve tested this on my Pi, so any decent tower machine should be fine too.

There are also a few prerequisites that need to be present on the node before starting, but at some point I’ll move everything into the first molecule so you can get started with just a Linux (or Mac or Windows; multiple OS support, anyone? 🤩) machine and only have to set up SSH yourself.

Installing Docker

Before orchestrating resources in Kubernetes, we need a way to run images. I set up Kubernetes with k3s in the first molecule, so I need a container engine too. k3s uses containerd by default, but because I have full access to the node and I’d like to debug things if required, I’m going to use Docker as my container engine. This means all Kubernetes resources (Pods and Services, just to name a few) will run as Docker containers.
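The molecule handles the k3s install, but for reference, the upstream k3s installer exposes a --docker flag that switches the container runtime from containerd to Docker:

# manual equivalent: install k3s configured to use Docker
$ curl -sfL https://get.k3s.io | sh -s - --docker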

The following assumes you’re running a Debian-based OS on your node; I have Raspbian on my Raspberry Pi. Let’s install Docker.

# install docker
$ sudo apt install docker.io

Some troubleshooting required

You might need to add your user to the docker group if you want to run docker commands as your user. Check out this link (https://docs.docker.com/engine/install/linux-postinstall/) for that. Then try a docker run hello-world to see if everything is working, and maybe also try a restart.
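The post-install steps from that link essentially boil down to the following:

# add your user to the docker group (log out and back in for it to take effect)
$ sudo usermod -aG docker $USER

# verify docker works without sudo
$ docker run hello-world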

I also had to modify my docker daemon.json. 🤷‍♂️

$ cat /etc/docker/daemon.json
{
  "exec-opts": [
    "native.cgroupdriver=cgroupfs"
  ]
}
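Restart the Docker daemon so the daemon.json change takes effect:

$ sudo systemctl restart docker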

Apart from this, the install might fail with weird cgroup-related errors. You need a few extra packages: linux-modules-extra-raspi to support running containers, and socat for exposing services as NodePort.

$ sudo apt install linux-modules-extra-raspi socat

k3s molecule

At the time of writing, the first molecule is called k3s, as it sets up the k3s server (which in k3s terminology is the node that runs the control plane). I have just one node, so it obviously has to host the control plane; if you have more, you’re going to have to wait until the molecule supports multiple nodes. Feel free to discuss it with me in the homelab repo.

The k3s molecule lives in the molecules/k3s directory; the commands below assume you’re in that directory.

You will run into errors exposing services as NodePort without socat, so install it now if you skipped it earlier.

$ sudo apt install socat

If you want to use a separate backend to store your Terraform state, configure the backend in the providers.tf file. Otherwise, remove the default http backend (I store my state in a private git repository).

terraform {
  required_version = ">=1.3.0"
  ...

  # backend "http" { }
}
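If you’d rather keep local state explicit, Terraform’s built-in local backend is a drop-in replacement (a minimal sketch):

terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}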

Modify the inputs.tfvars file to fit your needs.

$ cat inputs.tfvars
servers = [{
  host        = "127.0.0.1"
  private_key = "~/.ssh/id_rsa"
  user        = "root"
}]
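For context, here’s a sketch of what the corresponding variable declaration might look like; the real definition lives in the repo and may differ:

variable "servers" {
  type = list(object({
    host        = string
    private_key = string
    user        = string
  }))
}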

Do note that k3s will create a systemd service, so we need a user with sudo privileges; otherwise the k3s install will fail. 😞
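On a Debian-based system, adding your user to the sudo group looks like this:

$ sudo usermod -aG sudo <your-user>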

Now let’s apply the Terraform code. Ideally you’d run a plan first, but it’s just us using Terraform on this node. Feel free to still run one if that’s how you like it.

# Make sure you're in the right molecule
$ cd molecules/k3s

$ terraform apply -var-file=inputs.tfvars

The molecule does the following:

  • Downloads the k3s binary and the Docker images required to set up the k3s server. This will take around five minutes (with a good Internet connection to download everything and start the cluster).
  • Copies the kube config from the node and writes it as a Terraform output, which the later molecules need; see the sketch after this list for how to grab it. This also means that your node’s Control Surface (using Iron Man terms here) will be exposed on the local network. I think this should be fine for a home network.
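To grab the kube config for the next molecule, something like the following should work. The output name here is a guess; run terraform output to see what the molecule actually exports.

# write the kube config to a file (output name assumed)
$ terraform output -raw kube_config > ~/.kube/homelab.yaml

# point kubectl at it and sanity-check the cluster
$ export KUBECONFIG=~/.kube/homelab.yaml
$ kubectl get nodes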

If the k3s molecule fails, you can check the logs of the k3s service to fix it.

# view the k3s service logs, newest entries first
$ sudo journalctl -u k3s.service -r | less

# optionally flush and clear out old journal logs before retrying
$ sudo journalctl --flush --vacuum-size=1B

Enrich the Server (cluster-resources molecule)

I’m going to refer to our setup as a Server until I have fully tested a multi-node setup. Let’s check out the second molecule, which installs various cool things on the Server.

$ cd molecules/cluster-resources

Again, feel free to comment out the http backend if you’re using local Terraform state.

Before running this, make sure you check out the applications being installed in the inputs.tf and inputs.tfvars files. For example, if you don’t want to use a custom DNS, you can use 127.0.0.1.nip.io to access the Hajimari Dashboard and comment out pihole from the list.

You need to get the kube config file from the Terraform output and place it in the file system. That path needs to be passed to the var.cluster.config_path variable.
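Going by the variable name, the tfvars entry presumably looks something like this; check inputs.tf for the exact shape:

cluster = {
  config_path = "~/.kube/homelab.yaml"
}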

You might want to run a plan for this if you want to see all the changes (the list is pretty long). You can also modify the list of external applications being installed, and then apply the changes. If you don’t have a good Internet connection, a few installs will fail with ImagePullBackOff. In that case, just delete those Pods and rerun the molecule (see the commands after the apply output below).

$ terraform apply -var-file=local.inputs.tfvars
helm_release.external_apps["grafana-agent-operator"]: Refreshing state... [id=grafana-agent-operator]
helm_release.external_apps["ingress-nginx"]: Refreshing state... [id=ingress-nginx]
helm_release.external_apps["prometheus"]: Refreshing state... [id=prometheus]
helm_release.external_apps["opentelemetry-collector"]: Refreshing state... [id=opentelemetry-collector]
helm_release.external_apps["grafana"]: Refreshing state... [id=grafana]
helm_release.external_apps["loki"]: Refreshing state... [id=loki]
kubernetes_namespace.homelab_ns: Refreshing state... [id=homelab]
helm_release.external_ingresses["grafana"]: Refreshing state... [id=grafana-ingress]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # helm_release.external_apps["grafana-agent-operator"] will be created
  + resource "helm_release" "external_apps" {
      + atomic = false
      + chart  = "grafana-agent-operator"
      ...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
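If some charts got stuck on ImagePullBackOff, you can find and delete the affected Pods like so, and their controllers will recreate them (namespaces vary by application):

# find pods stuck pulling images
$ kubectl get pods -A | grep ImagePullBackOff

# delete a stuck pod so its controller recreates it
$ kubectl delete pod <pod-name> -n <namespace>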

And that’s it. You have your HomeLab or a local Kubernetes Environment ready.

In the next part I’ll discuss Observability and Monitoring, so stay tuned for that.

— Sahu
