Containers and Kubernetes on macOS: A Setup for 2022
--
Update: I have an updated version of this article which builds upon the setup described below; you can find it here — https://mrsauravsahu.medium.com/docker-and-kubernetes-on-macos-a-setup-for-2022-aa01819920b6
I’m writing this blog based on this post I shared on LinkedIn — https://www.linkedin.com/posts/mrsauravsahu_docker-kubernetes-macos-activity-6877178836908347392-rTnv
This came after the announcement of new subscriptions for Docker Desktop; you can read more about it here — https://www.docker.com/blog/updating-product-subscriptions
So, in a nutshell, Docker Desktop now requires a paid license for larger companies. That leaves us with a choice: buy a license to keep using Docker Desktop, or evaluate some open-source awesomeness that does the same things.
Just to be clear, “Docker”, as in Docker images, is still open source. The license is for the Docker Desktop app. Read more about the internals of Docker and this naming confusion here — https://www.tutorialworks.com/difference-docker-containerd-runc-crio-oci
I’m going to assume that you, the reader, are a software engineer or someone who uses Docker and containers for development, and that you are fairly comfortable using the CLI (Command Line Interface). In an upcoming blog, I’ll share which tools you can use to manage your containers with a GUI (Graphical User Interface).
What Docker Desktop used to help me with
There were quite a few features I used Docker Desktop for. Let’s list them below —
- Creating and managing Docker Images
- Using and Publishing Images from a Container Registry
- Testing out Kubernetes applications on a local Kubernetes Cluster
- And, probably the most important, the ability to turn off the Docker server when it’s not needed
Few things about Docker and the OCI
From the website opencontainers.org
— “The Open Container Initiative (OCI) is an open governance structure for the express purpose of creating open industry standards around container formats and runtimes.”
When we talk about containers, we are really creating OCI-compliant images. Docker is one such project, but there are other projects that can do this too. “Docker” is sometimes used interchangeably with OCI-compliant images.
The choices — and containerd
I read about these alternatives only when the licensing changes happened, and this blog was really helpful — https://jfrog.com/knowledge-base/the-basics-7-alternatives-to-docker-all-in-one-solutions-and-standalone-container-tools
Out of all of these, containerd kind of stood out for some reason. Nothing specific, but I thought of trying it out — https://containerd.io
If you don’t know, most of these projects run on bare metal on Linux and Windows (through the Windows Subsystem for Linux), but on macOS you’re most likely running a Linux VM, albeit hidden from you. Applications like Docker Desktop hide this from us to create a more seamless experience, forwarding ports, helping mount the filesystem, and whatnot.
So, after researching alternatives to Docker Desktop on macOS, I came across quite a few interesting projects. Mainly, I was looking for anything that helps create OCI-compliant images, lets me use them in a local Kubernetes Cluster, and provides an easy-to-use image registry. It would also help if it integrates well with local ports.
Now, let’s see a few of such projects —
Rancher Desktop (rancherdesktop.io)
Verbatim from their website, “Rancher Desktop is an open-source desktop application for Mac, Windows and Linux. It provides Kubernetes and container management.”
You can also choose which Kubernetes version to run, and the image registry connects directly to this cluster.
MicroK8s (microk8s.io)
MicroK8s is a project by Canonical Ltd., who also maintain Ubuntu. It helps you create clusters spanning multiple nodes and is a great choice for running Kubernetes in your on-premises cloud, if you have one. I use it on my PC when running Ubuntu.
k0s (k0sproject.io)
k0s focuses on ease of installing Kubernetes and has zero host dependencies. It also uses containerd as the default container runtime, but can support other runtimes. Unfortunately, it runs only on Linux and, experimentally, on Windows Server 2019.
colima (github.com/abiosoft/colima)
“Container runtimes on macOS (and Linux) with minimal setup.” And this is what I ended up with. The setup is really easy and it supports docker or containerd as its container runtime.
It has great support for a local Kubernetes Cluster, and images created through containerd can be used directly in the cluster. It also supports ports being forwarded from the Linux VM to the mac host, and mounting the filesystem into the containers.
Colima (at least at the time of writing) seems like the perfect open-source alternative to Docker Desktop (minus the GUI part). Now let’s see how to set up and use colima.
The Setup
The best part is that colima is really easy to set up. I use Homebrew, so it’s just a single command to get everything ready.
$ brew install colima
Now, just check that the install worked —
$ colima version
So, now colima is installed, but it won’t consume any resources until we start it. Under the hood, it uses a Linux VM to run everything.
$ colima start --with-kubernetes -r containerd
In the above command, I’m passing the --with-kubernetes switch because I want a local Kubernetes Cluster as well. The -r switch tells colima which container runtime to use; the default is docker, and I’m passing containerd because, remember, we can’t run the Docker server without Docker Desktop. There are more options you can pass in too, like the specifications for the VM — run colima start --help to check those.
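For example, the VM can be sized explicitly at start time. The --cpu, --memory, and --disk flags below are taken from colima’s help output at the time of writing — confirm them with colima start --help on your version:

```shell
# Start colima with Kubernetes, containerd, and an explicitly sized VM.
# --cpu is the number of CPUs, --memory and --disk are in GiB.
colima start --with-kubernetes -r containerd --cpu 4 --memory 8 --disk 60

# Check whether the VM is running and how it is configured.
colima status

# When you're done for the day, free the resources again — this is the
# "turn off the Docker server when not needed" feature from earlier.
colima stop
```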
To interact with images and containers, we’ll use nerdctl, the CLI tool for containerd. You can install it separately, or consume it through colima. Let’s try to run the hello-world image from Docker Hub —
$ colima nerdctl run hello-world
...
Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
So, that worked very similarly to the docker command. You can create, manage, and publish your images now. To use these images in the local Kubernetes Cluster, they need to be in the k8s.io namespace — a containerd namespace, not a Kubernetes one.
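To see this namespacing in action — a quick sketch, assuming colima and nerdctl are set up as above — you can list images per containerd namespace:

```shell
# Images built in the default containerd namespace are invisible to Kubernetes...
colima nerdctl -- images

# ...while images in the k8s.io namespace are what the local cluster sees.
colima nerdctl -- -n k8s.io images

# You can also build an image straight into the k8s.io namespace:
colima nerdctl -- -n k8s.io build -t simple-node-server .
```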
An Example — Images & Containers
Let’s take this simple ping node.js API and try to run it in our local Kubernetes Cluster.
On a GET /, it returns {"message": "ping"}. This is our source code; it doesn’t require any dependencies.
$ cat index.js
const http = require('http')

const ping = (_, res) => {
  res.writeHead(200)
  res.end(JSON.stringify({
    message: 'ping'
  }))
}

const server = http.createServer(ping)

const port = process.env.PORT || 5000
const host = process.env.HOST || 'localhost'

server.listen(port, host)
console.log(`started server at ${host}:${port}`)
Because I’m already used to running commands with docker, I’m going to set up this alias from docker to nerdctl.
alias docker='colima nerdctl -- -n k8s.io'
This alias makes sure all images I create are being placed in the k8s.io containerd namespace. If you don’t want to use your images in Kubernetes, you can ignore the namespace switch.
Let’s build our image with this Dockerfile —
$ cat Dockerfile
FROM node:lts-alpine
WORKDIR /app
COPY index.js ./
CMD node index.js

$ docker build -t simple-node-server .
Now, let’s run the container.
$ docker run -d -p 8081:80 -e HOST=0.0.0.0 -e PORT=80 simple-node-server
Even though the container is running inside a Linux VM, the port gets forwarded to the host. (This might take some time; sometimes the URL works only after a few seconds.)
$ curl http://localhost:8081
{"message":"ping"}%
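With the alias from earlier in place, the usual container housekeeping works too — nerdctl mirrors most of the docker CLI, though check nerdctl --help for any subcommand you rely on:

```shell
# List running containers (nerdctl mirrors the docker CLI here).
docker ps

# Grab the newest container's ID and inspect its logs.
CID=$(docker ps -q | head -n 1)
docker logs "$CID"

# Stop and remove the container when you're done with it.
docker stop "$CID"
docker rm "$CID"
```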
Example Continued — with Kubernetes
Now let’s deploy this to our local Kubernetes Cluster. I have this deployment.yaml file; I’m creating a Pod and then a Service around it.
apiVersion: v1
kind: Pod
metadata:
  name: simple-node-server
  labels:
    app: simple-node-server
spec:
  containers:
    - name: main
      image: simple-node-server:latest
      imagePullPolicy: IfNotPresent
      resources:
        limits:
          memory: 250Mi
          cpu: 250m
      env:
        - name: PORT
          value: "80"
        - name: HOST
          value: "0.0.0.0"
---
apiVersion: v1
kind: Service
metadata:
  name: simple-node-server
spec:
  selector:
    app: simple-node-server
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Let’s create a namespace for our app.
$ kubectl create ns simple-app
We can now create the resources with the above yaml file.
$ kubectl apply -f deployment.yaml -n simple-app
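Before hunting for the port, it’s worth checking that the Pod actually came up — this is standard kubectl, nothing colima-specific:

```shell
# Watch the Pod come up; it should reach the Running state with READY 1/1.
kubectl get pods -n simple-app

# If it is stuck in ErrImagePull instead, the image probably wasn't built
# into the k8s.io containerd namespace (see the alias earlier in the article).
kubectl describe pod -n simple-app simple-node-server
```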
If you check the Service, it is of type NodePort, so we should be able to hit it directly, as colima forwards these ports from the VM. To get the port, we can run this kubectl command —
$ kubectl get svc -n simple-app simple-node-server -o json | jq '.spec.ports[0].nodePort'
30262
Mine’s running at http://localhost:30262; check the port that gets assigned on your machine, and you’ll see the same JSON response {"message":"ping"}
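You can chain the port lookup and the request together; and if you’d rather not depend on the NodePort at all, kubectl port-forward works here just like on any other cluster (the local port 8080 below is an arbitrary choice):

```shell
# Look up the assigned NodePort and curl it in one go.
PORT=$(kubectl get svc -n simple-app simple-node-server \
  -o jsonpath='{.spec.ports[0].nodePort}')
curl "http://localhost:${PORT}"

# Alternative: forward a fixed local port to the Service instead.
kubectl port-forward -n simple-app svc/simple-node-server 8080:80 &
curl http://localhost:8080
```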
Colima seems like a really cool project, and I’m excited to see what’s next for it. So, that’s my setup for Containers and Kubernetes for 2022. Hope this is helpful! Keep creating things.
-S