I was doing some cybersecurity research on YouTube one day when I came across a somewhat related video where a guy built an 8-node Raspberry Pi cluster and got it to run Kubernetes, or K8s as it’s widely known. Well, technically it runs K3s, a lightweight Kubernetes distribution.
I was hooked and decided to give it a try, so I purchased everything I needed to run an 8-node cluster of Raspberry Pi 4 8GB boards. It was a lot of fun, and by fun I mean I ran into plenty of roadblocks and challenges. In fact, I am still stuck at one spot, waiting on Rancher or someone in the Rancher community to lend me a hand: their install script for the Rancher GUI isn’t working for me.
I decided to document what I did because the video leaves out quite a bit. Some of the config files just don’t work, and I was not able to locate any of the documentation the author of the video, Chuck, mentions several times. Instead, I got to know the Rancher docs very well, which is a good thing. I reached out to Chuck about the documentation, but he has not responded. I imagine my email is just one of many, many emails that he may eventually get to.
I’ll start with what I purchased and where. Then I’ll go into the setup and provide some working YAML files. My hope is that you will be able to follow this guide step by step and end up with a Raspberry Pi cluster running Kubernetes on bare metal. How cool is that? And you don’t need 8 nodes; that’s super overkill. You can technically do this with just one node, since a single node can act as both the master and a worker, but two makes more sense.
Below is a list of suggestions for what to get. At a minimum you will need a Raspberry Pi, power, and networking. This is what I purchased to get things going for an 8-node cluster.
The first thing we will need to do is get our first node online. These steps will apply for each node you have in your cluster. We are going to install Raspberry Pi OS Lite which you will download when imaging the memory card. To do this you will use the Raspberry Pi Imager.
Ready for some steps? Here we go.
1. With the card flashed and still mounted, open the boot volume. On macOS that’s `/Volumes/boot/`.
2. Edit `config.txt` and at the bottom of the file add `arm_64bit=1`.
3. Edit `cmdline.txt` and, on the same single line as the existing parameters, add `cgroup_memory=1 cgroup_enable=memory ip=192.168.1.170::192.168.1.1:255.255.255.0:rpimaster:eth0:off`. The values that change from node to node are the IP and the hostname. So in this example, `192.168.1.170` would change on the next node, and `rpimaster` would change as that’s the hostname.
4. Still in the boot volume, run `touch ssh` to create an empty file named `ssh`, which enables SSH on first boot.
5. Eject the card, boot the Pi, and connect with `ssh pi@192.168.1.170`. The password will be `raspberry`. Woohoo! You are now sitting on your RPi. Repeat these steps for as many nodes as you have.
6. On each node, switch iptables to legacy mode. First flush the rules with `sudo iptables -F`, then run `sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy`, and lastly run `sudo update-alternatives --set iptables /usr/sbin/iptables-legacy`. Reboot afterwards (`sudo reboot`) so the change takes effect.
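If you’re prepping several cards, the per-node `cmdline.txt` settings from step 3 can be generated instead of hand-typed. This is just a sketch of my own: the `node_cmdline` helper assumes the `192.168.1.170`-and-up addressing and the `rpimaster`/`rpiN` hostnames used in this guide.

```shell
#!/bin/sh
# Sketch: print the cmdline.txt network settings for node i (0 = master).
# Assumes the 192.168.1.170+ addressing scheme and hostnames from this guide.
node_cmdline() {
  i=$1
  ip="192.168.1.$((170 + i))"
  if [ "$i" -eq 0 ]; then host="rpimaster"; else host="rpi$i"; fi
  echo "cgroup_memory=1 cgroup_enable=memory ip=${ip}::192.168.1.1:255.255.255.0:${host}:eth0:off"
}

# Print the settings for all eight nodes
for i in 0 1 2 3 4 5 6 7; do
  node_cmdline "$i"
done
```

Paste each line onto the end of the matching card’s `cmdline.txt` (everything in that file stays on one line).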
With the nodes prepped, install K3s on the first node, which will be our master:

```
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s -
```

Sit back and watch it go. Once the install finishes, you can run a simple command to see if it worked.
```
root@rpimaster:/home/pi# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
rpimaster   Ready    control-plane,master   21s   v1.21.4+k3s1
```

Looking good. This master node carries the `control-plane,master` roles.
Now it’s time to register the rest of the nodes. To do this we are going to tell each node about the master node using a token.
First, grab the token from the master node:

```
root@rpimaster:/home/pi# cat /var/lib/rancher/k3s/server/node-token
K10f07158496cafcbd96f225afb04c391d385d967d8009a954dc334afa0aebffaa5::server:332bebecd5a2ba35f9914e75a05bf14f
```
Now on each node (you will ssh in to each one) run this command.
```
curl -sfL https://get.k3s.io | K3S_TOKEN="K10f07158496cafcbd96f225afb04c391d385d967d8009a954dc334afa0aebffaa5::server:332bebecd5a2ba35f9914e75a05bf14f" K3S_URL="https://192.168.1.170:6443" K3S_NODE_NAME="rpi1" sh -
```
The only value you will change from node to node is `K3S_NODE_NAME="rpi1"`.
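If you’d rather not paste that command into seven SSH sessions by hand, you can script the joins. A minimal sketch, with a few assumptions: the `join_cmd` helper is my own, the token and master IP are the example values from above (substitute yours), and the commented ssh line assumes your nodes resolve by hostname.

```shell
#!/bin/sh
# Sketch: build the K3s agent join command for each worker node.
# TOKEN and MASTER are the example values from this guide -- use your own.
TOKEN="K10f07158496cafcbd96f225afb04c391d385d967d8009a954dc334afa0aebffaa5::server:332bebecd5a2ba35f9914e75a05bf14f"
MASTER="https://192.168.1.170:6443"

join_cmd() {
  # $1 = node name, e.g. rpi1
  echo "curl -sfL https://get.k3s.io | K3S_TOKEN=\"$TOKEN\" K3S_URL=\"$MASTER\" K3S_NODE_NAME=\"$1\" sh -"
}

for n in rpi1 rpi2 rpi3 rpi4 rpi5 rpi6 rpi7; do
  # Uncomment to actually run the join on each node over SSH:
  # ssh "pi@$n" "$(join_cmd "$n")"
  join_cmd "$n"
done
```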
And that’s it! To see our nodes, run `kubectl get nodes`. You should see something like the following output.
```
pi@rpimaster:~ $ kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
rpimaster   Ready    control-plane,master   12d   v1.21.4+k3s1
rpi7        Ready    <none>                 11d   v1.21.4+k3s1
rpi2        Ready    <none>                 11d   v1.21.4+k3s1
rpi6        Ready    <none>                 11d   v1.21.4+k3s1
rpi5        Ready    <none>                 11d   v1.21.4+k3s1
rpi1        Ready    <none>                 11d   v1.21.4+k3s1
rpi4        Ready    <none>                 11d   v1.21.4+k3s1
rpi3        Ready    <none>                 11d   v1.21.4+k3s1
```
We are running Kubernetes on bare metal. 🙂
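As a quick sanity check, you can count how many nodes report Ready. `count_ready` here is a little helper of my own that parses `kubectl get nodes --no-headers` style output:

```shell
#!/bin/sh
# Sketch: count nodes whose STATUS column reads "Ready".
# Reads `kubectl get nodes --no-headers` style output on stdin.
count_ready() {
  awk '$2 == "Ready" { n++ } END { print n+0 }'
}

# On the cluster you would pipe the live output:
#   kubectl get nodes --no-headers | count_ready
# Here we feed it a two-line sample to show the idea:
printf 'rpimaster Ready control-plane,master 12d v1.21.4+k3s1\nrpi1 Ready <none> 11d v1.21.4+k3s1\n' | count_ready
# prints 2
```

If the number matches your node count, everything registered.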
Next up, we will be deploying NGINX across our nodes. Continue on to Part II!