
# Kubernetes on RaspberryPi with microk8s

We are going to set up a Kubernetes service on a RaspberryPi using MicroK8s as a standalone cluster. It's unnecessary, not suitable for production, and a bit tricky... but it's fun 😁. Let's play around a bit!

Containers are becoming ever more widespread these days, along with orchestrators such as OpenShift or Kubernetes. Whether we have to deal with them continuously in our day-to-day work, or we simply want to tinker with a personal project, it can be interesting to have a small laboratory at home for testing.

With our beloved RaspberryPi we can set up a small cluster (for now we will leave it as a standalone node). It will not serve as a production environment, and it is not particularly quick or easy to build, but it can be tremendously entertaining.

To start, what we need is very simple:

- RaspberryPi 4 with at least 4GB of RAM
- 64GB microSD card
- Power adapter of at least 3A to power the Raspberry
- Another computer to connect over ssh

## Preparation

The first thing will be to prepare the OS of our RaspberryPi. This time we are going to use Ubuntu Server.

To do this, we download the corresponding image from its official website and flash it onto our SD card using [Balena Etcher](https://www.balena.io/etcher/).

We plug in the power adapter and boot up our board. The initial boot process usually takes about 5 minutes before it allows us to log in.

If we do not have a monitor and keyboard, we access our Raspberry through ssh. The default login is:

- user: `ubuntu`
- pass: `ubuntu`

```bash
ssh ubuntu@192.168.200.204
```

Once inside, the first thing it will ask us to do is set a new password.

We update the packages:

```bash
sudo apt update
sudo apt upgrade -y
sudo apt autoremove -y
```

We prepare the OS to use cgroups. To do this we add the following to the end of the boot line in the /boot/firmware/cmdline.txt file:

```bash
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
```
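Since cmdline.txt must stay on a single line, appending with `echo >>` would break it. A hedged sketch of doing the edit on a copy first (the sample boot line used as a fallback is just an illustration, not your real one):

```bash
# Work on a copy; fall back to a sample boot line when not on the Pi itself.
cp /boot/firmware/cmdline.txt /tmp/cmdline.txt 2>/dev/null || \
  printf 'console=serial0,115200 root=LABEL=writable rootfstype=ext4' > /tmp/cmdline.txt
# Append the cgroup flags to the end of the single boot line.
sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /tmp/cmdline.txt
# Should list the three flags we just added.
grep -o 'cgroup_[a-z_]*=[a-z0-9]*' /tmp/cmdline.txt
# sudo cp /tmp/cmdline.txt /boot/firmware/cmdline.txt   # then reboot
```

After rebooting, `cat /proc/cmdline` shows whether the kernel actually picked the flags up.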

Initially we connect the Raspberry with a network cable, but we can optionally leave the Wi-Fi configured as a failover:

```bash
sudo nano /etc/netplan/50-cloud-init.yaml
```
```yaml
    wifis:
        wlan0:
            dhcp4: true
            optional: true
            access-points:
                "SSID_name":
                    password: "WiFi_password"
```
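Netplan is strict about key names and indentation, so it can help to stage the stanza in a scratch file and sanity-check it before merging it into 50-cloud-init.yaml. A sketch (the scratch path is my own choice):

```bash
# Stage the Wi-Fi failover stanza and check the key names before editing
# /etc/netplan for real. Note "wifis" and "access-points" exactly.
cat > /tmp/wifi-failover.yaml <<'EOF'
network:
  version: 2
  wifis:
    wlan0:
      dhcp4: true
      optional: true
      access-points:
        "SSID_name":
          password: "WiFi_password"
EOF
grep -q 'access-points:' /tmp/wifi-failover.yaml && echo "keys look right"
# After merging into /etc/netplan/50-cloud-init.yaml, test with automatic
# rollback in case we lose connectivity:
# sudo netplan try
```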

We set our time zone:

```bash
sudo timedatectl set-timezone Europe/Madrid
```

We add a hostname:

```bash
sudo nano /etc/hostname
```

and finally restart:

```bash
sudo reboot
```

## MicroK8s

When setting up our small Kubernetes cluster we have several options. In our case, since we are already running Ubuntu as a base, we are going to go with MicroK8s: it is developed by Canonical itself and distributed as a snap package, which gives us good compatibility and makes it simple to keep up to date.

To do this, it is as simple as installing the package:

```bash
sudo snap install microk8s --classic
```

The syntax to manage our cluster is `microk8s kubectl <command>`. Our intention is to abstract away from this syntax as much as possible, and simplify it by using `kubectl` as an alias.

In this case it is bash; if we used zsh we would have to change the path of the aliases file.

```bash
echo 'export PATH=/snap/bin:$PATH' >> ~/.bashrc
echo "alias kubectl='microk8s kubectl'" >> ~/.bash_aliases
source ~/.bashrc
```
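To confirm the alias is in place in the current shell (bash syntax), we can ask the shell directly:

```bash
# Declare the alias for this shell session and list it back.
alias kubectl='microk8s kubectl'
alias kubectl   # prints the alias definition
```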

We add the necessary permissions to our user (here `ubuntu`):

```bash
sudo usermod -a -G microk8s ubuntu
sudo chown -f -R ubuntu ~/.kube
newgrp microk8s
```

We check that it is running:

```bash
microk8s status --wait-ready
```

We add some essential add-ons, such as the dashboard, ingress, registry and DNS:

```bash
microk8s status --wait-ready
microk8s enable dns ingress dashboard registry
```

A short description of each of them:

```bash
microk8s enable ingress   # exposes HTTP and HTTPS routes from outside the cluster to services inside it
microk8s enable dashboard # web GUI for managing the cluster
microk8s enable dns       # creates DNS records for services and pods
microk8s enable storage   # manages the storage we provide to the cluster's pods
microk8s enable registry  # gives us a local registry to store Docker images
```

We verify that our alias works correctly, and that we have a running node:

```bash
kubectl get nodes
```

## MetalLB

We are going to install a load balancer, using MetalLB:

```bash
microk8s enable metallb
```

It will ask us for a range of reserved IPs to manage. In my home network I am going to reserve these: 192.168.200.220-192.168.200.249.
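Recent MicroK8s releases also accept the pool inline (the `addon:args` form), which skips the interactive prompt; a sketch using the range reserved above, with a quick format check first:

```bash
# Validate the pool format, then hand it to MetalLB non-interactively.
METALLB_RANGE="192.168.200.220-192.168.200.249"
echo "$METALLB_RANGE" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}-([0-9]{1,3}\.){3}[0-9]{1,3}$' \
  && echo "range looks valid"
# microk8s enable metallb:"$METALLB_RANGE"
```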

We are going to launch a small deployment to verify that our balancer is working correctly:

```bash
kubectl create deployment whoami --image=containous/whoami
kubectl expose deployment whoami --type=LoadBalancer --port=80
```
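The two imperative commands above can equivalently be expressed as a declarative manifest, which is easier to version and re-apply. A sketch (the file path and `app` label are my own choice; the apply is left commented):

```bash
# Write the equivalent Deployment + Service manifest to a file.
cat > /tmp/whoami.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 1
  selector:
    matchLabels: {app: whoami}
  template:
    metadata:
      labels: {app: whoami}
    spec:
      containers:
      - name: whoami
        image: containous/whoami
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  type: LoadBalancer
  selector: {app: whoami}
  ports:
  - port: 80
    targetPort: 80
EOF
# kubectl apply -f /tmp/whoami.yaml
```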

We can verify that one of the reserved IPs has been assigned:

```bash
ubuntu@RPI4:~$ kubectl get service whoami
NAME     TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
whoami   LoadBalancer   10.152.183.181   192.168.200.220   80:30341/TCP   4m55s
```

Now we only need to hit the IP from the browser, or with a simple curl:

```bash
ubuntu@RPI4:~$ curl 192.168.200.220
Hostname: whoami-5d56c87b99-fxjvn
IP: 127.0.0.1
IP: ::1
IP: 10.1.117.20
IP: fe80::30ba:88ff:febb:a570
RemoteAddr: 10.0.1.1:51427
GET / HTTP/1.1
Host: 192.168.200.220
User-Agent: curl/7.81.0
Accept: */*

ubuntu@RPI4:~$
```

## Dashboard

To access our dashboard, we need to know the IP and port assigned to it:

```bash
ubuntu@RPI4:~$ kubectl -n kube-system get services | grep kubernetes-dashboard
kubernetes-dashboard   NodePort   10.152.183.148   <none>   443:30353/TCP   15m
ubuntu@RPI4:~$
```

With this we can now access the dashboard from outside our Raspberry Pi:

`https://{server ip}:{port number}`

In my case:

```
https://192.168.200.204:30353
```

When accessing it from the browser, it will ask us for authentication. We need to find out the token; we can obtain it from the MicroK8s configuration itself:

```bash
microk8s config | grep token
    token: NW4vWGJZUEkyMDdPWVZGRnVEVXlBVjhmWHNkaUpoQ1RBN29jVGMyek44Zz0K
```

We copy the token and paste it into the UI.

With this we already have a single-node cluster up and running, ready to work with. Later we will see how to add more nodes to the cluster, remove them, etc.

