
Building an Identity Enabled Kubernetes Cluster on Raspberry Pi

October 1, 2016

by

Marc Boorshtein

Why?  WHY NOT?  OK, so while at DevFestDC this year, Ray Tsang (aka @saturnism) was doing a code demo of Kubernetes with his stack of Raspberry Pis, and I thought it was pretty cool.  Given we have a couple of conferences coming up (the ISSA security conference in Dallas and KubeCon in Seattle), I thought it would be fun to have a stack of our own running Unison and showing off our identity management functions for a Kubernetes cluster.

[Photo: the stack of Raspberry Pis for the k8s cluster]

I already had a Raspberry Pi 2, so I'll use that as the utility node: it will run OpenLDAP and an NFS server for mount points.  This will be an identity-aware k8s deployment, so we're going to run Unison as an identity provider, which will need MariaDB for audit data and MongoDB for group data (we're going to pretend that the OpenLDAP server is Active Directory and we can't make changes to it).

I ordered four more Raspberry Pi Model 3s for the master and three minions.  Everything is networked together using a TP-Link 8-port Ethernet switch and powered using a 6-port Anker USB power source.

The first thing I had to decide was which version of Kubernetes to use.  1.4 had just come out, and I had heard so much about how kubeadm made everything so silly easy that I decided I'd go that route.  The kubeadm install instructions said that both Ubuntu and CentOS variants were supported.  Looking at the apt packages I saw arm builds, but nothing comparable for CentOS, so Ubuntu was the route I wanted to go.  Some quick googling turned up an Ubuntu MATE 16.04 image built for the Raspberry Pi, so I downloaded it and started my adventure.
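For reference, the package setup from the kubeadm getting-started guide at the time boiled down to something like the sketch below (run as root).  The repository URL and package names are from the 1.4-era docs, so double-check them against the current instructions before copying:

-- CODE language-bash --
# Add Google's signing key and the Kubernetes apt repository
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
# docker.io comes from Ubuntu; kubelet, kubeadm, kubectl, and the
# CNI plugins all shipped as arm debs from the Kubernetes repo
apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni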

A quick note: I DID try Raspbian Jessie.  I never got far enough to make it work, but there were a couple of extra things I had to do.  Before running the install instructions I had to install Docker from Docker's own repository, since the version that ships with Jessie is very old.
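If you're trying Jessie yourself, Docker's convenience script was the usual way to get a current arm build; this is just a sketch, and the script installs whatever Docker is currently publishing:

-- CODE language-bash --
# Install a current Docker release straight from Docker (the Jessie package is ancient)
curl -sSL https://get.docker.com | sh
# Optional: let Raspbian's default pi user run docker without sudo
usermod -aG docker pi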

Once I had flashed my card (a Samsung EVO 32GB) and booted up my new master, we were ready to go.  The first thing I did was an apt-get update and upgrade…which bricked everything…multiple times.  So after starting fresh a few times, I just figured I'd skip the upgrade.  I also set up a static IP address and began the install based on the kubeadm instructions.

The first problem I ran into was that Docker wouldn't start.  It turned out cgroups weren't enabled, so after some more googling I found that you need to tell the kernel to enable them by editing /boot/cmdline.txt and adding "cgroup_enable=cpuset cgroup_enable=memory" BEFORE the elevator option.  Gave it a quick reboot and we made some progress.

Docker started and containers began running, but the api-server kept crashing due to a nil pointer error.  I opened a ticket on GitHub thinking this looked like a bug.  It turned out it was, and it had already been fixed, but the fix hadn't made it into the 1.4.0 release.  I could have waited until 10/10 for 1.4.1…but I'm really not that patient.  So with some help I got 1.4.1 compiled locally, and thanks to a great suggestion by Lucas Käldström (aka luxas on GitHub and @kubernetesonarm on Twitter) I got the docker container needed for the api-server deployed locally, per the GitHub ticket:

  1. Compile the arm version of Kubernetes
  2. Create a directory on the master called /root/local/api-server
  3. Copy the kube-apiserver binary to /root/local/api-server
  4. Create the below Dockerfile in /root/local/api-server
  5. Build locally: $ docker build --tag gcr.io/google_containers/kube-apiserver-arm:v1.4.0 . (the tag stays v1.4.0 so it matches the image name the kubeadm manifests expect, even though the binary is 1.4.1; see the full sequence after the Dockerfile)

Here’s the Dockerfile:

-- CODE language-docker --
FROM armel/busybox
COPY kube-apiserver /usr/local/bin/
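Pulling the five steps together, the whole sequence looked something like this.  The make invocation and output path are my recollection of the 1.4-era build tooling (KUBE_BUILD_PLATFORMS selects the target platform), so treat this as a sketch, not gospel:

-- CODE language-bash --
# 1. Cross-compile an arm kube-apiserver from the v1.4.1 source tree
KUBE_BUILD_PLATFORMS=linux/arm make all WHAT=cmd/kube-apiserver

# 2-3. Stage the binary next to the Dockerfile on the master
mkdir -p /root/local/api-server
cp _output/local/bin/linux/arm/kube-apiserver /root/local/api-server/

# 4-5. Build the image under the name the kubeadm manifests reference
cd /root/local/api-server
docker build --tag gcr.io/google_containers/kube-apiserver-arm:v1.4.0 .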

Now I was able to run kubeadm init, and everything came up.  I was able to add my first minion no problem.  Finally, I disabled the UI to save memory by running "graphical disable" and rebooted.  kube-dns doesn't want to start, so that's what I'll be looking at next.  I'll also need to build out my OpenLDAP server and NFS server.  Then we can start having some REAL fun identity enabling the cluster!
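For anyone following along, the 1.4 init/join dance is only two commands; the token and master IP below are placeholders for the values kubeadm init prints for you:

-- CODE language-bash --
# On the master: bring up the control plane
kubeadm init

# On each minion: join with the token printed by kubeadm init
# (<token> and <master-ip> are placeholders)
kubeadm join --token <token> <master-ip>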

Thanks again to Lucas Käldström for helping me get this up and running!
