
Building a Multi Cluster Authentication Portal

August 24, 2020

by Marc Boorshtein

TL;DR

  • Support authentication to multiple clusters from a single portal
  • Use the dashboard and kubectl on each cluster
  • Integrate both managed and on-premises clusters
  • Automate cluster authentication and onboarding

Kubernetes clusters are like potato chips: you can't have just one! You likely have at least a development and a production cluster. You may also have region-specific or application-specific clusters. Whatever your reason for running multiple clusters, you still need to manage access to them. We're going to walk through two approaches that use OpenUnison to provide a multi-cluster authentication portal for your clusters. You'll be able to give your developers and administrators a single point of access for kubectl, the dashboard, and any other application that supports OpenID Connect.

Single Management Cluster

One cluster to rule them all, and in identity bind them.

It's been a while since I saw Lord of the Rings, but I'm pretty sure that's what was written in Elvish inside the Ring of Power. In this model, a single "management" cluster hosts a central OpenUnison that is the main authentication point for all access. This OpenUnison is a portal to the other clusters. This model works well if you don't have a centralized SAML2 or OpenID Connect identity provider to host authentication and are instead relying on LDAP or Active Directory. The downside to this approach is that your management cluster can become a single point of failure. The good news is that OpenUnison works well in a highly available environment, so if your cluster is highly available, that helps mitigate the risk. A major advantage to this approach is that you maintain control of the connections to your other clusters. Even if you have a centralized SAML2 or OpenID Connect identity provider, a management cluster removes the risk of waiting on a centralized authentication team for every cluster that needs to be onboarded. Since the entire integration process is API driven, you can completely automate new cluster onboarding with authentication!

Management Cluster

The above diagram shows how we'll integrate each cluster. Our management cluster connects to Active Directory using our Login Portal for Active Directory. The other two clusters will run the Login Portal for OpenID Connect, using the first cluster as our identity provider. Using this approach provides several capabilities:

  • Configuration-less kubectl login - Use kubectl oulogin --host=mycluster.domain.com to log in to your cluster without having to pre-configure kubectl
  • Secure Dashboard Access - Access the dashboard on each cluster without having to log in again
  • Support for both Cloud Managed and On-Premises Clusters - Provide a single access point for all your clusters, regardless of where they run
  • Localized Deployments - Allows for cluster-specific customization without affecting other clusters

First, collect what you need for your management cluster:

  1. The CA certificate for Active Directory's LDAPS listener
  2. A service account OpenUnison will use to bind to Active Directory

You'll also need to update the template values.yaml from the deployment instructions. I won't reproduce the whole file, but the pieces you're most likely to touch look roughly like the sketch below; the hostnames follow my lab's nip.io pattern, and the domain and bind DN are placeholders you'll swap for your own:
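network:
  openunison_host: "k8sou.apps.192-168-2-144.nip.io"    # host users will use to reach OpenUnison
  dashboard_host: "k8sdb.apps.192-168-2-144.nip.io"     # host users will use to reach the dashboard

active_directory:
  base: cn=users,dc=domain,dc=com                       # search base for your users
  host: adds.domain.com
  port: "636"
  bind_dn: cn=ou_svc_account,cn=users,dc=domain,dc=com  # the service account we collected
  con_type: ldaps

trusted_certs:
  - name: ldaps
    pem_b64: ...                                        # base64-encoded LDAPS CA certificate

Next, let's deploy our login portal: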

$ helm repo add tremolo https://nexus.tremolo.io/repository/helm/
$ helm repo update
$ kubectl create ns openunison
$ helm install openunison tremolo/openunison-operator --namespace openunison
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

Next, create a Secret to hold our service account's password along with the secrets used for the dashboard and OpenUnison's keystore.

apiVersion: v1
kind: Secret
metadata:
  name: orchestra-secrets-source
  namespace: openunison
type: Opaque
data:
  AD_BIND_PASSWORD: JGVjcmV0MTIz            # the service account's password, base64-encoded
  K8S_DB_SECRET: aW0gYSBzZWNyZXQ=           # arbitrary; see the note below
  unisonKeystorePassword: aW0gYSBzZWNyZXQ=  # arbitrary; see the note below

The values for K8S_DB_SECRET and unisonKeystorePassword are arbitrary; they should be long, random strings that don't contain an ampersand (&).
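For example, you can generate a value with openssl and then encode it for the Secret's data section (any random generator works; base64 output never contains an ampersand):

$ openssl rand -base64 32                 # generate a random value
$ echo -n '<value from above>' | base64   # encode it for the Secret's data section

Finally, we'll deploy the Helm chart to launch OpenUnison: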

$ helm install orchestra tremolo/openunison-k8s-login-activedirectory --namespace openunison -f /path/to/values.yaml

Once OpenUnison is deployed and running, you should be able to log in and see the main screen.

We could start adding clusters now, but this main screen would quickly become difficult to manage. Next we'll enable the organizations tree on this screen to make it easier to organize our clusters and other applications. Edit the orchestra OpenUnison object (kubectl edit openunison orchestra -n openunison), find the non_secret_data section, and add the below to the list:

    - name: SHOW_PORTAL_ORGS
      value: "true"

Once OpenUnison redeploys, you'll have a new section at the top of the portal showing an organization tree. Your badges for the dashboard and token are no longer on the front screen; click on the "Local Deployment" organization and your icons are back! Now that our main cluster is deployed, we can start adding new clusters.

Adding a New Cluster

Adding a new cluster is a simple process. First, add a key called cluster2 to the orchestra-secrets-source Secret. Save the value you used for this key; we'll need it when we deploy OpenUnison into the new cluster. Next, download a template that contains the objects we'll need to create in our management cluster.
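As a sketch, you can add the key with kubectl patch; the value here is a placeholder you should replace with something long and random:

$ kubectl patch secret orchestra-secrets-source -n openunison \
    -p '{"data":{"cluster2":"'"$(echo -n 'im a secret too' | base64)"'"}}'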

The first object is an Org that will give us a new entry in our navigation tree for the cluster. Update metadata.name and spec.description as needed. Also, change spec.uuid to a new unique id; I like version 4 (random) UUIDs, and there are handy sites (or the uuidgen command) that will generate these for you.
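Abridged, the Org looks something like the below; the template you downloaded includes additional fields (such as where the Org sits in the tree), and the uuid shown is just an example:

apiVersion: openunison.tremolo.io/v1
kind: Org
metadata:
  name: cluster2
  namespace: openunison
spec:
  description: "My second cluster"
  uuid: 04901973-5f4c-46d9-9e22-55e88e168776   # generate a fresh version 4 UUID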

The next block creates the trust between the OpenUnison you're going to deploy to your new cluster and the OpenUnison on your primary cluster. We'll keep the metadata.name and spec.clientId for this cluster, but you'll want to update them for additional clusters. Notice that spec.clientSecret points to the Secret we just updated. We do need to update spec.redirectURI to point to the host OpenUnison will run on for cluster2. For instance, the OpenUnison for my second cluster will run on k8sou.apps.192-168-2-131.nip.io, so my value is https://k8sou.apps.192-168-2-131.nip.io/auth/oidc.
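Assuming the field shapes from the downloaded template, the Trust looks roughly like this (abridged):

apiVersion: openunison.tremolo.io/v1
kind: Trust
metadata:
  name: cluster2
  namespace: openunison
spec:
  clientId: cluster2
  clientSecret:
    secretName: orchestra-secrets-source   # the Secret we just updated
    keyName: cluster2                      # the key we just added
  redirectURI:
    - https://k8sou.apps.192-168-2-131.nip.io/auth/oidc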

The last two objects are the "badges" for our portal. These objects are very large because each icon is embedded as a base64-encoded PNG. The first PortalUrl is for the new cluster's dashboard. Update spec.org to match your Org's spec.uuid. Also update spec.url to point to the same value as network.dashboard_host in the values.yaml you'll use to deploy the new OpenUnison instance for your new cluster.

The final object is the badge your users will click to get their kubectl commands. Just as with the previous PortalUrl object, update spec.org to match your Org's spec.uuid, and update the spec.url host to point to the host your new cluster's OpenUnison will run on. Once updated, add the objects to your cluster (an abridged example of the dashboard badge is below). Then refresh your portal screen and you'll see a new organization called "cluster2" with badges for the dashboard and tokens for your second cluster!
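Here's a heavily abridged sketch of the dashboard PortalUrl; the icon is elided, the label is made up, and any fields not discussed above come from the template:

apiVersion: openunison.tremolo.io/v1
kind: PortalUrl
metadata:
  name: cluster2-dashboard
  namespace: openunison
spec:
  label: Cluster2 Dashboard
  org: 04901973-5f4c-46d9-9e22-55e88e168776   # matches the Org's spec.uuid
  url: https://k8sdb.apps.192-168-2-131.nip.io
  icon: ...                                   # base64-encoded PNG, from the template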

These badges won't work yet because we don't have an OpenUnison running on our new cluster. Our last step is to create it. We'll need the certificate for our management OpenUnison; you can get it from the tokens screen of the management OpenUnison. Just as with our main cluster, we need to deploy the OpenUnison operator and the dashboard:

$ helm repo add tremolo https://nexus.tremolo.io/repository/helm/
$ helm repo update
$ kubectl create ns openunison
$ helm install openunison tremolo/openunison-operator --namespace openunison
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

Next, create a Secret in your second cluster as per the deployment instructions for the OIDC login portal, with OIDC_CLIENT_SECRET matching the cluster2 key in the orchestra-secrets-source Secret in our management cluster. A sketch is below, assuming the same key layout as the management cluster's Secret (the values are placeholders):
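apiVersion: v1
kind: Secret
metadata:
  name: orchestra-secrets-source
  namespace: openunison
type: Opaque
data:
  OIDC_CLIENT_SECRET: aW0gYSBzZWNyZXQgdG9v    # must match the cluster2 key on the management cluster
  K8S_DB_SECRET: aW0gYSBzZWNyZXQ=
  unisonKeystorePassword: aW0gYSBzZWNyZXQ=

Next, create your values.yaml for the OpenUnison that will run in cluster2. You'll need to update the oidc section to point to the management OpenUnison; here's mine: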

oidc:
  client_id: cluster2
  auth_url: https://k8sou.apps.192-168-2-144.nip.io/auth/idp/k8sIdp/auth
  token_url: https://k8sou.apps.192-168-2-144.nip.io/auth/idp/k8sIdp/token
  user_in_idtoken: true
  userinfo_url: https://k8sou.apps.192-168-2-144.nip.io/auth/idp/k8sIdp/userinfo
  domain: ""
  scopes: openid email profile groups
  claims:
    sub: sub
    email: email
    given_name: given_name
    family_name: family_name
    display_name: name
    groups: groups

Finally, with our values.yaml ready to go, deploy the chart on cluster2:

$ helm install orchestra tremolo/openunison-k8s-login-oidc --namespace openunison -f /path/to/values.yaml

Once OpenUnison is up and running, clicking on the badges in your management portal should get you right into your new cluster! You can use the same approach to RBAC in a multi-cluster environment as in a single-cluster environment. User groups flow through the same way as before and are available to your RBAC policies.
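For instance, to make an Active Directory group the admins of your new cluster, bind it like any other Kubernetes group; the group name here is hypothetical, and depending on your configuration it may need to be the group's full distinguished name:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: activedirectory-cluster-admins
subjects:
  - kind: Group
    name: k8s-cluster-admins                 # an AD group from the user's groups claim
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io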

To add new clusters, repeat the steps in this section!

Central Portal, Decentralized Authentication

This scenario is similar to the first, but the management OpenUnison is only used to host links. All of the clusters have their own OpenUnison with a trust to an external identity provider. The biggest advantage of this approach is that there is no single point of failure: if the management OpenUnison is unavailable, you can still get to the other OpenUnisons and their clusters. The deployment process is the same except:

  1. Each cluster's OpenUnison is integrated directly with the identity provider. This means that you're likely using the oidc, saml2, or github version of the login portal.
  2. When you add the links to the management OpenUnison, there's no Trust object. Here's the template without the Trust included.

In this scenario you get a central portal for accessing the dashboards, but don't need to worry about one of your clusters being a single point of failure.

Adding Other Applications

Your clusters are more than just kubectl and the dashboard. These systems mostly support OpenID Connect, so you can add them to your portal too, but that's for another blog!
