
VCluster Enterprise Authentication

by

Marc Boorshtein

In 2021, Loft introduced an open source project for simplifying multi-tenancy: VCluster. VCluster makes multi-tenancy in Kubernetes easier by providing a dedicated API server and virtual nodes for each tenant. The virtual cluster handles the work of synchronizing objects and workloads to the host cluster, giving your workloads and users what appears to be their own cluster. Using VCluster for multi-tenancy addresses several common issues encountered in multi-tenant environments:

  1. It helps isolate workloads by running them against their own API server
  2. It lets users deploy cluster-wide objects like custom resource definitions, ClusterRoles, and ClusterRoleBindings

Once you deploy a VCluster, the first question is "How do I access it?" The VCluster project has a utility, called vcluster, which you can use to access a virtual cluster using a master certificate. For instance, if we run the command:

vcluster connect vcluster -n vcluster -- bash

We'll get a bash shell with a new kubectl configuration. To achieve this, the vcluster command:

  1. Creates a port forward to your virtual cluster via your control plane cluster
  2. Retrieves a Secret that contains a private key and certificate for your cluster
  3. Generates the configuration for access to your cluster

If we inspect our kubectl configuration file, we'll see a certificate with the identity O=system:masters, CN=system:admin that is good for one year. If this identity were lost or compromised, the only way to disable it would be to re-key the cluster. Also, since the identity is a member of the system:masters group, all RBAC is bypassed. Finally, if we create a new namespace with this identity, the virtual cluster's audit logs will show:

.
.
.
  "user": {
    "username": "system:apiserver",
    "uid": "338ff24a-b86f-49ca-9aa5-08b2f7915e86",
    "groups": [
      "system:masters"
    ]
  },
.
.
.

This means that an audit can't tie the request back to the original user who ran the vcluster command to create the namespace. This is likely to be a compliance issue in most enterprises. This approach is relatively simple and works well for bootstrapping and some development scenarios, but it has several drawbacks:

  1. You're using certificate authentication, which has many well-documented drawbacks (the sketch after this list shows how to inspect the certificate yourself)
  2. You're not accessing the cluster as a user, but with a generic account that bypasses RBAC. You could generate certificates for everyone on your team, but that goes back to issue #1
  3. You need to distribute the vcluster command and keep it up to date; in many enterprises, onboarding a new tool can require considerable paperwork
  4. Your enterprise generally has session timeout and other compliance requirements that long-lived certificates violate
  5. You can't use your enterprise's identity system to manage authorization for your virtual cluster
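
To see the first two issues concretely, you can decode the client certificate from the kubeconfig that the vcluster CLI generates. This is just a quick sketch; it assumes the kubeconfig's current context has a single user entry with embedded certificate data, so adjust the jsonpath if yours differs:

kubectl config view --raw --minify -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d \
  | openssl x509 -noout -subject -enddate

The subject will come back as the system:masters/system:admin identity described above, and notAfter will show the one-year expiration.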

This isn't a new problem; in fact, your control plane cluster that hosts VClusters probably has the same problem! You could approach each virtual cluster the same way you handled your control plane cluster. You could set up an OpenID Connect connection to an identity provider, except that by default VCluster uses k3s, which doesn't support OpenID Connect, so you'll need a project like kube-oidc-proxy to make that happen. (It's important to note that k0s, which does support OpenID Connect, is also supported, but it's not the default.) Depending on who owns your OpenID Connect identity provider, this could be a difficult process. These ownership silos are common in enterprises. An easier way is for your control plane to host an identity provider, but then you need to deploy and maintain it, and you need to onboard new virtual clusters into it. OpenUnison can vastly simplify this process while providing your virtual clusters the enterprise security you need!
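
For reference, OpenID Connect integration on a distribution that lets you set API server flags (such as k0s) boils down to settings like these; the issuer URL and claim names below are placeholders for whatever your identity provider uses:

--oidc-issuer-url=https://idp.example.com/auth/realms/kubernetes
--oidc-client-id=kubernetes
--oidc-username-claim=sub
--oidc-groups-claim=groups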

Using OpenUnison for Enterprise Authentication

You may have heard of OpenUnison for integrating authentication into your clusters and cluster management applications. OpenUnison also supports centralizing authentication for multiple clusters. We could go through the manual onboarding process for each virtual cluster, where an admin provisions the VCluster and uses the ouctl command to integrate it into the control plane, but that's just too manual. This is Kubernetes; this process should be automated too! We created a helm chart that you can use to automate the onboarding. This chart launches a container that includes helm, vcluster, ouctl, and kubectl. It will generate your virtual cluster's OpenUnison configuration, launch the ouctl command to integrate your virtual cluster into your control plane's OpenUnison, and create the RBAC bindings needed for your users to access the cluster based on their enterprise groups!

Before you get started, you'll need:

  1. A control plane Kubernetes cluster with an Ingress controller installed (we're going to assume NGINX)
  2. OpenUnison deployed onto your control plane cluster
  3. A VCluster, which we'll deploy below

If you want to try this out quickly, you can leverage the scripts from Kubernetes: An Enterprise Guide to deploy a KinD cluster and OpenUnison with a built-in "Active Directory" using the scripts in our GitHub repository. With a fresh Ubuntu 20.04 image, run:

$ git clone https://github.com/PacktPublishing/Kubernetes---An-Enterprise-Guide-2E.git
$ cd Kubernetes---An-Enterprise-Guide-2E/chapter1
$ ./install-docker.sh
.
.
.
$ cd ../chapter2
$ ./complete-install.sh
.
.
.
$ cd ../chapter6/openunison
$ ./deploy_openunison_imp.sh
.
.
.

Once this is done, you'll have a running cluster with OpenUnison using our example Active Directory. Next, while still SSHed into your VM, edit /tmp/openunison-values.yaml and set openunison.non_secret_data.SHOW_PORTAL_ORGS to "true". It will look like:

openunison:
  replicas: 1
  non_secret_data:
    K8S_DB_SSO: oidc
    SHOW_PORTAL_ORGS: "true"
  secrets: []

Now run the following, which will update OpenUnison to better organize badges in the portal for different clusters:

helm upgrade orchestra-login-portal tremolo/orchestra-login-portal -n openunison -f /tmp/openunison-values.yaml

Next, login to your cluster by going to https://k8sou.apps.X-X-X-X.nip.io/, where "X-X-X-X" is the IP of your VM. For instance my VM is 192.168.2.40 so the URL is https://k8sou.apps.192-168-2-40.nip.io/.

Log in with the username mmosley and the password start123. If you deployed OpenUnison with your own identity system, log in as a user with cluster administrator privileges. Next, click on Local Deployment:

OpenUnison Portal

Then, click on the "badge" with the label Kubernetes Tokens:

OpenUnison Token Badge

Finally, click the "Copy" button next to kubectl Command (or kubectl Windows Command if you're using Windows) to get your kubectl configuration, and paste it into a terminal on your local workstation (NOT the VM):

Get kubectl Configuration

With your kubectl configuration in hand, you now have access to your cluster!

k get nodes  
NAME                      STATUS   ROLES                  AGE   VERSION
cluster01-control-plane   Ready    control-plane,master   26m   v1.21.1
cluster01-worker          Ready    <none>                 26m   v1.21.1

NOTE: If you want to bypass the process of first logging into the portal and then getting a token, you can trust your CA certificate (or use a publicly trusted certificate like Let's Encrypt) and use the oulogin kubectl plugin to authenticate without first predefining a kubectl configuration file.
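
For example, against the control plane OpenUnison from this walkthrough it would look like the following; substitute your own host name, and this assumes the oulogin plugin is already installed:

kubectl oulogin --host=k8sou.apps.192-168-2-40.nip.io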

Deploying a Virtual Cluster

With our cluster and OpenUnison deployed, the next step is to deploy a virtual cluster. We could download the vcluster command, but I'd rather use the VCluster integration with the Kubernetes Cluster API. The Cluster API is a set of custom controllers that lets you deploy Kubernetes clusters by creating instances of custom resource definitions that define your cluster's "hardware", "nodes", etc. In this case we'll use it to deploy a VCluster without needing the vcluster utility. These instructions are based on https://github.com/loft-sh/cluster-api-provider-vcluster. The first step is to deploy the Cluster API. Download the clusterctl utility using your favorite method (I use Homebrew), then create ~/.cluster-api/clusterctl.yaml with the following:

providers:
  - name: vcluster
    url: https://github.com/loft-sh/cluster-api-provider-vcluster/releases/latest/infrastructure-components.yaml
    type: InfrastructureProvider

Next, initialize the Cluster API:

clusterctl init --infrastructure vcluster
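
Once the init completes, you can verify that the controller pods are running; the grep is intentionally loose because the exact namespace names depend on the provider versions clusterctl installs:

kubectl get pods -A | grep -Ei 'capi|vcluster'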

Once the clusterctl command is done running, we can deploy our VCluster. I deviated from Loft's instructions here so I could add the k3s audit log. You can deploy your cluster pretty easily with a gist I created:

kubectl create -f https://gist.githubusercontent.com/mlbiam/c54e748f85f6c609e7105902bf50bace/raw/f3dfbb37942771fb75f06ed27ad0bf268ba40fbd/vcluster-creation.yaml

This gist sets up a namespace, creates a PVC for the vcluster's audit logs, and creates a one-node vcluster using the Cluster API. After a minute or two you'll see your cluster is running:

$ kubectl get pods -n vcluster-blog
NAME                                                     READY   STATUS    RESTARTS   AGE
coredns-669fb9997d-8wnws-x-kube-system-x-vcluster-blog   1/1     Running   0          81s
vcluster-blog-0                                          2/2     Running   0          109s
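
You can also check on the Cluster API object itself; the fully qualified resource name is used here to avoid ambiguity, and the namespace matches the gist's defaults:

kubectl get clusters.cluster.x-k8s.io -n vcluster-blog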

Now that we have both our virtual cluster and control plane set up, the last step is to integrate our VCluster into our control plane OpenUnison.

Onboarding OpenUnison

So far, we've created a control plane cluster with OpenUnison integrated into our enterprise authentication. We also deployed the Cluster API and a VCluster using the Cluster API. The last step is to deploy OpenUnison into our virtual cluster and integrate it into our control plane OpenUnison. This sounds complicated, but thanks to the magic of APIs it's super easy. A single helm chart will do all the work for us. We first need to create a values.yaml file that will tell our chart what host names to use:

vcluster:
  label: vcluster-blog
  name: vcluster-blog
  namespace: vcluster-blog
  api_server_host: k8sapi.apps.vcluster-blog.192-168-2-40.nip.io 
  dashboard_host: k8sdb.apps.vcluster-blog.192-168-2-40.nip.io
  openunison_host: k8sou.apps.vcluster-blog.192-168-2-40.nip.io
  az_groups:
  - cn=k8s-cluster-admins,ou=Groups,DC=domain,DC=com

The first three options are for the portal and for the kubectl configuration:

  • label - The label for the virtual cluster in the portal
  • name - The name of the VCluster in your kubectl configuration file
  • namespace - The namespace where VCluster is running

The next three options are what host names to use:

  • openunison_host - The host name for the vcluster's local OpenUnison
  • dashboard_host - The host name for the vcluster's local dashboard
  • api_server_host - The host name for the vcluster's API proxy

Notice that we created host names with an extra domain component so they don't overlap with the host names from our control plane cluster, even though they're all running on the same IP address. These host names must be unique and point to the same load balancer used by the control plane cluster. The OpenUnison documentation has a detailed explanation of how these host names interact with your cluster.

Finally, we list the groups from our identity provider that we want to have access to our virtual cluster. Members of these groups will get the cluster-admin ClusterRole, giving them administrator access to the virtual cluster without impacting the control plane cluster. This example uses the administrator group from the sample "Active Directory" we deployed, but you can list any group provided by your identity provider. For example, you could list any of your Okta groups or GitHub teams if you're using those providers for authentication in the control plane cluster.
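
The effect inside the virtual cluster is roughly equivalent to a group-based binding like the one below. This is only an illustration of what the chart automates for you; the actual binding name and manifest are managed by the onboarding job:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vcluster-admins            # illustrative name, not the chart's
subjects:
- kind: Group
  name: cn=k8s-cluster-admins,ou=Groups,DC=domain,DC=com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io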

With your values.yaml in hand, the last step is to run the integration against your control plane cluster:

helm repo add tremolo https://nexus.tremolo.io/repository/helm/
helm repo update
helm install vcluster-blog tremolo/openunison-vcluster-onboard -n openunison -f /path/to/values.yaml

We can check the progress of the deployment by looking for the deployment job:

kubectl logs -f -l job-name=onboard-vcluster-vcluster-blog -n openunison
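
If you'd rather just block until the job finishes, kubectl wait works too; the job name matches the job-name label used above, and the timeout is arbitrary:

kubectl wait --for=condition=complete job/onboard-vcluster-vcluster-blog -n openunison --timeout=600s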

It will take a few minutes for everything to run, but once it's done you can refresh your OpenUnison portal and see a new option:

OpenUnison with new cluster

In addition to the Local Deployment, you now have vcluster-blog with badges for the VCluster's dashboard and tokens. Clicking on the dashboard badge will bring you directly to your VCluster's dashboard! You can also get a kubectl configuration for your VCluster. If you want to bypass the portal and get your tokens directly, you can run kubectl oulogin --host=k8sou.apps.vcluster-blog.192-168-2-40.nip.io, which will launch a browser to log you in with your enterprise identity provider. Finally, we'll create a new namespace with our new token and see how it appears in the audit logs.
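
A minimal way to generate that audit entry looks like the following; the namespace name audit-test is just an example:

kubectl oulogin --host=k8sou.apps.vcluster-blog.192-168-2-40.nip.io
kubectl create ns audit-test

In the virtual cluster's audit log, the request shows up as: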

.
.
.
"impersonatedUser": {
    "username": "mmosley",
    "groups": [
      "cn=k8s-cluster-admins,ou=Groups,DC=domain,DC=com",
      "cn=group2,ou=Groups,DC=domain,DC=com",
      "system:authenticated"
    ]
  },
.
.
.

Our real user was used to access the cluster and create the namespace, not a system user. It shows up as impersonatedUser because we're accessing the virtual cluster through an impersonating proxy. If we try to log in to OpenUnison with a user that isn't a member of our administrator's group, such as jjackson with the password start123, we'll notice that the vcluster-blog section isn't available. If we try to log in directly using the oulogin plugin, it will fail as unauthorized.

We've come full circle: deploying a cluster, deploying a VCluster, and integrating both with our enterprise's authentication and authorization system! This post used Active Directory, but you can use this same process with any identity provider that OpenUnison supports, such as Okta, Azure AD, GitHub, and ADFS.

What's Next?

This implementation was pretty automated. Once Kubernetes and OpenUnison were deployed, we never used the vcluster, ouctl, or clusterctl commands (outside of the initial bootstrap of the Cluster API) to deploy a VCluster and integrate it into our enterprise authentication system. We just used the helm command! Honestly, that's still too much. If you're rolling out virtual clusters manually or based on tickets, you're still not getting the most out of them. Next, we'll use OpenUnison's Namespace as a Service to create a VCluster-as-a-Service portal, so even the helm chart isn't run manually.

Special Thanks

I'd like to say thanks to the team at Loft for creating VCluster and donating it to the community. I specifically want to send a shout-out to Oleg Matskiv for helping me get VCluster running with the Cluster API!
