
vcluster as a Service

by Marc Boorshtein

This post walks through using OpenUnison's Namespace as a Service capability to create a self-service portal where users can request vClusters that are automatically deployed and integrated with enterprise authentication. Loft's vcluster project is an amazing technology for providing a better multi-tenant experience for Kubernetes users. It gives each tenant their own API server that they can manage, and in which they can create Custom Resource Definitions, without needing administrative privileges on the control plane cluster.

Centralizing vcluster creation using OpenUnison provides several benefits:

  • No manual API interaction - OpenUnison will do all the work of interacting with the Kubernetes and Cluster API APIs, limiting the possibility of user error
  • Better User Experience - Users won't need any special tools or CLI commands, and they'll be able to track the request process too.
  • Easier Compliance - Users will access vClusters using their enterprise's authentication system, and all actions are logged as the user in the cluster's audit logs. Also, since there's no need to distribute the vcluster tool to users, there's one less binary to get cleared for distribution. Finally, OpenUnison's audit and reporting features can be used to satisfy your compliance framework's requirement to enforce an access policy.
  • Better Security - Users will be able to integrate applications from their vCluster into enterprise authentication without needing to know anything about the enterprise identity provider. They can integrate directly with their vCluster's local OpenUnison.

Automation is key to creating a vcluster as a Service. In our last post, we explored how to deploy a vcluster with enterprise authentication using OpenUnison. This process was fairly straightforward, as these things go, but it was very manual. We had to create the vcluster using the Kubernetes Cluster API, then run a Job that integrated the vcluster with the control plane cluster for SSO. What we were really doing was manually calling APIs. A better approach is to automate this process and add self-service. By the end of this post, your users will be able to request a vcluster whose access they control, without having to involve your team. No one will need to run a kubectl command or manually update a configuration to make this work, and users won't need to download any special tools to interact with their vcluster.

vcluster as a Service Architecture

Before diving into building our vCluster as a Service, let's take a look at the final architecture and each component's responsibilities.

vcluster as a Service Architecture

Here are the major components of the vCluster as a Service architecture:

  1. Kubernetes Control Plane Cluster - This is the main cluster that will host our vClusters, Ingress controller, and control plane OpenUnison. It also hosts the Cluster API components we'll use to deploy our vClusters.
  2. Ingress Controller - Your control plane cluster's Ingress controller provides ingress for all of your Pods, including your vCluster Pods. When you create an Ingress object inside a vCluster, vcluster synchronizes it into the control plane cluster, so there's no need for a separate Ingress controller in each individual vCluster (see the example Ingress after this list).
  3. Control Plane OpenUnison - This is the primary OpenUnison users will interact with. It hosts SSO, runs the workflow engine, and provides the self-service interface for users to request vClusters and for admins to approve them.
  4. ActiveDirectory - We're going to deploy a simulated ActiveDirectory into our cluster with users and groups pre-created. This setup will work with any of OpenUnison's authentication mechanisms, but we want to focus on vClusters, not setting up authentication!
  5. SQL Database - OpenUnison needs to store workflow state in a database. This database wasn't needed when doing straight SSO, but with Namespace as a Service it's a requirement. For this post, we'll use a pre-built MariaDB instance.
  6. SMTP Service - When a user requests that a vCluster be created, administrators need to be notified to approve the request. For this post, we'll use the SMTP Blackhole service so we don't generate any errors but don't need to worry about a real email service.
  7. vCluster(s) - Individual vClusters will be deployed, with audit logging to a PersistentVolume, in individual Namespaces. We'll also deploy Namespaces and RoleBindings to manage access to each cluster.
  8. vCluster OpenUnison - Each vCluster will get its own OpenUnison to provide SSO and manage access.
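
To make item 2 concrete, here's a minimal sketch of an Ingress a tenant might create inside their vCluster. The host and Service name are hypothetical; vcluster copies the object into the vCluster's Namespace on the control plane cluster, where the shared NGINX controller picks it up:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: my-app.apps.192-168-2-14.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app   # hypothetical Service running in the vCluster
            port:
              number: 8080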

This architecture provides both flexibility and security. Having OpenUnison automate the deployment of clusters means they're all deployed the same way and with the same assumptions. Having an OpenUnison in each vCluster means you don't need to distribute an additional binary to provide access, and you can integrate SSO into cluster management apps, like Grafana or ArgoCD, without getting the control plane involved. There are many components, but thankfully most of them will be deployed for us. With our architecture designed, the next step is to deploy our control plane.

Deploy and Prepare Kubernetes

The first step is to deploy a Kubernetes cluster. Pretty much any kind of cluster will do. I'm going to use CIVO for this post. The clusters are easy to deploy with NGINX, have storage classes preconfigured, and set up a load balancer for you. Make sure to disable Traefik v2 with NodePort and choose NGINX instead. Once the cluster is deployed, the next step is to deploy Cluster API. We'll use the same process as in the last blog post:

These instructions are from https://github.com/loft-sh/cluster-api-provider-vcluster. First, download the clusterctl utility using your favorite method (I use Homebrew). Next, create ~/.cluster-api/clusterctl.yaml with the following:

providers:
- name: vcluster
  url: https://github.com/loft-sh/cluster-api-provider-vcluster/releases/latest/infrastructure-components.yaml
  type: InfrastructureProvider

Next, initialize the cluster api:

clusterctl init --infrastructure vcluster
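
clusterctl installs the core Cluster API controllers along with the vcluster infrastructure provider. It's worth a quick sanity check that the controllers are running before moving on (the provider's namespace name can vary by version, so the grep casts a wide net):

kubectl get pods -n capi-system
kubectl get pods -A | grep -i vcluster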

Once done, we'll next need to deploy our database and SMTP service:

kubectl create -f https://raw.githubusercontent.com/OpenUnison/kubeconeu/main/src/main/yaml/mariadb_k8s.yaml
kubectl create ns blackhole
kubectl create deployment blackhole --image=tremolosecurity/smtp-blackhole -n blackhole
kubectl expose deployment/blackhole --type=ClusterIP --port 1025 --target-port=1025 -n blackhole

Finally, we'll deploy an ApacheDS to simulate ActiveDirectory:

kubectl create -f https://raw.githubusercontent.com/PacktPublishing/Kubernetes---An-Enterprise-Guide-2E/main/chapter5/apacheds.yaml

With that, we have components 1, 2, 4, 5, and 6 from the above architecture. The next step is to deploy OpenUnison.

Deploying OpenUnison

The OpenUnison deployment is mostly automated. For this blog post, we're going to use nip.io, a DNS service that lets you embed an IP address in a domain name and always resolves to that IP address. We're also going to use our own internal MariaDB, "ActiveDirectory", and SMTP service. The first thing we need to do is create three files that contain the passwords for our services. Assuming you're using our demo infrastructure:

echo -n startt123 > /tmp/db
echo -n start123 > /tmp/ldap
echo -n doesnotmatter > /tmp/smtp

NOTE: We're using echo here because it's a demo. In real life, don't do this. We create the files to keep the passwords from being stored in your history.

Next, you'll need to download the ouctl command. This utility does all the work of deploying OpenUnison and integrating it with the dashboard. Once you have ouctl downloaded, download the OpenUnison Helm values file and update network.openunison_host, network.dashboard_host, and network.api_server_host to unique DNS names that point to your load balancer. The last change to this file is to replace openunison.non_secret_data.VCLUSTER_DOMAIN_ROOT with the DNS suffix used for the control plane.
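
As a rough sketch, assuming your load balancer's IP is 192.168.2.14 (a placeholder for your own), those values might look like this:

network:
  openunison_host: "k8sou.apps.192-168-2-14.nip.io"
  dashboard_host: "k8sdb.apps.192-168-2-14.nip.io"
  api_server_host: "k8sapi.apps.192-168-2-14.nip.io"

openunison:
  non_secret_data:
    VCLUSTER_DOMAIN_ROOT: "apps.192-168-2-14.nip.io"

With that, we're ready to deploy!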

helm repo add tremolo https://nexus.tremolo.io/repository/helm/
helm repo update
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
./ouctl install-auth-portal -t /tmp/smtp -s /tmp/ldap -b /tmp/db /path/to/openunison-vcluster-values.yaml

This will take a few minutes while all the containers start. Once completed, run:

kubectl create -f https://gist.githubusercontent.com/mlbiam/c22f982da9c4164a4ee1aa4c1dd9a664/raw/3acbba8dd2623d065d85abd5c48ca511e759e55e/vcluster-extensions.yaml

We'll walk through the details of what we customized later. First, let's log in! Go to your network.openunison_host and use the username mmosley and the password start123.

Requesting and Creating a vCluster

Once you're logged in, you'll see this screen:

OpenUnison NaaS Portal

If you've used OpenUnison to log in to your clusters before, you'll see additional management options in the tree. You're seeing these because you are the first user to log in. You can check out the basic functions of the NaaS portal in OpenUnison's docs. The important part here is to click on New Kubernetes Namespace, which opens a new tab where we can request a new Namespace with its own vCluster:

Request a new Namespace

The first thing we need to do is choose which cluster we want to create our new vcluster in. OpenUnison NaaS has a powerful multi-cluster management subsystem, but that's for another blog post! Next, we need to give our Namespace a name and choose the tenant type. It's important to note that the "stock" NaaS Create New Namespace screen doesn't include the Tenant Type; we added it through our configuration (we'll cover those details at the end). Once you're ready, hit Submit Registration.

Once that's submitted, close this tab and go back to the main NaaS screen. You can also hit Home at the top. Once back at the home screen, hit refresh. You'll now see a small red 1 next to the menu Open Approvals. In a real world scenario, another user would have just requested a new vcluster, but to minimize the bouncing around in this post, we're using the same user to request and approve. Click on the Open Approvals menu:

OpenUnison NaaS with approvals

A new screen with a list of open approvals will appear. Click Review next to the one open approval:

Review open approvals

This provides a final screen that lists the details of the request and the requestor:

NaaS Approval

The request shows what was requested and what groups the user is currently a member of. Next, provide a justification for approving the request and click Approve Request. You'll be asked to confirm the request. At this point, sit back or grab a cup of coffee. Lots of things are happening:

  1. OpenUnison creates groups in its database
  2. Creates the Namespace and RoleBinding objects for those groups
  3. Creates ConfigMaps for the audit configuration
  4. Creates the Cluster API objects that will launch the vcluster (sketched below)
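
As an illustration of step 4, the objects OpenUnison creates look roughly like the pair below. The names and Kubernetes version are hypothetical, and the exact fields depend on how the workflow is configured:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: myvcluster
  namespace: myvcluster
spec:
  controlPlaneRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: VCluster
    name: myvcluster
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: VCluster
    name: myvcluster
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: VCluster
metadata:
  name: myvcluster
  namespace: myvcluster
spec:
  kubernetesVersion: "1.26.1"   # hypothetical; pick your vCluster's version
  helmRelease:
    values: |
      # vcluster Helm chart overrides (audit logging, storage, etc.)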

Taking a look at the dashboard in our control plane, we can see that the vCluster StatefulSet is being created. Once that's done, OpenUnison launches a Job that integrates the new vCluster with the control plane for SSO via OpenUnison. Once that Job completes, log out of the NaaS portal and log back in.
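
If you'd rather watch progress from the command line, here's a quick sketch, assuming your tenant's Namespace is named myvcluster (substitute your own):

kubectl get pods -n myvcluster -w    # watch the vCluster StatefulSet come up
kubectl get jobs -n myvcluster       # check the SSO integration Job's status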

Using the new vcluster

Once logged back in, you'll see a new entry on our front page:

NaaS with new vCluster

Clicking on the new vCluster brings us to the vcluster's access badges:

vCluster Access Badges

You can click on the dashboard link to go straight to the vcluster's dashboard, or use the tokens link to generate a kubectl configuration for your cluster. You can also use the oulogin kubectl plugin to log in directly from your CLI without first logging into any portal, as shown below.
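
Here's a minimal sketch of that oulogin flow, assuming the krew plugin manager is installed and substituting your own network.openunison_host for the placeholder:

kubectl krew install oulogin
kubectl oulogin --host=k8sou.apps.192-168-2-14.nip.io

This opens a browser for SSO and writes your kubectl configuration, with no portal visit required.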

Now that we have our vcluster set up, how can others access it securely? OpenUnison solves that for you by managing who can access which cluster! Log out as the current user, and log back in as jjackson with the password start123.

NaaS Portal for jjackson

Notice that the new vcluster isn't listed! That's because jjackson doesn't have access. If you just sent her a link to the cluster, she wouldn't be able to get in that way either. Click on the Request Access menu option. On the new screen, expand vcluster Control Plane and choose Administrators. Next to your vCluster, choose Add To Cart.

NaaS Role Request

As soon as you click Add To Cart, a red 1 will appear next to Check Out in the menu. Click on Check Out. Provide a reason and click "Submit Request". At this point, mmosley will receive an email that someone has requested access to their vCluster. Log out, and log back in as mmosley. Just as before, review and approve your open approval. Now, log out again and log back in as jjackson. She now has admin access to our vCluster!

NaaS for jjackson with vCluster Access

Removing access is similar. The vCluster owner can remove the user by requesting access on their behalf and denying the request. With a basic vcluster as a Service built, let's look at other customization points!

Customizing vCluster as a Service

The current vcluster as a Service is very opinionated. It uses Active Directory for authentication and manages all groups inside of OpenUnison. OpenUnison also supports authentication via OpenID Connect, SAML2, and GitHub, all of which can be used with the NaaS platform. We can also customize the NaaS portal to leverage groups from those providers instead of, or in addition to, OpenUnison's built-in groups. For instance, if you're using Okta, Active Directory, or GitHub, the New Namespace screen will let you pick which groups you want to use so you don't have to type them in.

What's Next for vcluster as a Service?

Now that we've built and deployed our vcluster as a Service platform, what's next? The great thing about OpenUnison is that it knows how to talk to systems besides Kubernetes. It can provision repositories in GitHub, create applications in ArgoCD, and deploy additional infrastructure in our clusters. What if, instead of just providing a blank cluster to our users, we could provision a fully functional GitOps cluster, with its own ArgoCD and GitHub repo, where the identities are all tied together? What if this cluster broke? Could we just regenerate it from scratch? This would vastly simplify multi-tenant operations. In our next post, we'll extend our vCluster as a Service to create ephemeral clusters that let us realize this operations dream!

Learn More About OpenUnison and Tremolo Security

To learn more about OpenUnison, take a look at our project docs. If you have a question, you can open an issue on GitHub. We're happy to help! Finally, if you're interested in commercial support, you can reach out to us directly!
