
Multi-tenant Amazon EKS - The Easy Way - Part I: Authentication

May 5, 2020


Marc Boorshtein

Amazon's EKS is a great way to deploy Kubernetes. Its deployment is automated, it's integrated with Amazon services for monitoring, and it can be quickly upgraded, so you don't need to worry about doing all that work yourself. Once you've decided to use EKS, one of the first decisions you'll need to make is whether your tenancy layer will be the cluster itself or individual namespaces. There are several advantages to a multi-tenant EKS cluster:

  • Reduced sprawl - Depending on how many tenants you'll have you can end up having a large number of clusters to configure, manage and maintain. Even with automation this sprawl can quickly become a maintenance nightmare.
  • More Dynamic On-boarding - If your customers get their own clusters, each one takes time to stand up, even when automated. Creating a new namespace is as close to real-time as it gets.
  • Better Cost Management - Every control plane for EKS has a cost. Every load balancer has a cost. Every compute node has a cost. Running multiple tenants on a cluster lets you re-use these resources and get better density on your compute nodes.

In this series of blog posts, we're going to cover many of the topics you need to consider when creating a multi-tenant EKS cluster. The first topic in this blog post is how users access your cluster. A user can be a developer, an admin, or even an automated process like a CI/CD pipeline.

User Access to EKS - The Hard Way

The steps to grant access to EKS using IAM:

  1. Create an IAM role with a policy allowing the eks:DescribeCluster action, letting your user assume this role
  2. Add the role to the aws-auth ConfigMap in the kube-system namespace, mapping the user to Kubernetes user and groups
  3. Create RBAC bindings for these users/groups
  4. When the user wants to use EKS, they generate a configuration using the role created in #1
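The steps above can be sketched with an example aws-auth entry; the account ID, role name, username, and group name here are all hypothetical placeholders:

```yaml
# Step 2 sketch: map the IAM role from step 1 to a Kubernetes user and
# group in the aws-auth ConfigMap (ARN and names are hypothetical).
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-dev-access
      username: dev-user
      groups:
      - eks-developers
```

For step 4, the user would then generate their kubeconfig with something like `aws eks update-kubeconfig --name my-cluster --role-arn arn:aws:iam::111122223333:role/eks-dev-access`, and RBAC bindings from step 3 would reference the `eks-developers` group.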

Why is this the "hard" way? First, you need to do this for every user. Even when automated, the process can be error prone. Second, it doesn't provide a secure mechanism for accessing the dashboard, requiring another system to be put in place to facilitate secure dashboard access. When it's time to off-board a user, you need to unwind this process to properly remove their access.

This method also requires that the cloud team be responsible for who has access to EKS. Is that a responsibility the cloud team even wants? It will depend on the enterprise. If your cloud team doesn't want the added responsibility of managing access to EKS clusters, relying on IAM to authorize access to EKS could slow down your EKS implementation strategy.

User Access to EKS - The Easy Way

Enter the Orchestra login portal for Active Directory. The login portal uses impersonation to tie your Active Directory credentials to your EKS cluster's RBAC configuration. This gives you the best of both worlds: use your centralized Active Directory credentials for EKS access and get a managed Kubernetes service. As an added bonus, you'll be able to use the Dashboard securely and get short-lived tokens that are kept up to date by kubectl. Here's what your architecture with EKS and the Orchestra login portal will look like:

When an admin or developer interacts with EKS, they'll do so using OpenUnison's reverse proxy. The proxy will authenticate the user against Active Directory and expose the user's groups to Kubernetes, making it easy to create RBAC bindings from Active Directory groups.

Admins and developers will access the dashboard through OpenUnison's built-in proxy, creating a secure way to provide users with the convenience of the dashboard without having to install a local tool.

To deploy Orchestra in this configuration, you'll first need a read-only service account from your Active Directory domain, as well as the certificate for your Active Directory's LDAPS listeners.
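As a minimal sketch of collecting that certificate, assuming a hypothetical domain controller at dc1.domain.com, you can pull the LDAPS certificate with openssl and base64-encode it for later use in the Orchestra configuration:

```shell
# Fetch the certificate presented by the AD LDAPS listener
# (dc1.domain.com is a placeholder for your domain controller)
# and save it as PEM.
openssl s_client -connect dc1.domain.com:636 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > ldaps.pem

# Base64-encode the PEM on a single line for use in values.yaml.
base64 -w 0 ldaps.pem
```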

Deploying Orchestra

Assuming you have an active EKS cluster, the first steps are to deploy your Ingress controller and the Dashboard. Once the Ingress controller is deployed, next up is the Dashboard:

$ kubectl apply -f

We don't create an Ingress object or special RBAC bindings for the Dashboard. When we deploy Orchestra and use impersonation, the user's identity will be injected into each request. Next, make sure you have Helm installed and deploy the OpenUnison operator:

$ helm repo add tremolo
$ helm repo update
$ kubectl create ns openunison
$ helm install openunison tremolo/openunison-operator --namespace openunison

Once the OpenUnison operator pod is running, create a secret that holds your Active Directory service account password:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: orchestra-secrets-source
  namespace: openunison
data:
  K8S_DB_SECRET: aW0gYSBzZWNyZXQ=
  unisonKeystorePassword: aW0gYSBzZWNyZXQ=
  AD_BIND_PASSWORD: cGFzc3dvcmQ=
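The base64 values themselves are easy to produce from a shell; for example:

```shell
# Generate 32 bytes of random data and base64-encode it on one line,
# suitable for keys like unisonKeystorePassword.
head -c 32 /dev/urandom | base64 -w 0
echo ""

# Encoding a known string shows where the example value comes from:
echo -n 'im a secret' | base64    # prints aW0gYSBzZWNyZXQ=
```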

The K8S_DB_SECRET and unisonKeystorePassword values can be any base64-encoded random data. With our secret created, the final step is to create our values.yaml file and deploy the orchestra Helm chart. Here's an example values file:

network:
  openunison_host: ""
  dashboard_host: ""
  api_server_host: ""
  session_inactivity_timeout_seconds: 900
  k8s_url: ""

cert_template:
  ou: "Kubernetes"
  o: "MyOrg"
  l: "My Cluster"
  st: "State of Cluster"
  c: "MyCountry"

image: ""
myvd_config_path: "WEB-INF/myvd.conf"
k8s_cluster_name: kubernetes
enable_impersonation: true

dashboard:
  namespace: "kubernetes-dashboard"
  cert_name: "kubernetes-dashboard-certs"
  label: "k8s-app=kubernetes-dashboard"
  service_name: kubernetes-dashboard

certs:
  use_k8s_cm: false

trusted_certs:
  - name: ldaps

monitoring:
  prometheus_service_account: system:serviceaccount:monitoring:prometheus-k8s

active_directory:
  base: cn=users,dc=domain,dc=com
  host: ""
  port: "636"
  bind_dn: "cn=ldapsa,cn=users,dc=domain,dc=com"
  con_type: ldaps
  srv_dns: "false"
The key settings in this values file are network.api_server_host and enable_impersonation. Together they tell OpenUnison to use impersonation instead of OpenID Connect when integrating with the cluster, and to provide access to the API server on a particular host. When we interact with Kubernetes, we'll go through that impersonation host instead of directly to our API server. The kubectl utility won't know the difference; it will think we're working directly with an API server using OpenID Connect.
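To make this concrete, here's a hypothetical sketch of the cluster entry the portal ends up generating in your kubeconfig; the host name and CA data are placeholders, not real values:

```yaml
# The server is network.api_server_host (OpenUnison's impersonating
# proxy), not the EKS API endpoint; the host and CA data below are
# hypothetical placeholders.
apiVersion: v1
kind: Config
clusters:
- name: openunison-eks
  cluster:
    server: https://k8s-api.domain.com
    certificate-authority-data: LS0tLS1CRUdJTi4uLg==
```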

Finally, deploy the chart:

$ helm install orchestra tremolo/openunison-k8s-login-activedirectory --namespace openunison -f /path/to/values.yaml

Once the OpenUnison pod is running, log in to Orchestra by going to the host set in your values.yaml for network.openunison_host. You'll be prompted for a username and password:

Enter a username and password from your domain, and you'll next be presented with a portal for accessing your cluster:

Click on the Kubernetes Dashboard link. You won't see much. Take a look at the bell in the upper right-hand corner. It's saying your user isn't authorized to access any resources. That's because even though Kubernetes recognizes your user, it hasn't been authorized to access any APIs:

To authorize our user, we can create RBAC policies based on either the user's unique ID or their groups. It's better to authorize access based on our Active Directory groups for multiple reasons:

  • Faster onboarding/offboarding - Authorizing access involves simply adding or removing users from their Active Directory groups.
  • Easier Auditing - You can't ask Kubernetes to "give me all RoleBindings for the user mmosley", you need to enumerate each policy. Using groups in Active Directory gives you an easy way to query all groups a user is a member of.
  • Simpler RBAC Objects - Adding each user to an RBAC binding can lead to really long lists of authorized users that are being updated as usage changes. Using a group keeps your objects smaller and simpler.

Going back to the OpenUnison home page, click on your username in the upper left and you'll see a screen similar to:

Our user is a member of the group cn=k8s-cluster-admins,cn=Users,dc=domain,dc=com. Kubernetes has a built-in ClusterRole called cluster-admin. Create a ClusterRoleBinding assigning all users in the group cn=k8s-cluster-admins,cn=Users,dc=domain,dc=com the cluster-admin role in our cluster:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: activedirectory-cluster-admins
subjects:
- kind: Group
  name: cn=k8s-cluster-admins,cn=Users,dc=domain,dc=com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Once the ClusterRoleBinding is created, go back to the dashboard and hit the refresh button and voila! You now have cluster-admin access via the dashboard! Finally, click on the "Kubernetes Tokens" badge. It will give you commands to set your kubectl configuration from the command line for both *nix/macOS and Windows PowerShell:

Clicking on one of the double-document icons will copy the appropriate command to your clipboard. Simply paste that command into your terminal and use kubectl:

And finally, log out of OpenUnison; within a minute or two, you'll see that your session is no longer valid and kubectl stops working:

How Long Will This Take?

This eleven-minute video shows the entire deployment process from start to finish, including dashboard integration, kubectl, and Let's Encrypt for certificates.

What's Next?

In this blog post, we explored how to start building a multi-tenant EKS cluster using Active Directory for authentication and authorization. In the next post, we'll talk about automating namespace creation using the Orchestra self-service portal. In the meantime, if you're interested in learning about other supported login options for EKS, such as SAML2, OpenID Connect, and GitHub, take a look at our Kubernetes Solutions!
