Getting started with Kubernetes can be daunting. Beyond getting Kubernetes up and running, adding security and authentication brings its own challenges. How do your users log in? What access do they have? How will you disable access? In this blog post we’re going to walk through integrating Canonical’s Distribution of Kubernetes (CDK) with the OpenUnison Login Portal for Kubernetes to provide authentication for the dashboard and kubectl using our enterprise Active Directory.
First, why the CDK? With multiple installers and clouds available, why use Canonical’s? The CDK offers a few value adds:
- If you’re familiar with juju, Canonical’s automation system, you’ll be right at home with the CDK’s deployment process.
- The CDK doesn’t just deploy Kubernetes; it can also provision the hosts it runs on.
- In addition to the hosts and Kubernetes, the CDK will install the NGINX ingress controller and can deploy Helm, Prometheus and other popular add-ons.
One of the great things about the CDK is that it works well on spare hardware, getting a full cluster up and running quickly. For this article, I got a full k8s environment running on a Mac Mini with 16GB of RAM. When the CDK deploys locally, it builds “VMs” using LXD, a container technology that lets individual nodes of the Kubernetes cluster run in containers without needing full virtual machine hosts. From here we’ll assume you’ve deployed the CDK and are ready to get started deploying OpenUnison.
To integrate OpenUnison with your Active Directory you’ll need a few things:
- The IP address or DNS host name of your Active Directory domain controller
- The certificate for your domain
- A read-only service account
When choosing how to connect to Active Directory there are multiple options:
- Connect directly to a domain controller via IP or host name – This is often easy but doesn’t provide high availability.
- Connect to a virtual IP address (a.k.a. VIP) that load balances your Active Directory domain controllers – This is how most enterprises want applications to connect to Active Directory, providing resiliency and high availability.
- Use DNS SRV records – If your DNS knows how to find domain controllers, this option lets OpenUnison discover domain controllers as needed, without the need for details of a specific domain controller.
The certificate you use should be the AD domain’s root certificate. If you use a particular server’s certificate then you could run into issues when working in a highly available environment. Finally, the read-only service account needs to be in the form of a distinguished name. It will usually look something like cn=k8s_service_account,cn=users,dc=domain,dc=com.
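Before moving on, it’s worth verifying all three prerequisites from a workstation. This is a sketch assuming the openssl and ldapsearch (OpenLDAP client) tools are installed; the host, port, and DNs are the example values used later in this post, so substitute your own:

```shell
# Inspect the certificate chain the domain controller presents over LDAPS.
# The root certificate of this chain is what belongs in your trust store.
echo | openssl s_client -connect 192.168.2.75:636 -showcerts 2>/dev/null \
  | openssl x509 -noout -subject -issuer -enddate

# Verify the read-only service account can bind and search.
# -W prompts for the account's password.
ldapsearch -H ldaps://192.168.2.75:636 \
  -D "cn=k8s_service_account,cn=users,dc=domain,dc=com" -W \
  -b "cn=users,dc=ent2k12,dc=domain,dc=com" \
  "(objectClass=user)" dn
```

If either command fails here, it will also fail inside OpenUnison, so this is the cheapest place to debug connectivity, certificates, and credentials.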
Now that we have what we need from Active Directory, we need to plan how users will access OpenUnison. OpenUnison is an application that serves multiple roles:
- OpenID Connect identity provider for Kubernetes
- Token access application
- Authenticating reverse proxy for the Kubernetes Dashboard
To this end, we need to define two host names for accessing OpenUnison and the dashboard. The only requirement on these names is that they share the same DNS suffix. The README.md file in the GitHub repo for the login manager explains all of the various configuration options so we’ll skip that for now. For this deployment, here are the options we’ll use:
AD_BASE_DN=cn=users,dc=ent2k12,dc=domain,dc=com
AD_BIND_DN=cn=Administrator,cn=users,dc=ent2k12,dc=domain,dc=com
AD_BIND_PASSWORD=$ecret123
AD_CON_TYPE=ldaps
AD_HOST=192.168.2.75
AD_PORT=636
K8S_DASHBOARD_HOST=k8sdb-cdk.tremolo.lan
K8S_URL=https://192.168.2.111
OU_CERT_C=US
OU_CERT_L=Alexandria
OU_CERT_O=Tremolo Security
OU_CERT_OU=k8s
OU_CERT_ST=Virginia
OU_COOKIE_DOMAIN=tremolo.lan
OU_HOST=k8sou-cdk.tremolo.lan
SRV_DNS=false
USE_K8S_CM=false
unisonKeystorePassword=MyPassword
When deploying with the CDK there are a couple of variances from the kubeadm deployment described in the GitHub repo:
- K8S_URL will point to the api server proxy the CDK deploys, not directly to the API server(s) themselves
- USE_K8S_CM must be false as the CDK does not use CertManager internally to sign certificates, it instead uses EasyRSA for development and Vault for production
Now that the prerequisites have been collected:
$ mkdir -p deploy-openunison-cdk/certs
$ mkdir -p deploy-openunison-cdk/props
Copy your Active Directory root certificate to deploy-openunison-cdk/certs/trusted-adldaps.pem and create a file called deploy-openunison-cdk/props/input.props with your configuration options. Finally, let’s run the deployment:
curl https://raw.githubusercontent.com/TremoloSecurity/kubernetes-artifact-deployment/master/src/main/bash/deploy_openunison.sh | bash -s deploy-openunison-cdk/certs deploy-openunison-cdk/props https://raw.githubusercontent.com/OpenUnison/openunison-k8s-login-activedirectory/master/src/main/yaml/artifact-deployment.yaml
In a few minutes OpenUnison will be deployed and ready to access. The CDK deploys the NGINX ingress controller on port 443 on the worker nodes, so there’s no need for a load balancer to get started (though you’ll want one for production).
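If you want to watch the deployment as it happens, a couple of generic kubectl commands from a second terminal will show progress. The namespace names below come from this deployment (the job runs in openunison-deploy, and OpenUnison itself lands in openunison):

```shell
# Watch the deployment job run to completion.
kubectl get pods -n openunison-deploy -w

# Then confirm OpenUnison itself is up and its ingress rules exist.
kubectl get pods -n openunison
kubectl get ingress -n openunison
```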
$ curl https://raw.githubusercontent.com/TremoloSecurity/kubernetes-artifact-deployment/master/src/main/bash/deploy_openunison.sh | bash -s ./certs ./props https://raw.githubusercontent.com/OpenUnison/openunison-k8s-login-activedirectory/master/src/main/yaml/artifact-deployment.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   267  100   267    0     0   1070      0 --:--:-- --:--:-- --:--:--  1072
namespace/openunison-deploy created
configmap/extracerts created
secret/input created
clusterrolebinding.rbac.authorization.k8s.io/artifact-deployment created
job.batch/artifact-deployment created
NAME                        READY   STATUS              RESTARTS   AGE
artifact-deployment-cpxtt   0/1     Pending             0          0s
artifact-deployment-cpxtt   0/1     ContainerCreating   0          0s
artifact-deployment-cpxtt   1/1     Running             0          46s
artifact-deployment-cpxtt   0/1     Completed           0          61s
This step created the openunison namespace, created the proper certificates and deployed OpenUnison. To access OpenUnison make sure that your hosts (OU_HOST and K8S_DASHBOARD_HOST) have DNS entries that point to your worker nodes and then enter your OU_HOST into your browser. For the above config I put https://k8sou-cdk.tremolo.lan/ into my browser and was prompted to enter my Active Directory username and password. Once we’re authenticated we’ll see the login portal.
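If you don’t want to touch your real DNS while testing, host-file entries pointing both names at any worker node are enough. A sketch using the host names from the configuration above and a hypothetical worker node IP:

```
# /etc/hosts on your workstation (192.168.2.112 is a placeholder worker IP)
192.168.2.112   k8sou-cdk.tremolo.lan
192.168.2.112   k8sdb-cdk.tremolo.lan
```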
Before clicking on the dashboard link, delete the dashboard pod so it restarts. The OpenUnison deployment replaces the dashboard’s empty certificate, so once the pod picks it up, access to the dashboard is TLS enabled end-to-end.
$ kubectl get pods -n kube-system
NAME                                              READY   STATUS    RESTARTS   AGE
heapster-v1.6.0-beta.1-6db4b87d-c4pgs             4/4     Running   0          60m
kube-dns-596fbb8fbd-j8lw4                         3/3     Running   0          72m
kubernetes-dashboard-67d4c89764-cvgk8             1/1     Running   0          71m
metrics-server-v0.3.1-67bb5c8d7-5pxmj             2/2     Running   0          63m
monitoring-influxdb-grafana-v4-65cc9bb8c8-772mk   2/2     Running   0          72m
$ kubectl delete pod kubernetes-dashboard-67d4c89764-cvgk8 -n kube-system
pod "kubernetes-dashboard-67d4c89764-cvgk8" deleted
$ kubectl get pods -n kube-system -w
NAME                                              READY   STATUS              RESTARTS   AGE
heapster-v1.6.0-beta.1-6db4b87d-c4pgs             4/4     Running             0          61m
kube-dns-596fbb8fbd-j8lw4                         3/3     Running             0          72m
kubernetes-dashboard-67d4c89764-xs6jn             0/1     ContainerCreating   0          14s
metrics-server-v0.3.1-67bb5c8d7-5pxmj             2/2     Running             0          64m
monitoring-influxdb-grafana-v4-65cc9bb8c8-772mk   2/2     Running             0          72m
kubernetes-dashboard-67d4c89764-xs6jn             1/1     Running             0          24s
Once the dashboard pod has restarted, click on the Kubernetes Dashboard link to access the dashboard:
The “Unauthorized” error at the top of the screen appears because we haven’t yet configured the api server to trust OpenUnison for authentication. First, get the configuration artifacts from the openunison namespace:
$ kubectl describe configmap api-server-config -n openunison
Name:         api-server-config
Namespace:    openunison
Labels:
Annotations:

Data
====
ou-ca.pem-base64-encoded:
----
-----BEGIN CERTIFICATE-----
MIID7DCCAtSgAwIBAgIGAWb/jqV/MA0GCSqGSIb3DQEBCwUAMH4xCzAJBgNVBAYT
AlVTMREwDwYDVQQIEwhWaXJnaW5pYTETMBEGA1UEBxMKQWxleGFuZHJpYTEZMBcG
A1UEChMQVHJlbW9sbyBTZWN1cml0eTEMMAoGA1UECxMDazhzMR4wHAYDVQQDExVr
OHNvdS1jZGsudHJlbW9sby5sYW4wHhcNMTgxMTEwMjEzNzEwWhcNMjgxMTA3MjEz
NzEwWjB+MQswCQYDVQQGEwJVUzERMA8GA1UECBMIVmlyZ2luaWExEzARBgNVBAcT
CkFsZXhhbmRyaWExGTAXBgNVBAoTEFRyZW1vbG8gU2VjdXJpdHkxDDAKBgNVBAsT
A2s4czEeMBwGA1UEAxMVazhzb3UtY2RrLnRyZW1vbG8ubGFuMIIBIjANBgkqhkiG
9w0BAQEFAAOCAQ8AMIIBCgKCAQEAl+MZDTBrDT2UnvXVXBhpQscXlxcEThDWeZDs
s9+8CWKtKjbbGVJiOv2A86O7iWFQDtmSRuL4PASWPjoq7ZprwoklVqjzUyEHTlLO
38SIXYGpjIYiQtZMZAJjehvp84jY4lnaB0qGk7PMbxtVUv/Im5u0OgCFskdIrmUS
pvPDe4NpF/hi1kwWug0lD/ZygP/mQeuna542FHO/eZvNHTdIIcjesAcYdMGdzf/F
l2Ri2dImTLhSVqcsyswkPcLnUP32G9i6XJlL7krqdht4dshqyQbzCmPo8e6UAECZ
Ubsy/OdW86D0Na/zV2fiMdUtcBMXh3qzZ1hTjkPIJmbSt1Q+7QIDAQABo3AwbjAP
BgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwICBDASBgNVHSUBAf8ECDAGBgRV
HSUAMDcGA1UdEQQwMC6CFWs4c291LWNkay50cmVtb2xvLmxhboIVazhzZGItY2Rr
LnRyZW1vbG8ubGFuMA0GCSqGSIb3DQEBCwUAA4IBAQBB9JgDLCRDqw4rs3695h0t
Xx2uuFqB+eBd+m2tWB9Jua5HUz0yDm4hW7W+S5zGhJJZC45iRwG7okobM1cQfYO3
/0UIwlf5T6md+6IVd0LNX/WGlgHXcusF4P3HmntWhbOmgLV6RIe+dcBBJWtVONL8
SzxhFAEOF0h5GWXf89clSTAGD0EtVl69X1XnCVZ+/WyTcc58Me1pUMI/Xmehm4ky
XzLkLHsymO+aSxaUYlKTeNbZ/6+/WCPJ2vGkBzDebvwxuoQObWajaucBWcKZH/Va
Brt3SWtWPuthfdTXcIEPpRdMOyolqwz52FPDe0yKWkpaTElU2vbZJehfjENCuSnQ
-----END CERTIFICATE-----

oidc-api-server-flags:
----
--oidc-issuer-url=https://k8sou-cdk.tremolo.lan/auth/idp/k8sIdp --oidc-client-id=kubernetes --oidc-username-claim=sub --oidc-groups-claim=groups --oidc-ca-file=/etc/kubernetes/pki/ou-ca.pem
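Rather than copying the certificate out of the describe output by hand, you can pull it straight from the ConfigMap. A sketch using kubectl’s jsonpath output (the dots in the key name must be escaped; despite the key’s name, the value prints as PEM here, so no decoding step is shown — if your version stores it encoded, pipe through base64 -d first):

```shell
# Save the OpenUnison CA certificate to a local file.
kubectl get configmap api-server-config -n openunison \
  -o jsonpath='{.data.ou-ca\.pem-base64-encoded}' > ou-ca.pem

# Sanity-check that the file parses as a certificate.
openssl x509 -in ou-ca.pem -noout -subject

# Print the flags the api servers will need.
kubectl get configmap api-server-config -n openunison \
  -o jsonpath='{.data.oidc-api-server-flags}'
```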
The certificate is the same certificate as the OpenUnison user interface and must be copied to each master. In my case, I created the file /root/cdk/ou-ca.pem on my masters with the certificate.
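With the CDK you can let juju do the copying instead of logging into each master by hand. A sketch, assuming your units are named kubernetes-master/0 and kubernetes-master/1 (check juju status for the real names); juju scp lands files in the ubuntu user’s home directory, so a follow-up move into /root/cdk is needed:

```shell
for unit in kubernetes-master/0 kubernetes-master/1; do
  # Copy to the unit's home directory, then move into place as root.
  juju scp ou-ca.pem $unit:ou-ca.pem
  juju ssh $unit "sudo mkdir -p /root/cdk && sudo mv ou-ca.pem /root/cdk/ou-ca.pem"
done
```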
The last step to enable authentication is updating the api server’s configuration. Since the CDK is built on juju charms, we do this by setting the configuration of the kubernetes-master charm. The charm takes all of the oidc-api-server-flags from our ConfigMap, just without the leading "--".
$ juju config kubernetes-master api-extra-args="oidc-issuer-url=https://k8sou-cdk.tremolo.lan/auth/idp/k8sIdp oidc-client-id=kubernetes oidc-username-claim=sub oidc-groups-claim=groups oidc-ca-file=/root/cdk/ou-ca.pem"
Give juju a few minutes to update the api server configurations and restart. You can check the progress by running juju status. Once the api servers are back online, refresh your dashboard screen and instead of the “Unauthorized” error, you’ll now see the Kubernetes Dashboard!
You may notice that you can see a great deal of information and have unlimited capabilities to create namespaces, deployments, etc. Out of the box the CDK has no built-in authorization enabled, relying instead on external security systems. To limit who has access to our cluster, enable the Kubernetes RBAC system:
$ juju config kubernetes-master authorization-mode=Node,RBAC
Again, wait a few minutes for the api server to restart and refresh your dashboard:
Now when accessing the dashboard, our user, Matt Mosley, has no access. To get our administrative access back, we’ll need to create an RBAC binding. We want to use groups in Active Directory to authorize our user. Create the ClusterRoleBinding:
$ kubectl create -f - <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: activedirectory-cluster-admins
subjects:
- kind: Group
  name: CN=k8s_login_ckuster_admins,CN=Users,DC=ent2k12,DC=domain,DC=com
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
clusterrolebinding.rbac.authorization.k8s.io/activedirectory-cluster-admins created
Now when we refresh the dashboard we once again have access because our user is a member of the CN=k8s_login_ckuster_admins,CN=Users,DC=ent2k12,DC=domain,DC=com group in Active Directory.
In addition to using the dashboard, you can access Kubernetes via the kubectl command from this portal. Click on the Kubernetes Tokens badge to view the certificates for OpenUnison and Kubernetes, along with the command for configuring kubectl. Before using kubectl for the first time, a user will need to import the two certificates on this screen into their trust store. Once the certificates are trusted, copy the kubectl command from this screen; you can then use kubectl until you log out of your web session or your session expires.
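Once kubectl is configured from the portal, a couple of quick checks confirm the OIDC token is accepted and your RBAC binding applies. These are standard kubectl commands; the expected answers assume your user is in the cluster-admin-bound group from above:

```shell
# Succeeds only if the api server accepts the OIDC token.
kubectl get nodes

# Prints "yes" for members of the group bound to cluster-admin.
kubectl auth can-i create namespaces
```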
From here, you can build out your RBAC policies to align with your Active Directory groups or build bindings directly to users. Your cluster is now accessible via your enterprise Active Directory.
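As an example of a more scoped policy, a namespaced RoleBinding can grant an Active Directory group edit rights in a single namespace. The group name and namespace below are hypothetical; the edit ClusterRole is one of Kubernetes’ built-in roles:

```
$ kubectl create -f - <<EOF
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: activedirectory-dev-editors
  namespace: dev
subjects:
- kind: Group
  name: CN=k8s_dev_editors,CN=Users,DC=ent2k12,DC=domain,DC=com
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
EOF
```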