Cloud Native

Using Okta with Kubernetes

April 28, 2020


Marc Boorshtein


  • Use Okta for authentication, and Okta groups for RBAC Authorization
  • Use OpenUnison's OpenID Connect Login Portal for integration
  • Access the dashboard and CLI with the same credentials

Using Okta and Kubernetes

Okta is a popular authentication service used by enterprises and startups alike. It lets you store your users and groups while also providing multiple multi-factor authentication options. It can also integrate with your on-premises Active Directory. Okta supports both SAML2 and OpenID Connect protocols. In this post we're going to walk through integrating Okta with your Kubernetes cluster using OpenUnison's login portal for OpenID Connect.

If Okta Already Supports OpenID Connect, Why Do I Need OpenUnison?

There are four reasons for using OpenUnison with Okta instead of connecting Kubernetes to Okta directly:

  1. Combined Authentication for Dashboard and kubectl - OpenUnison provides a single point of entry for both the dashboard and kubectl. OpenUnison's reverse proxy provides built-in integration for the dashboard with your Okta login, eliminating the need for kubectl proxy. The integrated kubectl configurator will create a kubectl configuration file for you, supporting both PowerShell and Bash/Zsh, without manually installing certificates or needing plugins.
  2. If your cluster is managed (i.e. EKS, AKS, or GKE), you can use OpenUnison's impersonation features to integrate Okta into your cluster.
  3. If you want to use Okta's multi-factor authentication options, many of them require a web browser.
  4. The id_token returned by Okta with the access_token doesn't include groups, requiring a system that will call the user info endpoint (see the sketch below).
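As an illustration of that last point, here's a sketch of a user info call; the domain, token, and response values are placeholders:

-- CODE language-shell --
# the "thin" id_token lacks groups; the userinfo endpoint returns them
$ curl -H "Authorization: Bearer $ACCESS_TOKEN" \
    https://YOUR-OKTA-DOMAIN/oauth2/v1/userinfo
{"sub":"00u...","name":"Test User","email":"test@example.com","groups":["k8s-admins"]}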

There are multiple kubectl plugins that support the password grant, which lets you log in from the CLI directly without a browser. This method has several downsides:

  1. Limited multi-factor authentication
  2. No dashboard integration
  3. Most don't support the user info endpoint, ruling out the use of Okta's groups in your RBAC policies

Setting up Your Okta Identity Provider

The first two things you'll need to get started are:

  1. An Okta account - Seems obvious!
  2. A Kubernetes cluster - Any distribution will do (including a managed cluster)

The first thing we'll need to do is set up our identity provider. Log in to Okta (I recommend the "Classic UI") and click "Add Application":

Next, click "Create New App":

For the platform choose "Web" and for "Sign on method" choose "OpenID Connect":

Give the application a descriptive name and set the callback URL. This is the URL we'll host OpenUnison on. We're going to host OpenUnison in our cluster and use nip.io for DNS to keep things simple. We'll come back to this subject when we deploy OpenUnison. The path on the URL will always be /auth/oidc.
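As an illustration, with a nip.io style host the callback URL would look like this (the IP is made up):

  https://k8sou.apps.192-168-2-144.nip.io/auth/oidc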

Next, click on the "Sign On" tab and, next to "OpenID Connect ID Token", click "Edit". Configure the application to only include groups that start with "k8s-":
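In the classic UI, those settings look roughly like this (exact field labels may vary by Okta version):

  Groups claim type:    Filter
  Groups claim filter:  groups   Starts with   k8s-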

Finally, we'll authorize anyone in the group "demo-k8s" to have access to this application:

Let's also create a group called "k8s-admins" and add our test user to it. We'll use this group later on to authorize cluster administrators.

Deploying the OpenUnison Operator

Now that Okta is ready to go, let's deploy OpenUnison. The first step is to pull the Helm charts and deploy the operator. An operator in Kubernetes is a container that watches for changes to a custom object type and makes updates accordingly to get the cluster's state in line with what's expected based on the custom resource. In OpenUnison's case, the operator will generate a PKCS12 keystore based on certificates in Kubernetes Secrets, setup Deployment and Ingress objects, and deploy certificates as needed for the dashboard. It also creates a CronJob that will run every night to check if the self-signed certificates generated by the operator need to be renewed. To get started, make sure you have helm 3.x installed then run the following commands:

-- CODE language-shell --
$ helm repo add tremolo https://nexus.tremolo.io/repository/helm/
$ helm repo update
$ kubectl create ns openunison
$ helm install openunison tremolo/openunison-operator -n openunison
$ watch kubectl get pods -n openunison

Once the openunison-operator pod is running, we're ready to create a Secret.

Create Your Secret

You never want to store secret information in a custom resource or ConfigMap in Kubernetes. Instead, secret information should be stored in a Secret object. Are secrets encrypted? No. Are they stored in a way that's easier to secure? Probably not. Why use a Secret then? It makes it easier to segregate access via RBAC and other authorization methods. It also makes it easier to integrate with 3rd party secret management tools. That said, here's our secret (the OIDC_CLIENT_SECRET value here is a placeholder):

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: orchestra-secrets-source
  namespace: openunison
data:
  K8S_DB_SECRET: aW0gYSBzZWNyZXQ=
  OIDC_CLIENT_SECRET: c2VjcmV0LWZyb20tb2t0YQ==
  unisonKeystorePassword: aW0gYSBzZWNyZXQ=

The unisonKeystorePassword and K8S_DB_SECRET can be any base64 encoded randomness; they're used internally. If you need to generate them, here's a quick sketch:
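-- CODE language-shell --
# generate base64 encoded randomness for unisonKeystorePassword and K8S_DB_SECRET
$ openssl rand -base64 24

The only value that is tied to Okta is the OIDC_CLIENT_SECRET. This is the base64 encoded client secret from your Okta application. You can get it from logging into your dashboard and going to your application: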

Copy the secret, base64 encode it (echo -n 'your-client-secret' | base64), and use it for the value of OIDC_CLIENT_SECRET. Save your secret yaml and add it to your cluster:

-- CODE language-shell --
$ kubectl create -f /path/to/secret.yaml

If you haven't already deployed the Kubernetes Dashboard, now is the right time to do so before we deploy OpenUnison:

-- CODE language-bash --
# v2.0.0 was the current dashboard release when this post was written
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

Deploy The Orchestra Login Portal

Before we deploy OpenUnison we need to determine two hosts: one for the login portal and one for the dashboard. If you're planning on using a managed Kubernetes service like EKS, AKS, or GKE, you'll need a third host name to host the API proxy to support impersonation. These host names need to be registered in a DNS service that is accessible from your browser, the OpenUnison pod, and the Kubernetes API server. For testing, I'm a big fan of using nip.io to get a host name based on an IP address. Using this template, here are my values (the nip.io host names are illustrative, and the three URLs under oidc come from your discovery document, covered below):

network:
  openunison_host: "k8sou.apps.192-168-2-144.nip.io"
  dashboard_host: "k8sdb.apps.192-168-2-144.nip.io"
  api_server_host: ""
  session_inactivity_timeout_seconds: 900

cert_template:
  ou: "Kubernetes"
  o: "MyOrg"
  l: "My Cluster"
  st: "State of Cluster"
  c: "MyCountry"

image: "docker.io/tremolosecurity/openunison-k8s-login-oidc:latest"
myvd_config_path: "WEB-INF/myvd.conf"
k8s_cluster_name: kubernetes
enable_impersonation: false

dashboard:
  namespace: "kubernetes-dashboard"
  cert_name: "kubernetes-dashboard-certs"
  label: "k8s-app=kubernetes-dashboard"
  service_name: kubernetes-dashboard

certs:
  use_k8s_cm: false

trusted_certs: []

monitoring:
  prometheus_service_account: system:serviceaccount:monitoring:prometheus-k8s

oidc:
  client_id: 0oa3p001ibFsuP3r6357
  # the next three URLs come from your Okta discovery document (placeholders here)
  auth_url: https://dev-XXXXXX.okta.com/oauth2/v1/authorize
  token_url: https://dev-XXXXXX.okta.com/oauth2/v1/token
  userinfo_url: https://dev-XXXXXX.okta.com/oauth2/v1/userinfo
  user_in_idtoken: false
  domain: ""
  scopes: openid email profile groups
  claims:
    sub: sub
    email: email
    given_name: given_name
    family_name: family_name
    display_name: name
    groups: groups

The network block tells your cluster and OpenUnison what hosts to listen on, timeouts, etc. Notice that openunison_host lines up with the host we used in our redirect URL when we set up the application in Okta. This is the central access point for OpenUnison: it hosts the OIDC connection with your cluster and the screen where you get your credentials for kubectl access. The dashboard_host is the host name for the dashboard.

Next, skip down to the oidc section where we specify our connection specific attributes. Get your client id from the classic console by navigating to your application, clicking on the "General" tab, and scrolling to the bottom:

The auth_url, token_url, and userinfo_url all come from your Okta discovery document, which is hosted at /.well-known/openid-configuration on your Okta account's host. Here's a quick way to get that document from a *nix command line:

-- CODE language-bash --
curl https://YOUR-OKTA-DOMAIN/.well-known/openid-configuration 2>/dev/null | jq -r '.'

That will give you a JSON document containing all of your URLs.
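If you just want the three endpoints the values file needs, here's a quick sketch:

-- CODE language-bash --
# pull just the endpoints OpenUnison needs from the discovery document
curl https://YOUR-OKTA-DOMAIN/.well-known/openid-configuration 2>/dev/null \
  | jq -r '.authorization_endpoint, .token_endpoint, .userinfo_endpoint'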

Next, we'll point out that user_in_idtoken is set to false because, by default, Okta considers the id_token supplied with the access_token to be a "thin" token that doesn't include all of the user's attributes. Setting it to false tells OpenUnison to retrieve the user's claims from the user info endpoint. Finally, the scopes setting includes "groups" so that, when we retrieve the user's information from the userinfo endpoint, we receive the user's groups that start with "k8s-" in the groups claim.

With all our values in hand, we can deploy the OpenUnison helm chart:

-- CODE language-shell --
$ helm install orchestra tremolo/openunison-k8s-login-oidc -n openunison -f ~/Documents/projects/test-helm/values-login-okta.yaml

The helm chart will deploy a few RBAC policies and create an OpenUnison custom resource object. The operator we deployed earlier will see the newly created custom resource and:

  1. Create Secret objects for certificates, including for the Kubernetes Dashboard
  2. Generate a Secret to be used by OpenUnison
  3. Generate a Deployment, Service and Ingress object
  4. Create a CronJob that checks generated certificates every night and re-issues them within 10 days of their expiration

After a few minutes, you should see that you now have two pods in the OpenUnison namespace:

-- CODE language-shell --
$ kubectl get pods -n openunison
NAME                                   READY   STATUS    RESTARTS   AGE
openunison-operator-7d58975678-lkjfx   1/1     Running   0          18h
openunison-orchestra-78858c5f4-8bnkb   1/1     Running   0          4m19s

Now we can log in by going to our network.openunison_host:

You'll see we have logged in using our Okta account. Our unique identifier, or sub, is in the upper left. Click on that to see your profile:

Our test user's groups should show up. We'll be able to use these in our RBAC policies. Next, go back to the Home screen and click on the Kubernetes Dashboard badge:

All those "Unauthorized" messages are because we haven't yet configured our api server to trust OpenUnison. First we'll get OpenUnison's certificate:

-- CODE language-shell --
kubectl get secret ou-tls-certificate -n openunison -o json | jq -r '.data["tls.crt"]' | base64 -d > /tmp/cert.pem

If you take a look at /tmp/cert.pem you'll see a PEM (base64) encoded certificate. This is the same certificate used to access OpenUnison.
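You can sanity check it from the command line with openssl:

-- CODE language-shell --
# confirm the subject and expiration dates of OpenUnison's certificate
$ openssl x509 -in /tmp/cert.pem -noout -subject -dates

Next, we need to get our API server flags: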

-- CODE language-bash --
kubectl describe configmap api-server-config -n openunison
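The output will contain the standard kube-apiserver OIDC flags. They'll look roughly like the following; the issuer host comes from your network.openunison_host (illustrative here) and the CA file is the certificate we just extracted:

-- CODE language-shell --
--oidc-issuer-url=https://k8sou.apps.192-168-2-144.nip.io/auth/idp/k8sIdp
--oidc-client-id=kubernetes
--oidc-username-claim=sub
--oidc-groups-claim=groups
--oidc-ca-file=/etc/kubernetes/pki/ou-ca.pem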

These flags need to be configured on your API server. How you set these options depends on which distribution of Kubernetes you're using. I'm using kubeadm, so we need to first copy our certificate to /etc/kubernetes/pki on the API server and then add these options to /etc/kubernetes/manifests/kube-apiserver.yaml. Once I do those two things, kubeadm will see the configuration has changed and restart the API server container. Once your API server is restarted, go back to your dashboard. You should see something that looks like:

The good news is we don't see generic "Unauthorized" errors anymore! The bad news is we get a new error message saying we're not authorized. This is because, while Kubernetes knows who we are, we haven't authorized access via an RBAC Role and RoleBinding. There's already a ClusterRole called cluster-admin that gives global administrative privileges to our cluster. We need to assign it using a ClusterRoleBinding. We have two options for authorization: we can authorize the user directly (a sketch follows the list below) or we can authorize based on a group from Okta. It's best to do this by group for a few reasons:

  1. Assignment doesn't require updates to the RoleBindings or ClusterRoleBinding objects
  2. Auditing is much easier as you don't have to enumerate all policies
  3. Removing access happens as soon as you remove the user from the group
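For completeness, authorizing a single user directly would look like the sketch below; the subject's name is the sub claim from Okta (the same identifier shown in the upper left of the portal):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: okta-single-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: 00u3fusfj6jFLURbp357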

When we set up our application, we created a group in Okta called k8s-admins, so let's authorize that group instead:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: okta-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: k8s-admins

Once you create this ClusterRoleBinding, refresh your dashboard and voila! You now have admin access to your cluster via the dashboard!

Login to the Kubernetes CLI

Now that we have the dashboard working, what about the kubectl command? Go back to the Home screen and click on "Kubernetes Tokens"; you'll see a screen with several options. Click on the double square (copy) icon next to "kubectl Command" on *nix/macOS or "kubectl Windows Command" on Windows.

Clicking on either of these will copy a kubectl command into your clipboard that will:

  1. Import the CA certificate for your API server
  2. Import the OIDC information needed to allow kubectl to refresh your token
  3. Import your existing id_token
  4. Trust your OpenUnison certificate

Pasting it into your terminal gets you working with the CLI:

-- CODE language-shell --
$ kubectl get nodes
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
$ export KUBECONFIG=/tmp/k
$ export TMP_CERT=$(mktemp) && echo -e "-----BEGIN CERTIFICATE-----\nMIICyDCCAbCgAwIBAgIBADANBgkq
Cluster "kubernetes" set.
Context "kubernetes" created.
User "00u3fusfj6jFLURbp357" set.
Switched to context "kubernetes".
$ kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
k8s-all-in-one   Ready    master   16d   v1.18.1

As you do your work, you may notice a pause or delay every few minutes interacting with kubectl. That's because the tokens are short-lived: one minute, with a minute of skew time. Finally, try logging out of OpenUnison. Wait a minute, and try using kubectl again. You'll see the session is over and you can no longer obtain tokens:

-- CODE language-shell --
$ kubectl get nodes
Unable to connect to the server: failed to refresh token: oauth2: cannot fetch token: 401 Unauthorized

You now have an integrated solution for Kubernetes and Okta! As you expand your policies you can simply add groups to your Okta account and reference them directly in RoleBinding and ClusterRoleBinding objects! Want to watch the entire process start to finish?
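For example, granting a hypothetical k8s-team-a group the built-in admin ClusterRole in a single namespace would look like this (all names here are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: okta-team-a-admins
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: k8s-team-a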


It's 2020 and we know that one of the most important things we can do for security is keep our systems updated. As we detect patches available for CVEs in our containers, we push out updates. Watch the container, and when it's updated, pull the new version into your registry.

What Next?

Take a look at our Kubernetes solutions page to see how we can help automate your cloud native infrastructure!
