Cloud Native

Using Okta with Kubernetes

November 9, 2021

by

Marc Boorshtein

TL;DR

  • Use Okta for authentication, and Okta groups for RBAC Authorization
  • Use OpenUnison's OpenID Connect Login Portal for integration
  • Access the dashboard and CLI with the same credentials

Using Okta and Kubernetes

Okta is a popular authentication service used by enterprises and startups alike. It stores your users and groups, offers multiple multi-factor authentication options, and can integrate with your on-premises Active Directory. Okta supports both the SAML2 and OpenID Connect protocols. In this post we're going to walk through integrating Okta with your Kubernetes cluster using OpenUnison's login portal for OpenID Connect.

If Okta Already Supports OpenID Connect, Why Do I Need OpenUnison?

There are four reasons for using OpenUnison with Okta instead of connecting Kubernetes to Okta directly:

  1. Combined Authentication for Dashboard and kubectl - OpenUnison provides a single point of entry for both the dashboard and kubectl. OpenUnison's reverse proxy provides built-in dashboard integration with your Okta login, eliminating the need for kubectl proxy. The integrated kubectl configurator will create a kubectl configuration file for you, supporting both PowerShell and Bash/Zsh, without manually installing certificates or needing plugins.
  2. If your cluster is managed (i.e. EKS, AKS, or GKE), you can use OpenUnison's impersonation features to integrate Okta into your cluster.
  3. Many of Okta's multi-factor authentication options require a web browser.
  4. The id_token Okta returns with the access_token doesn't include groups, so something needs to call the user info endpoint to retrieve them (see the example call below).
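
As a rough illustration of that last point, here's a hedged sketch of calling Okta's user info endpoint with the access_token (the domain is an example, and the endpoint shown is the org authorization server's standard /oauth2/v1/userinfo path):

-- CODE language-shell --
# Example only: substitute your own Okta domain and a real access_token
$ curl -s -H "Authorization: Bearer $ACCESS_TOKEN" \
    https://dev-874494.okta.com/oauth2/v1/userinfo | jq .groups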

There are multiple kubectl plugins that support the password grant, which lets you log in from the CLI directly without a browser. This method has several downsides:

  1. Limited multi-factor authentication
  2. No dashboard integration
  3. Most don't support the user info endpoint, which prevents you from using Okta's groups in your RBAC policies

Setting up Your Okta Identity Provider

The first two things you'll need to get started are:

  1. An account on okta.com - Seems obvious!
  2. A Kubernetes cluster - Any distribution will do (including a managed cluster)

The first thing we'll need to do is set up our identity provider. Log in to Okta and click on SSO Apps:

Next, click on Create New Application:

The screen will grey out with a "popup" asking you what kind of application you want to create. Choose OIDC - OpenID Connect for Sign-in method and Web Application for Application type.

The next screen will let you name your application and provide a logo. It will also ask for the callback URL for OpenUnison. This URL ensures that no one can force Okta to redirect your authentication to an attacker. The URL will be https://network.openunison_host/auth/oidc, where network.openunison_host comes from your values.yaml. Since our demo network.openunison_host is k8sou.apps.ou.tremolo.dev, the Sign-in redirect URIs value is https://k8sou.apps.ou.tremolo.dev/auth/oidc. At the bottom of the form you can specify which groups have access to this application. For simplicity we specified Allow everyone in your organization to access.

Once you click Save, you'll be brought to the final application configuration screen which will have the data you need to configure OpenUnison.

Let's also create a group called "k8s-admins" and add our test user to it. We'll use this group later on to authorize cluster administrators.

Deploying OpenUnison

With Okta configured, the next step is to download the ouctl command for your platform and rename it to ouctl. Once that is done, retrieve the OpenUnison helm charts and deploy the Kubernetes Dashboard:

-- CODE language-shell --
$ helm repo add tremolo https://nexus.tremolo.io/repository/helm/
$ helm repo update
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
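
If you haven't already downloaded ouctl, a minimal sketch of grabbing it from the command line looks like this (the URL below is a placeholder; pull the release for your platform from the OpenUnison documentation):

-- CODE language-shell --
# Placeholder URL: substitute the real ouctl release for your platform
$ curl -L -o ouctl https://example.com/releases/ouctl-linux-amd64
$ chmod +x ouctl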

Create Your Secret

You never want to store secret information in a custom resource or ConfigMap in Kubernetes. Instead, secret information should be stored in a Secret object. Are Secrets encrypted? No. Are they stored in a way that's easier to secure? Probably not. Why use a Secret then? It makes it easier to segregate access via RBAC and other authorization methods, and it makes it easier to integrate with third-party secret management tools.

Retrieve the Okta client secret and store it in a file so the ouctl utility can access it. You can get it by logging into your Okta admin console and going to your application:
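
For example, assuming you've copied the client secret value, you could write it to a local file with tight permissions and pass that path to ouctl later (the path is arbitrary):

-- CODE language-shell --
# -n avoids writing a trailing newline into the secret file
$ echo -n 'YOUR_OKTA_CLIENT_SECRET' > /path/to/okta-client-secret
$ chmod 600 /path/to/okta-client-secret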

Deploy The Orchestra Login Portal

Before we deploy OpenUnison we need to determine two host names, one for the login portal and one for the dashboard. If you're planning on using a managed Kubernetes service like EKS, AKS, GKE, or Civo, you'll need a third host name for the API proxy to support impersonation. These host names need to be registered in a DNS service that is accessible from your browser, the OpenUnison pod, and the Kubernetes API server. For testing, I'm a big fan of using nip.io to get a host name based on an IP address. Using this template, here are my values:

network:
  openunison_host: "k8sou.apps.192-168-2-144.nip.io"
  dashboard_host: "k8sdb.apps.192-168-2-144.nip.io"
  api_server_host: ""
  session_inactivity_timeout_seconds: 900
  k8s_url: https://192.168.2.144:6443

cert_template:
  ou: "Kubernetes"
  o: "MyOrg"
  l: "My Cluster"
  st: "State of Cluster"
  c: "MyCountry"

image: "docker.io/tremolosecurity/openunison-k8s:latest"
myvd_config_path: "WEB-INF/myvd.conf"
k8s_cluster_name: kubernetes
enable_impersonation: false

dashboard:
  namespace: "kubernetes-dashboard"
  cert_name: "kubernetes-dashboard-certs"
  label: "k8s-app=kubernetes-dashboard"
  service_name: kubernetes-dashboard
certs:
  use_k8s_cm: false

trusted_certs: []
  
monitoring:
  prometheus_service_account: system:serviceaccount:monitoring:prometheus-k8s

oidc:
  client_id: 0oa3p001ibFsuP3r6357
  issuer: https://dev-874494.okta.com/
  user_in_idtoken: false
  domain: ""
  scopes: openid email profile groups
  claims:
    sub: sub
    email: email
    given_name: given_name
    family_name: family_name
    display_name: name
    groups: groups
    
openunison:
  replicas: 1
  non_secret_data:
    K8S_DB_SSO: oidc
    PROMETHEUS_SERVICE_ACCOUNT: system:serviceaccount:monitoring:prometheus-k8s
    SHOW_PORTAL_ORGS: "false"
  secrets: []
  html:
    image: docker.io/tremolosecurity/openunison-k8s-html

The network block tells your cluster and OpenUnison what hosts to listen on, timeouts, etc. Notice that openunison_host lines up with the host we used in our redirect URL when we set up the application in Okta. This is the central access point for OpenUnison; it hosts the OIDC connection with your cluster and the screen where you get your credentials for kubectl access. The dashboard_host is the host name for the dashboard.

Next, skip down to the oidc section where we specify our connection-specific attributes. Get your client ID from your application by navigating to the Client Credentials section:

The oidc.issuer is the full URL of your Okta domain. In the above configuration my Okta domain is dev-874494.okta.com, so my oidc.issuer is https://dev-874494.okta.com.
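
If you want to sanity check the issuer, you can pull its OpenID Connect discovery document, which is published at the standard well-known path (substitute your own Okta domain):

-- CODE language-shell --
# The discovery document lists the issuer plus the authorization, token, and userinfo endpoints
$ curl -s https://dev-874494.okta.com/.well-known/openid-configuration | jq .issuer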

Next, note that oidc.user_in_idtoken is set to false because by default Okta treats the id_token supplied with the access_token as a "thin" token that doesn't include all of the user's attributes. Setting it to false tells OpenUnison to retrieve the user's claims from the user info endpoint. Finally, the scopes setting includes "groups" so that the user's groups come back in the groups claim when we retrieve the user's information from the userinfo endpoint.

There's one more step before we can deploy OpenUnison: we need to configure Okta to send groups to OpenUnison so they can be included in our RBAC bindings. Click on Sign On (it's at the top of the screen), then in the bottom box click Edit next to the OpenID Connect ID Token header. Next to Groups claim filter, change the dropdown from Starts With to Matches regex and put .* in the text box next to it. Click Save to finish the configuration. This will send every group the user is a member of in your Okta domain to OpenUnison. You can come back later and limit this to the groups that are related to your Kubernetes clusters.

Okta Groups Configuration


With all our values in hand, we can deploy the OpenUnison helm chart:

-- CODE language-shell --
$ ./ouctl -s /path/to/okta/client/secret /path/to/values-login-okta.yaml

The ouctl command deploys the OpenUnison operator and helm charts that create a few RBAC policies and an OpenUnison custom resource object. The operator will see the newly created custom resource and:

  1. Create Secret objects for certificates, including for the Kubernetes Dashboard
  2. Generate a Secret to be used by OpenUnison
  3. Generate a Deployment, Service and Ingress object
  4. Create a CronJob that checks generated certificates every night and re-issues them within 10 days of their expiration

After a few minutes, you should see that you now have three pods in the OpenUnison namespace:

-- CODE language-shell --
kubectl get pods -n openunison
NAME                                             READY   STATUS    RESTARTS   AGE
openunison-operator-7cd48b58f4-b5rn2             1/1     Running   0          130m
openunison-orchestra-854b897f46-7xjpv            1/1     Running   0          130m
ouhtml-orchestra-login-portal-55ccc56cc6-lb6rk   1/1     Running   0          128m

Now we can log in by going to our network.openunison_host:

You'll see we have logged in using our Okta account. Our unique identifier, or sub, is in the upper left. Click on that to see your profile:

Our test user's groups should show up. We'll be able to use this in our RBAC policies. Next go back to the Home screen and click on the Kubernetes Dashboard badge:

All those "Unauthorized" messages are because we haven't yet configured our api server to trust OpenUnison. First we'll get OpenUnison's certificate:

-- CODE language-shell --
kubectl get secret ou-tls-certificate -n openunison -o json | jq -r '.data["tls.crt"]' | base64 -d > /tmp/cert.pem

If you take a look at /tmp/cert.pem you'll see a PEM-encoded certificate. This is the same certificate used to access OpenUnison. Next, we need to get our API server flags:

-- CODE language-bash --
kubectl describe configmap api-server-config -n openunison
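
The output contains the flags OpenUnison generated for your cluster. As a hedged example, they typically look something like the following; your issuer host and certificate file name will differ, so use the exact values from the ConfigMap:

-- CODE language-shell --
# Hedged example only: copy the exact flag values from the api-server-config ConfigMap.
# They are added to the kube-apiserver command in /etc/kubernetes/manifests/kube-apiserver.yaml,
# and the CA file below assumes you copied /tmp/cert.pem to /etc/kubernetes/pki/ou-ca.pem.
--oidc-issuer-url=https://k8sou.apps.192-168-2-144.nip.io/auth/idp/k8sIdp
--oidc-client-id=kubernetes
--oidc-username-claim=sub
--oidc-groups-claim=groups
--oidc-ca-file=/etc/kubernetes/pki/ou-ca.pem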

These flags need to be configured on your API server. How you set these options depends on which distribution of Kubernetes you're using. I'm using kubeadm, so we need to first copy our certificate to /etc/kubernetes/pki on the API server and then add these options to /etc/kubernetes/manifests/kube-apiserver.yaml. Once I do those two things, the kubelet will see the manifest has changed and restart the API server container. Once your API server is restarted, go back to your dashboard. You should see something that looks like:

The good news is we don't see generic "Unauthorized" errors anymore! The bad news is we get a new error message saying we're not authorized. This is because while Kubernetes now knows who we are, we haven't authorized access via an RBAC Role and RoleBinding. There's already a ClusterRole called cluster-admin that grants global administrative privileges to our cluster, so we need to assign it using a ClusterRoleBinding. We have two options for authorization: we can authorize the user directly, or we can authorize based on a group from Okta. It's best to do this by group for a few reasons:

  1. Assignment doesn't require updates to the RoleBindings or ClusterRoleBinding objects
  2. Auditing is much easier as you don't have to enumerate all policies
  3. Removing access happens as soon as you remove the user from the group

When we set up our application, we created a group in Okta called k8s-admins, so let's authorize that group:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: okta-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: k8s-admins
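
Save the binding to a file and apply it (the file name here is just an example):

-- CODE language-shell --
$ kubectl apply -f okta-cluster-admins.yaml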

Once you create this ClusterRoleBinding, refresh your dashboard and voila! You now have admin access to your cluster via the dashboard!

Login to the Kubernetes CLI

Now that we have the dashboard working, what about the kubectl command? Go back to the Home screen and click on "Kubernetes Tokens"; you'll see a screen with several options. Click on the double square next to "kubectl Command" on *nix/macOS or "kubectl Windows Command" on Windows.

Clicking on either of these will copy a kubectl command into your clipboard; as sketched below, that command will:

  1. Import the CA certificate for your API server
  2. Import the OIDC information needed to allow kubectl to refresh your token
  3. Import your existing id_token
  4. Trust your OpenUnison certificate
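
Under the hood it's mostly a series of kubectl config calls using kubectl's built-in oidc auth provider (which these releases used). A simplified, hedged sketch of what the copied command does, with values abbreviated and paths hypothetical, looks like:

-- CODE language-shell --
# Simplified sketch only: the real command embeds your CA certificates and tokens inline
$ kubectl config set-cluster kubernetes --server=https://192.168.2.144:6443 \
    --certificate-authority=/path/to/api-server-ca.pem --embed-certs=true
$ kubectl config set-credentials 00u3fusfj6jFLURbp357 --auth-provider=oidc \
    --auth-provider-arg=idp-issuer-url=https://k8sou.apps.192-168-2-144.nip.io/auth/idp/k8sIdp \
    --auth-provider-arg=client-id=kubernetes \
    --auth-provider-arg=idp-certificate-authority=/path/to/openunison-ca.pem \
    --auth-provider-arg=id-token=YOUR_ID_TOKEN \
    --auth-provider-arg=refresh-token=YOUR_REFRESH_TOKEN
$ kubectl config set-context kubernetes --cluster=kubernetes --user=00u3fusfj6jFLURbp357
$ kubectl config use-context kubernetes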

Pasting the copied command into your terminal gets you working with the CLI:

-- CODE language-shell --
$ kubectl get nodes
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
$ export KUBECONFIG=/tmp/k
$ export TMP_CERT=$(mktemp) && echo -e "-----BEGIN CERTIFICATE-----\nMIICyDCCAbCgAwIBAgIBADANBgkq
.
.
.
Cluster "kubernetes" set.
Context "kubernetes" created.
User "00u3fusfj6jFLURbp357" set.
Switched to context "kubernetes".
$ kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
k8s-all-in-one   Ready    master   16d   v1.18.1

As you do your work, you may notice a pause or delay every few minutes when interacting with kubectl. That's because the tokens are short-lived: one minute, with a minute of skew time. Finally, try logging out of OpenUnison, wait a minute, and try using kubectl again. You'll see the session is over and you can no longer obtain tokens:

-- CODE language-shell --
$ kubectl get nodes
Unable to connect to the server: failed to refresh token: oauth2: cannot fetch token: 401 Unauthorized
Response:

You now have an integrated solution for Kubernetes and Okta! As you expand your policies you can simply add groups to your Okta account and reference them directly in RoleBinding and ClusterRoleBinding objects! Want to watch the entire process start to finish?

Updates?

It's 2022 and we know that one of the most important things we can do for security is keep our systems updated. As we detect patches that are available for CVEs in our containers, we push out updates. Watch the container image and, when it's updated, pull the new version into your registry.
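
For example, if you mirror images into a private registry, the workflow is a standard pull, tag, and push (registry.example.com is a hypothetical registry name):

-- CODE language-shell --
# Pull the updated image, retag it for your own registry, and push it
$ docker pull docker.io/tremolosecurity/openunison-k8s:latest
$ docker tag docker.io/tremolosecurity/openunison-k8s:latest registry.example.com/tremolosecurity/openunison-k8s:latest
$ docker push registry.example.com/tremolosecurity/openunison-k8s:latest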

What Next?

Take a look at our Kubernetes solutions page to see how we can help automate your cloud native infrastructure!
