
Kubernetes Identity Management Part II – RBAC and User Provisioning

August 31, 2016

by

Marc Boorshtein

In our last episode we talked about the new SSO features in Kubernetes 1.3 and got it working.  The flip-side of the access coin from SSO is identity management.  SSO answers the question “who?”, identity management answers “what?” and should also answer “why?”.  In this episode we’re going to walk through Kubernetes’ RBAC model and show off its integration with OpenUnison.

Enable Authorization

When we set up our SSO connection with Kubernetes, we told k8s to use the user_role claim from our JWT to represent our groups with this flag:

-- CODE language-text --
--oidc-groups-claim=user_role

At the time this didn’t have much impact because we weren’t doing anything with groups, so we didn’t really discuss the user_role claim.  This claim is defined in Keycloak and is mapped to the departmentNumber attribute in our FreeIPA server.  This attribute stores the roles as they’re defined in k8s (more on that later).  So when you authenticate, Keycloak loads your departmentNumber attribute and creates a claim in the JWT that the API server knows to map to an authorization.
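To make the claim mapping concrete, here’s a short sketch (ours, not part of the actual setup) of what the API server sees: a JWT whose payload carries the user_role claim.  The claim values and helper name below are illustrative.

```python
import base64
import json

def decode_jwt_payload(jwt: str) -> dict:
    """Decode the middle (payload) segment of a JWT without verifying it."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A sample payload as Keycloak might emit it after mapping departmentNumber
# to user_role (the values here are made up):
payload = {"sub": "mmosley", "user_role": ["admin"]}
body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("=")
token = "header." + body + ".signature"

# The API server treats the claim named by --oidc-groups-claim as the
# user's groups:
print(decode_jwt_payload(token)["user_role"])
```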

Natively, k8s doesn’t enforce any authorization policy of its own.  It supports plugins that anyone can write to enforce authorization rules.  There are two available out of the box:

  1. Role-Based Access Control (RBAC) – The folks at CoreOS moved OpenShift’s model into k8s
  2. Webhooks – You can implement your own simple RESTful web service that accepts a payload describing who is making a request and what the request is for, and answers “yes” or “no”

While #2 offers some interesting possibilities, I’ve worked with the RBAC model in OpenShift, so I decided to start there.  The RBAC model provides a simple language to define a Role, which describes a type of access, and a RoleBinding, which defines the subjects that receive it.  So for a user to be authorized:

  1. k8s finds a Role that matches the request being made
  2. k8s sees if your user matches any of the subjects in the RoleBindings for that Role
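The two-step check above can be sketched in a few lines of Python.  This is a simplified model of our own, not the actual k8s implementation, and all the names in it are illustrative:

```python
def is_authorized(user_groups, verb, resource, roles, bindings):
    """Simplified model of the two-step RBAC check described above."""
    for role in roles:
        # Step 1: does this Role match the request being made?
        if (("*" in role["verbs"] or verb in role["verbs"]) and
                ("*" in role["resources"] or resource in role["resources"])):
            # Step 2: is the user a subject of a binding for that Role?
            for binding in bindings:
                if (binding["role"] == role["name"] and
                        user_groups & set(binding["groups"])):
                    return True
    return False

# A role that allows everything, bound to the "admin" group:
roles = [{"name": "admin-role", "verbs": {"*"}, "resources": {"*"}}]
bindings = [{"role": "admin-role", "groups": ["admin"]}]

print(is_authorized({"admin"}, "get", "pods", roles, bindings))  # True
print(is_authorized({"users"}, "get", "pods", roles, bindings))  # False
```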

Before we can do much we need to enable the RBAC model in the API server.  To enable RBAC:


-- CODE language-text --
--runtime-config=extensions/v1beta1/networkpolicies=true,rbac.authorization.k8s.io/v1alpha1
--authorization-mode=RBAC
--authorization-rbac-super-user=kube-admin

The first flag will already be in your config; you’ll be adding the rbac.authorization.k8s.io/v1alpha1 portion, which tells k8s to enable the RBAC API.  The second flag enables RBAC.  The final flag creates a super user.  The super user is needed because when k8s starts there won’t be any roles, and since you can’t grant someone more roles than you already have, there would be no way to create your first roles.  This flag bypasses that by creating a single user that can do anything.

A note about the super user: the simplest route is to use the common name (cn) of the subject of the certificate you are using with kubectl.  You can use your SSO setup by specifying an SSO user, but you’ll need to prefix the username with the URL of your OIDC provider.  For instance:

-- CODE language-text --
--authorization-rbac-super-user=https://kcdev.tremolosecurity.com:8443/auth/realms/kubernetes#mmosley.ent2k12.domain.com

This issue came up during testing and discussion on the sig-auth Slack channel.
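In other words, the flag value is just the OIDC issuer URL and the username joined with a “#”.  A tiny sketch of our own (the helper name is made up):

```python
def oidc_super_user(issuer: str, username: str) -> str:
    """Build the value for --authorization-rbac-super-user when the
    super user comes from an OIDC provider: issuer URL + '#' + username."""
    return issuer + "#" + username

print(oidc_super_user(
    "https://kcdev.tremolosecurity.com:8443/auth/realms/kubernetes",
    "mmosley.ent2k12.domain.com",
))
```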

Create Policies

Once the API server is restarted, it’s time to create our first policies.  I started with the policies below from https://github.com/micahhausler/k8s-oidc-helper and adapted them a bit:

-- CODE language-yaml --
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
  nonResourceURLs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-binding
subjects:
- kind: Group
  name: admin
roleRef:
  kind: ClusterRole
  name: admin-role

Once this role was added, I was able to use kubectl as any user whose departmentNumber had the value “admin”.  When I tried to log in to the dashboard, though, it was broken.  Currently the dashboard uses a service account to interact with k8s rather than the context of the logged-in user.  This is something the k8s dashboard team is looking at supporting, but for now it’s an all-or-nothing deal for anyone accessing the dashboard.  It also means that the service account the dashboard uses must be a subject in the ClusterRoleBinding.

-- CODE language-yaml --
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
  nonResourceURLs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-binding
subjects:
- kind: Group
  name: admin
- kind: ServiceAccount
  name: default
  namespace: kube-system
- kind: ServiceAccount
  name: openunison
  namespace: default
roleRef:
  kind: ClusterRole
  name: admin-role

We also added an openunison service account; more on this later.  At this point we’re able to log in to k8s using both kubectl and the dashboard.

Managing Identities

Now we have SSO and a really basic set of policies.  All authorizations are driven off of the departmentNumber attribute.  How do we add roles to that attribute?  This is where the “management” of identity management comes in.  You could manually add values through LDAP or the FreeIPA interface, but that has numerous issues:

  1. Manual entry error
  2. Loose coupling between k8s and the identity store
  3. How do you track why people have access?
  4. Your admins have better things to do than add attributes to users

OpenUnison’s got some features that help here.  We have an OpenShift dynamic workflow type that lets you create a single workflow for a set of objects returned by Kubernetes (this was originally written for OpenShift, but it works great for k8s too).  We can drive workflows off of pretty much anything, such as a namespace (similar to OpenShift), or directly access a list of roles.  Whatever we want to use, we tell OpenUnison what to do by adding annotations.  For instance, I added the following annotations to the kube-system namespace:

-- CODE language-yaml --
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openunison/access: admin
    openunison/approver: cn=infrastructure-approvers,ou=enterprise-groups,dc=domain,dc=com
    openunison/description: Kubernetes system namespace
  creationTimestamp: 2016-08-16T19:56:43Z
  name: kube-system
  resourceVersion: "151839"
  selfLink: /api/v1/namespaces/kube-system
  uid: 8e60c835-63eb-11e6-bf8a-080027e9ede8
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

The first annotation tells OpenUnison the name of the group someone needs in order to access this namespace.  The second annotation tells OpenUnison who can approve access, and the last one provides a description.  Using this information and a well-defined dynamic workflow, new namespaces added with these annotations will automatically be included in OpenUnison.
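Conceptually, OpenUnison is just reading those three annotations off each namespace.  A rough Python sketch of that extraction (the function and key names in the returned dict are ours):

```python
def workflow_inputs(namespace):
    """Pull the OpenUnison annotations from a namespace object as returned
    by GET /api/v1/namespaces.  Returns None for unmanaged namespaces."""
    annotations = namespace.get("metadata", {}).get("annotations", {})
    if "openunison/access" not in annotations:
        return None  # no access annotation, so nothing to drive a workflow
    return {
        "name": namespace["metadata"]["name"],
        "access_group": annotations["openunison/access"],
        "approver": annotations.get("openunison/approver"),
        "description": annotations.get("openunison/description", ""),
    }

# The kube-system namespace from the example above:
ns = {
    "metadata": {
        "name": "kube-system",
        "annotations": {
            "openunison/access": "admin",
            "openunison/approver": "cn=infrastructure-approvers,"
                                   "ou=enterprise-groups,dc=domain,dc=com",
            "openunison/description": "Kubernetes system namespace",
        },
    }
}
print(workflow_inputs(ns)["access_group"])  # admin
```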

To make this work, we need a service account and secret for pulling the namespaces.  Finally, we create the OpenShift provisioning target in OpenUnison:

-- CODE language-xml --
<target name="kubernetes" className="com.tremolosecurity.unison.openshiftv3.OpenShiftTarget">
<params>
<!-- The protocol and host and port of the OpenShift api web server. Do NOT include any path information -->
<param name="url" value="https://172.17.4.99:443"/>
<!-- Set to true if using an OpenShift service account instead of a user account -->
<param name="useToken" value="true" />
<!-- IF useToken is true, set this to the token. See "Creating a Service Account" -->
<param name="token" value="eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im9wZW51bmlzb24tc2VjcmV0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im9wZW51bmlzb24iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhN2VlNjcyMS02YWQxLTExZTYtOTI4Ni0wODAwMjdlOWVkZTgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpvcGVudW5pc29uIn0.jgf9KVu_qwn4WuyxiEyyZJIF9Rkhj4rOh8-hqKNbHICEjdRYs6OnYSG8kSUKV8HaBF7AiXV1vH1IW4ZKuPHQmATAgFUBWnZVjeqYOO5a6C8wiGsXWxN5UKqj7iy1EzAKd4a7I01a_0A4KhgEzSlaP0N_7EpvZXUMD4r-5xcslL-_fXeQC4lI1P3_jleLbG72fbPWINI91tpyv-4ihC1Yv-vxfaMdLFOoyUeEtqTiBplmZG1zq8agLEtAhyU83jh-5xgLq18MLAfamMS28WfNi22DB15-IzX27dbV0CNEKQuhHO2P11DBTEVXAzXKa4bDq8XPmK7Au-4aJW3wEKCdnw" />
</params>
</target>
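For reference, the service account this token belongs to (its payload decodes to the openunison service account and openunison-secret secret in the default namespace) could be created with a manifest along these lines.  Treat this as a sketch: in k8s a token secret is also generated automatically when the service account is created, so the explicit Secret is optional.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: openunison
  namespace: default
---
# Optional: an explicitly named token secret bound to the service account
apiVersion: v1
kind: Secret
metadata:
  name: openunison-secret
  namespace: default
  annotations:
    kubernetes.io/service-account.name: openunison
type: kubernetes.io/service-account-token
```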

And the workflow:

-- CODE language-xml --
<workflow name="kubernetes" label="Kubernetes Namespace $name$" description="Description - $openunison/description$" inList="true" orgid="687da09f-8ec1-48ac-b035-f2f182b9bd1e">
<dynamicConfiguration dynamic="true" className="com.tremolosecurity.unison.openshiftv3.wf.OpenShiftWorkflows">
<param name="target" value="kubernetes"/>
<param name="kind" value="/api/v1/namespaces"/>
</dynamicConfiguration>
<tasks>
<customTask className="com.tremolosecurity.provisioning.customTasks.LoadGroups">
<param name="nameAttr" value="uid"/>
<param name="inverse" value="false"/>
</customTask>
<approval label="Approve access to Kubernetes namespace $name$">
<emailTemplate>You have open approvals</emailTemplate>
<!-- List of authorization rules to determine who can act on this request, constraints supports parameters -->
<approvers>
<rule scope="group" constraint="$openunison/approver$" />
</approvers>
<onSuccess>
<customTask className="com.tremolosecurity.provisioning.customTasks.LoadAttributes">
<param name="name" value="departmentNumber"/>
<param name="nameAttr" value="uid"/>
</customTask>
<addAttribute name="departmentNumber" value="$openunison/access$" remove="false"/>
<provision sync="true" target="rhelent.lan" setPassword="false" onlyPassedInAttributes="false" >
<attributes>
<value>departmentNumber</value>
</attributes>
</provision>
</onSuccess>
<onFailure>
<customTask className="com.tremolosecurity.provisioning.customTasks.LoadAttributes">
<param name="name" value="departmentNumber"/>
<param name="nameAttr" value="uid"/>
</customTask>
<addAttribute name="departmentNumber" value="$openunison/access$" remove="true"/>
<provision sync="true" target="rhelent.lan" setPassword="false" onlyPassedInAttributes="false" >
<attributes>
<value>departmentNumber</value>
</attributes>
</provision>
</onFailure>
</approval>
</tasks>
</workflow>

This diagram shows how this will work:

[Diagram: the k8s provisioning flow]

  1. The user will login to ScaleJS via Keycloak
  2. The user will look at the list of k8s namespaces
  3. OpenUnison will query k8s for all namespaces, populating the dynamic workflow with information from the annotations
  4. The user requests one of the workflows and submits the request
  5. Once the request is approved, OpenUnison provisions the access value to the user’s departmentNumber attribute
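Step 5 boils down to treating departmentNumber as a multi-valued set of role names.  A minimal sketch (the function names are our own) of the grant and revoke performed by the addAttribute tasks in the workflow above:

```python
def grant(dept_numbers, access):
    """Add an access value to departmentNumber if it isn't already there
    (mirrors addAttribute with remove="false")."""
    return dept_numbers if access in dept_numbers else dept_numbers + [access]

def revoke(dept_numbers, access):
    """Remove an access value from departmentNumber
    (mirrors addAttribute with remove="true")."""
    return [v for v in dept_numbers if v != access]

print(grant(["users"], "admin"))            # ['users', 'admin']
print(revoke(["users", "admin"], "admin"))  # ['users']
```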

Once this is done (and all steps recorded), our user is able to use OIDC to get into Kubernetes.

Special Thanks

Once again, I’d love to take credit for figuring this all out, but I’m really just not that smart!  Thankfully the folks on the sig-auth channel are.  Specifically @ericchiang, @deads2k, @micahhausler, @whitlockjc and @erictune.

What’s Next?

Well, now that we have this working, I think we can make it easier to deploy.  Setting up a second directory just so you don’t have to pollute your AD can be overkill, so I think an object database like MongoDB would work great there.  Also, I’m thinking we can streamline some of the integration too.  If you have any feedback, we’d love to hear it.  And of course, stay tuned!
