Beyond RBAC in OpenShift – Open Policy Agent

August 11, 2018

by Marc Boorshtein

RBAC, or Role Based Access Control, in OpenShift is a powerful way to manage who has access to what, and in previous posts we discussed how to manage that access.  RBAC doesn't always provide fine-grained enough control over individual resources inside of a project, though.  How do you make sure that someone doesn't create a persistent volume claim against a sensitive persistent volume?  In this blog post we'll walk through how we locked down access to CIFS file shares that are bound to specific users using the Open Policy Agent.

Securing Persistent Volume Access

There are multiple examples of how to create a FlexVolume in Kubernetes and OpenShift for accessing a CIFS (Windows file share) drive.  These volumes are usually created on behalf of a specific user, so you want to make sure that other users aren't mounting them.  There are two things you need to make sure of:

  1. Persistent Volume Claims are made only against authorized Persistent Volumes, i.e. I can't access someone else's Persistent Volume
  2. I can't create a claim on a non-existent volume, so I can't "sneak" access to a volume by claiming it as soon as it becomes available

This isn't something RBAC will let us do.  We'll need an additional tool.
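To see why the second point matters, consider a claim like the one below (a sketch; the names are illustrative).  Because it pre-binds to a volume by name, it will simply sit in Pending and bind the moment a volume with that name is created:

-- CODE language-yaml --
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sneaky
  namespace: freeipa3-project
spec:
  # An empty storageClassName disables dynamic provisioning
  storageClassName: ''
  # Pre-bind to a volume that doesn't exist yet; the claim waits
  # and binds as soon as the volume appears
  volumeName: pv-cifs-rdrive-someone-else
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi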

Enter Open Policy Agent (OPA)

OPA provides a great tool for creating policies that are easier to read and manage than code written in a compiled language.  In order to have OpenShift (or Kubernetes) use OPA to extend the built-in authorization capabilities, you need to configure a ValidatingWebhook admission controller.  Once configured, OpenShift will call OPA to validate submitted requests.  The OPA website has an example of how to build a validating webhook.
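The registration itself is a ValidatingWebhookConfiguration.  Here's a minimal sketch scoped to persistent volume claims (the service name, namespace, and caBundle are placeholders, not our exact configuration):

-- CODE language-yaml --
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: opa-validating-webhook
webhooks:
- name: validating-webhook.openpolicyagent.org
  clientConfig:
    service:
      # placeholder service and namespace
      namespace: unison
      name: unison-opa
    # base64-encoded CA certificate used to verify the webhook's TLS cert
    caBundle: <base64-encoded-ca-certificate>
  rules:
  - operations: ["CREATE", "UPDATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["persistentvolumeclaims"]
  failurePolicy: Fail

Once built, our webhook receives an AdmissionReview request: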

-- CODE language-yaml --
---
request:
 uid: 12cd747f-929b-11e8-a1f9-525400887c40
 userInfo:
   extra:
     scopes.authorization.openshift.io:
     - user:full
   groups:
   - administrators-freeipa3-project
   - editor-freeipa3-project
   - system:authenticated:oauth
   - system:authenticated
   username: freeipa3@ent2k12.domain.com
 resource:
   resource: persistentvolumeclaims
   version: v1
   group: ''
 kind:
   kind: PersistentVolumeClaim
   version: v1
   group: ''
 namespace: freeipa3-project
 oldObject:
 operation: CREATE
 object:
   metadata:
     uid: 12cd58b8-929b-11e8-a1f9-525400887c40
     name: pvc-cifs-ldrive-freeipa2
     namespace: freeipa3-project
     creationTimestamp: '2018-07-28T19:18:56Z'
   spec:
     storageClassName: ''
     volumeName: pv-cifs-rdrive-freeipa3
     resources:
       requests:
         storage: 100Gi
     accessModes:
     - ReadWriteMany
   status: {}
apiVersion: admission.k8s.io/v1beta1
kind: AdmissionReview

What comes into your webhook is actually JSON, but I find YAML much easier to read.  The main block I want to look at is the userInfo block.  It tells us the username and the groups in OpenShift, but nothing else.  That's because OpenShift doesn't generally track any other attributes about a user.  OpenShift has its own internal OpenID Connect identity provider (IdP) that developers and users interact with.  This is different than upstream Kubernetes, where you need to supply your own IdP.  In upstream Kubernetes the attributes (aka claims) in your OpenID Connect token can be included in the userInfo block, but because OpenShift has its own IdP, even when using OpenID Connect the attributes sent to OpenShift don't make it into the webhook request.

This causes an issue for us.  We’re marking PersistentVolumes with the user’s uidNumber, which we don’t get from the AdmissionReview:

-- CODE language-yaml --
apiVersion: v1
kind: PersistentVolume
metadata:
 annotations:
   com.tremolosecurity.accessType: user_only
   pv.kubernetes.io/bound-by-controller: "yes"
 creationTimestamp: 2018-07-10T15:33:23Z
 name: pv-cifs-rdrive-freeipa3
 resourceVersion: "16602333"
 selfLink: /api/v1/persistentvolumes/pv-cifs-rdrive-freeipa3
 uid: 9510d354-8456-11e8-9959-525400887c40
spec:
 accessModes:
 - ReadWriteMany
 capacity:
   storage: 100Gi
 claimRef:
   apiVersion: v1
   kind: PersistentVolumeClaim
   name: pvc-cifs-rdrive-freeipa3
   namespace: freeipa3-project
   resourceVersion: "10261262"
   uid: 95369594-8456-11e8-9959-525400887c40
 flexVolume:
   driver: azure/cifs
   options:
     source: //adfs.ent2k12.domain.com/xdrive
     uidnum: "160812449"
     username: freeipa3@ENT2K12.DOMAIN.COM
 persistentVolumeReclaimPolicy: Retain
status:
 phase: Bound

The username is present in both the AdmissionReview and the PersistentVolume, so why not use that?  Best practice is to always use an identifier that is immutable (it can never change) and is not derived from anything that may change (such as a last name).

Injecting Identity Data with Unison

The user's uidnumber is stored in FreeIPA.  For OPA to know about this number, it needs to be able to get it from one of:

  1. An attribute in the AdmissionReview
  2. A token used to authenticate requests to OPA
  3. OPA's own internal data store

Kubernetes doesn't know how to generate tokens for each individual user when calling admission controller webhooks, and OPA's internal storage system is in-memory only, so we'd need to be constantly syncing data into OPA.  Instead, we decided to use Unison as a reverse proxy between OpenShift and OPA.  We already had the connection to FreeIPA established in Unison, so we could access the uidnumber, and since Unison is a reverse proxy we could inject an additional field into the userInfo block.  We decided to create a new open source module for Unison and OpenUnison to handle this interaction with OPA; it will be merged into the main OpenUnison branch in a future release.  Using this new module, we were able to add a new attribute to the userInfo block of the AdmissionReview that is a JWT of our additional identity data.  We decided on a JWT because:

  1. We wanted to clearly differentiate between the data we added and the original data
  2. We wanted a way to verify the data, since it's being used for authorization decisions
  3. We wanted a way to expire the data so it can't be abused once it is no longer valid

Once we deployed Unison as a reverse proxy, the AdmissionReview had a new attribute:

-- CODE language-yaml --
---
request:
 uid: 12cd747f-929b-11e8-a1f9-525400887c40
 userInfo:
   injectedIdentity: eyJraWQiOiJDTj1qd3Qtc2lnLCBPVT1kZXYsIE89ZGV2LCBMPWRldiwgU1Q9ZGV2LCBDPWRldi1DTj1qd3Qtc2lnLCBPVT1kZXYsIE89ZGV2LCBMPWRldiwgU1Q9ZGV2LCBDPWRldi0xNTI4Njg0NDk0MTI5IiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL3VuaXNvbi1vcGEudW5pc29uLnN2Yy5jbHVzdGVyLmxvY2FsIiwiYXVkIjoiaHR0cHM6Ly9vcGEudW5pc29uLnN2YyIsImV4cCI6MTUzMjgwNTYwMCwianRpIjoid3kyOFctTFExb0pzTngtZEVSdzFWUSIsImlhdCI6MTUzMjgwNTU0MCwibmJmIjoxNTMyODA1NDgwLCJub25jZSI6ImE0NWVhMTQxLWJlYmEtNDBhNC05MjUxLWQ2ODAwZjBhODM4MCIsInN1YiI6ImZyZWVpcGEzQGVudDJrMTIuZG9tYWluLmNvbSIsInJvbGVzIjpbIk9wZW5TaGlmdCAtIGFkbWluaXN0cmF0b3JzLWZyZWVpcGEzLXByb2plY3QiLCJPcGVuU2hpZnQgLSBlZGl0b3ItZnJlZWlwYTMtcHJvamVjdCIsIkZyZWVJUEEgLSBhcHBsaWNhdGlvbnMtb3BlbnNoaWZ0IiwiRnJlZUlQQSAtIGFwcGxpY2F0aW9ucy1vcGVuc2hpZnQta2V5dGFicyJdLCJ1aWRudW1iZXIiOiIxNjA4MTI0NDkifQ.YJV06bPlntoZsXV5DKCUSZ974Fkrxlxp-BxUwZQwQBzrh4bA5-HsGlvOfikN3AoVIvSkgOGwL98PpmP4e9IN8idfTOoiXehB1rJkNKfJ6GMTkB_hkBz3OYnsuOF1UJGc-I_8XMVt1SMYd1EJX7ZRScxFgY_zSJNRyWyan7xcepQ
   extra:
     scopes.authorization.openshift.io:
     - user:full
   groups:
   - administrators-freeipa3-project
   - editor-freeipa3-project
   - system:authenticated:oauth
   - system:authenticated
   username: freeipa3@ent2k12.domain.com
 resource:
   resource: persistentvolumeclaims
   version: v1
   group: ''
 kind:
   kind: PersistentVolumeClaim
   version: v1
   group: ''
 namespace: freeipa3-project
 oldObject:
 operation: CREATE
 object:
   metadata:
     uid: 12cd58b8-929b-11e8-a1f9-525400887c40
     name: pvc-cifs-ldrive-freeipa2
     namespace: freeipa3-project
     creationTimestamp: '2018-07-28T19:18:56Z'
   spec:
     storageClassName: ''
     volumeName: pv-cifs-freeipa2-freeipa2-share-x
     resources:
       requests:
         storage: 100Gi
     accessModes:
     - ReadWriteMany
   status: {}
apiVersion: admission.k8s.io/v1beta1
kind: AdmissionReview

There's now an additional attribute called injectedIdentity that is a JWT carrying our identity data.  Decoded, its claims are:

-- CODE language-json --
{
 "iss": "https://unison-opa.unison.svc.cluster.local",
 "aud": "https://opa.unison.svc",
 "exp": 1532805600,
 "jti": "wy28W-LQ1oJsNx-dERw1VQ",
 "iat": 1532805540,
 "nbf": 1532805480,
 "nonce": "a45ea141-beba-40a4-9251-d6800f0a8380",
 "sub": "freeipa3@ent2k12.domain.com",
 "roles": [
   "OpenShift - administrators-freeipa3-project",
   "OpenShift - editor-freeipa3-project",
   "FreeIPA - applications-openshift",
   "FreeIPA - applications-openshift-keytabs"
 ],
 "uidnumber": "160812449"
}

We now have identity data about the user, including their uidnumber and roles in FreeIPA.

Telling OPA About Persistent Volumes

Now that we know about the user, we needed to know about the available persistent volumes.  OPA comes with a sidecar (kube-mgmt) for running in Kubernetes that will keep OPA's internal data store eventually synchronized with Kubernetes.  By eventually, I mean it's not in real time, but it's pretty quick.  The example from OPA syncs in namespace data, but we needed persistent volumes.  The view role used by OPA by default doesn't provide the ability to view cluster objects, so we needed to create one that does:

-- CODE language-yaml --
---
# Define cluster role for OPA/kube-mgmt to read persistentvolumes.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: opa-cluster-persistentvolumes-reader
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["list", "get", "watch"]
---
# Grant OPA/kube-mgmt read-only access to persistentvolumes. This lets kube-mgmt
# replicate resources into OPA so they can be used in policies.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: opa-persistentvolume-viewer
roleRef:
  kind: ClusterRole
  name: opa-cluster-persistentvolumes-reader
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  namespace: unison
  name: default

This will give the OPA sidecar access to replicate data about persistent volumes into its internal datastore.
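With that in place, the kube-mgmt sidecar just needs to be told which resources to replicate.  Here's a sketch of the relevant containers in the OPA deployment (the images and exact argument lists are illustrative, not our exact manifests):

-- CODE language-yaml --
containers:
- name: opa
  image: openpolicyagent/opa
  args:
  - "run"
  - "--server"
- name: kube-mgmt
  image: openpolicyagent/kube-mgmt
  args:
  # Replicate cluster-scoped persistent volumes into OPA under
  # data.kubernetes.persistentvolumes
  - "--replicate-cluster=v1/persistentvolumes"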

OPA Policies

Finally, OPA knows about the user and the persistent volumes, so there's enough data to make an authorization decision.  Building on the example from OPA, here are our policies.  The package declaration, import, and user_data rule below are a simplified sketch: user_data just decodes the injected JWT, and in a real deployment you would verify its signature as well:

-- CODE language-javascript --
package kubernetes.admission

# Persistent volumes replicated into OPA by the kube-mgmt sidecar
import data.kubernetes.persistentvolumes

# If the volume doesn't exist yet, fail
deny[msg] {
  pv_name = input.request.object.spec.volumeName
  not pv_exists
  msg = sprintf("invalid persistent volume %q", [pv_name])
}

# Ensure the volume being requested is associated with the requester's
# uidnumber OR the user is a cluster-admin
deny[msg] {
  pv_name = input.request.object.spec.volumeName
  not sa_cluster_admins
  not pv_owned
  not pv_cluster_admin
  # the user isn't authorized for this persistent volume
  msg = sprintf("not authorized for persistent volume %q", [pv_name])
}

# True if the requested volume already exists
pv_exists {
  pv_name = input.request.object.spec.volumeName
  persistentvolumes[pv_name]
}

# True if the volume is marked user-only and its uidnum matches the
# requester's uidnumber from the injected JWT
pv_owned {
  pv_name = input.request.object.spec.volumeName
  pv = persistentvolumes[pv_name]

  pv.metadata.annotations["com.tremolosecurity.accessType"] == "user_only"
  pv.spec.flexVolume.driver == "azure/cifs"
  pv.spec.flexVolume.options.uidnum == user_data.uidnumber
}

# If the pv requires a uidnumber match, it can still be mounted if the
# requester is a cluster admin
pv_cluster_admin {
  pv_name = input.request.object.spec.volumeName
  pv = persistentvolumes[pv_name]

  pv.metadata.annotations["com.tremolosecurity.accessType"] == "user_only"
  pv.spec.flexVolume.driver == "azure/cifs"
  groups = {x | x := user_data.roles[_]}
  groups["OpenShift - cluster-admins"]
}

# Check if the caller is one of the allowed service accounts
sa_cluster_admins {
  allowed_sa = {"system:serviceaccount:unison:unison", "system:serviceaccount:kube-system:persistent-volume-binder"}
  allowed_sa[user_data.sub]
}

# The payload of the injectedIdentity JWT added by Unison. NOTE: decode-only
# is a sketch; in production, verify the signature (e.g. with
# io.jwt.decode_verify) before trusting the claims.
user_data = payload {
  [_, payload, _] = io.jwt.decode(input.request.userInfo.injectedIdentity)
}

The first two blocks are our policies, with the rest of the blocks being supporting rules.  The first block makes sure you can't request a volume that hasn't been created yet; since the data we need to authorize a claim lives on the volume itself, we can't allow a binding to be created without knowing about the volume.  The second block checks whether the volume is marked as requiring a specific user, whether the user making the request is a cluster administrator or one of the allowed service accounts, and finally whether the requester is in fact the owner of the drive.
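When one of the deny rules matches, the webhook's answer back to OpenShift looks roughly like this (a sketch; the uid echoes the request's uid and the message carries the deny reason):

-- CODE language-yaml --
apiVersion: admission.k8s.io/v1beta1
kind: AdmissionReview
response:
  uid: 12cd747f-929b-11e8-a1f9-525400887c40
  allowed: false
  status:
    message: not authorized for persistent volume "pv-cifs-rdrive-freeipa3"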

Conclusions and What's Next

Using Unison to inject identity data made for a very powerful tool for working with OPA.  We can now drive authorization decisions based not just on identity data in OpenShift, but on data from other sources as well.  For our next step, we needed to inject a sidecar for use with Kerberos; leveraging what we've done here, we needed to create a Mutating Webhook to update submitted pods, which required even more identity data.  We'll cover that in another blog post, though.

If you’d like to learn more, please reach out to us on Twitter or on the web!
