Secure Access to Kubernetes From Your Pipeline

by Marc Boorshtein

In Pipelines and Kubernetes Authentication, we talked about why you shouldn't use static ServiceAccount tokens from your pipelines and should instead use your OpenID Connect identity provider. This creates a more secure pipeline by cutting down on token exposure with short-lived tokens, but it introduces some other issues:

  • Doesn't work well with Multi-Factor Authentication - The second factor can be difficult (if not impossible) to use, forcing you to downgrade the security on your service account.
  • Still a password - Which means it must be rotated per your organization's policies.
  • Your identity provider might not be as simple - We used Okta, but other identity providers might make it harder to simulate a login from a script.

If your CI/CD infrastructure runs on a cloud that has a built-in identity system that can be used with your cluster, that's the best solution.

The next best solution would be to use certificate authentication. It doesn't create a secret that has to travel over the wire, and depending on your CI/CD solution, the certificate and private key could even be stored in a hardware security module! As we've discussed multiple times, though, you should never use certificate authentication with Kubernetes, so we need a different solution.

Since we can't use certificate authentication, what can we use? What if we used a certificate to sign a JWT directly from our pipeline, bypassing an identity provider entirely? In this model, our pipeline can generate a short-lived JWT as needed, the "secret" information (the private key) is known only to the pipeline, and we can layer on multiple mitigating features to lock down our connection to the API server. How would this work?

  1. Your pipeline generates a signed JWT with a short life span, identifying itself
  2. When the pipeline makes a request to the API server, it does so through a proxy, configured with the appropriate permissions, that validates the JWT
  3. The proxy validates the JWT, checking that it is still valid, properly signed, and has the correct issuer and audience (see the sketch after this list)
  4. The proxy uses its own service account, with its own permissions, for talking to the API server
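To make step 3 concrete, here's a minimal sketch of the kind of validation the proxy performs. OpenUnison implements this for you; PyJWT is assumed here purely for illustration, and the key path, issuer, and audience values are placeholders:

import jwt  # PyJWT

def validate_pipeline_jwt(token: str) -> dict:
    # The proxy only needs the public key to verify signatures. One can be
    # extracted from a certificate with: openssl x509 -in certificate.crt -pubkey -noout
    public_key = open('/path/to/publicKey.pem').read()

    # decode() verifies the signature and the exp/nbf claims by default, and
    # raises an exception if the issuer or audience doesn't match.
    return jwt.decode(
        token,
        key=public_key,
        algorithms=['RS256'],
        issuer='urn:remote-pipeline',
        audience='kubernetes',
    )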

This model gives you the ability to call the API server without needing a credential produced by the API server itself. We are still generating a token, but that token is short-lived, so a compromised token has limited value: by the time a threat actor could replay it, it would most likely have expired. The next question is: how can we better secure the proxy doing the work of interacting with the API server?

It's important to have multiple layers of defense so that if one fails, others can pick up the slack. This is called Defense in Depth. A failure at a security layer isn't always nefarious; a simple configuration error, or a bug in a dependency, can lead to a failed layer of security. In this case we want to make sure we secure:

  • Private Key - Who has access to the private key and why? Can it be easily rotated?
  • Authentication - How are we validating the incoming JWT?
  • Limit Access - Are we making sure that containers inside the cluster can't access our proxy?
  • Service Account Token - What happens if someone gets access to our ServiceAccount's token that's mounted into our proxy's Pod?
  • Authorization - What is our proxy's ServiceAccount able to do? Is there an audit trail?

This is on top of other measures you should take to protect your cluster, such as node-level protections.

OpenUnison Management Proxy

Today we're announcing the availability of the OpenUnison Management Proxy to help secure your pipelines. It tackles the above issues in an automated way:

  1. OpenUnison has built-in support for validating JWTs against an OIDC Discovery Document or a known public key.
  2. The Helm chart supports the creation of NetworkPolicy objects that restrict access to the proxy to your ingress controller, limiting access from other pods in your cluster.
  3. The Helm charts and OpenUnison support the TokenRequest API out of the box, giving each instance of a Pod its own unique identity that can be tracked and is only valid for ten minutes at a time.

The first point, being able to validate a JWT directly, is an important security measure. It lets your pipeline create a token without having to interact with a third-party identity provider. If you can use an issuer that supports an OpenID Connect Discovery document, just as the API server does when configured with OpenID Connect, then you can rotate keys automatically just by updating your centralized document; OpenUnison will pick up the change automatically (a sketch of such a document follows below). If you can't generate a discovery document, you can still rotate keys by pushing out a new public key to your clusters by whatever method works best for you.

Using a NetworkPolicy to limit access to the proxy cuts down an attacker's ability to get into the proxy through its front door from inside the cluster.

Finally, supporting the TokenRequest API limits the risk of the proxy's token being used after a compromise. The API server will only accept the token for ten minutes after its creation, and if it were abused, you'd know exactly which Pod was compromised, making it easier to identify how that happened. Using the TokenRequest API also means the token isn't stored in your etcd database!
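For reference, a minimal discovery document might look like the sketch below. The URLs are placeholders; issuer must match what your verifier expects, and jwks_uri points at your current set of public keys, so publishing a new key there is all a rotation requires:

{
  "issuer": "https://pipelines.example.com",
  "jwks_uri": "https://pipelines.example.com/jwks"
}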

Using The Management Proxy

Our proxy will be called from a pipeline in a typical fashion. We'll assume that we have Python 3 and kubectl installed in our pipeline runner. The first step is to generate a key pair. This key pair will be used for signing your JWT. It is not for TLS. I find the simplest way to do this is using OpenSSL:

$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt

You'll be prompted to set some values for your subject. This information is arbitrary. Once we have a keypair, we can generate a JWT. Here's an example of a raw JWT:

eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJ1cm46cmVtb3RlLXBpcGVsaW5lIiwiaXNzIjoidXJuOnJlbW90ZS1waXBlbGluZSIsImF1ZCI6Imt1YmVybmV0ZXMiLCJqdGkiOiI3YjI4NWVlMS01MGE1LTRiMGItYmJkNy0xYzUyNjQyNDc1ZDkiLCJpYXQiOjE2MTgzMjIxODAsImV4cCI6MTYxODMyMjc4MCwibmJmIjoxNjE4MzIyMTIwfQ.MCc8qaYDIvfEIhUP2YMo-j_wsxsaO9pBGMFBmVNSQBNdy2gGL3sH-Gu2l3aDQbgv3bV5j9yRSkyJql5LW8IwQoj4gbwLcbP9wYThQq4bGNi0Q8I1BHdgAqJb08V4IKZxX3ju53Vl6eh88xC87JVmQrt_6nncVo-RCxoJAbKMp67ZZKfeVb6znqbVi5siAlC1sbfgECDI1ptuIN7NLq5ofDKhRTpXLHdwDCoiRkC_in1Tm4M67JuKX27cQ62LhvSAqbo6Hwa4R1PI-j3gzCE_4HauHQOyyJk4sQLyg5DK0Xtv_qlffWU-Mg20-bGFfUmO7vt5lZX4I4EH0JAfF7KVZw

Let's drop this into jwt.io:

{
  "sub": "urn:remote-pipeline",
  "iss": "urn:remote-pipeline",
  "aud": "kubernetes",
  "jti": "7b285ee1-50a5-4b0b-bbd7-1c52642475d9",
  "iat": 1618322180,
  "exp": 1618322780,
  "nbf": 1618322120
}

This looks really similar to our previous JWT with a series of "claims":

  • sub - The "subject" of the claim, or the user. Since our proxy doesn't map to real users, this is an arbitrary identifier.
  • iss - The issuer; this has to match what the proxy is expecting and must be a valid URI
  • aud - The audience that will consume this JWT; must match your proxy's expectation
  • jti - A nonce to add uniqueness to your JWT
  • iat - When the JWT was created, in seconds since the epoch (Jan 1, 1970 UTC)
  • exp - When the JWT expires, in seconds since the epoch (Jan 1, 1970 UTC)
  • nbf - "Not before": the earliest time, in seconds since the epoch (Jan 1, 1970 UTC), at which the JWT is valid
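If you'd rather not paste a token into a third-party site, you can also inspect the claims locally. Here's a quick sketch assuming PyJWT 2.x; signature verification is deliberately skipped because we only want to read the payload:

import jwt  # PyJWT 2.x

raw_jwt = 'eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9...'  # the full token from above

# Decode without verifying the signature, just to inspect the claims.
claims = jwt.decode(raw_jwt, options={"verify_signature": False})
print(claims)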

There's no groups claim in this JWT. That's because our proxy isn't configured to use impersonation. You could use impersonation to make your proxy "multi-tenant", but that's a topic for another blog post. Once we have our JWT, we can use it just as we would any other token with Kubernetes. Now let's step back and create our JWT with some Python:

import datetime
from uuid import uuid4

import jwt  # PyJWT

# The private key we generated with OpenSSL; used only for signing the JWT
private_key = open('/path/to/privateKey.key').read()

now = datetime.datetime.utcnow()

jwt_claims = {
    'sub': 'urn:remote-pipeline',  # arbitrary; the proxy doesn't map to real users
    'iss': 'urn:remote-pipeline',  # must match the issuer the proxy expects
    'aud': 'kubernetes',           # must match the proxy's audience
    'jti': str(uuid4()),           # nonce for uniqueness
    'iat': now,                                      # issued now
    'exp': now + datetime.timedelta(seconds=60),     # expires in one minute
    'nbf': now - datetime.timedelta(seconds=60)      # tolerate a minute of clock skew
}

signed_jwt = jwt.encode(jwt_claims, key=private_key, algorithm='RS256')

Depending on your version of PyJWT, the value of signed_jwt may be bytes rather than a string, so you may need to decode it before using it. Now your token can be used like any other! The full source for this script, which will generate your kubectl configuration for you, is available as a gist.
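For example, assuming the snippet above lives in a hypothetical generate_jwt.py that prints the signed token, and using the proxy host name from the deployment section below:

$ TOKEN=$(python3 generate_jwt.py)
$ kubectl get pods -n my-namespace \
    --server=https://ou-mgmt-proxy.apps-crc.testing \
    --certificate-authority=/path/to/proxy-ca.pem \
    --token="$TOKEN"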

Deploying The Management Proxy

Now that we know how to generate a JWT and what it will look like, it's time to deploy our proxy into our cluster. This is done using Helm. The first step is to download the example values.yaml and customize it. For instance, here's what I'm using for the OCP deployment I'm testing on:

services:
  api_server_host: "ou-mgmt-proxy.apps-crc.testing"
  issuer_url: "urn:remote-pipeline"
  enable_tokenrequest: true
  token_request_audience: https://kubernetes.default.svc
  token_request_expiration_seconds: 600
  enable_cluster_admin: false
  issuer_from_well_known: false
  issuer_certificate_alias: issuer


cert_template:
  ou: "Kubernetes"
  o: "MyOrg"
  l: "My Cluster"
  st: "State of Cluster"
  c: "MyCountry"

image: "docker.io/tremolosecurity/openunison-k8s-managementproxy:latest"

certs:
  use_k8s_cm: false

trusted_certs:
  - name: issuer
    pem_b64: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURoekNDQW0rZ0F3SUJBZ0lVRHhobXlNYmNqNHo1emI4azRhOGNwSEs3MStjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1V6RUxNQWtHQTFVRUJoTUNkWE14RERBS0JnTlZCQWdNQTJSbGRqRU1NQW9HQTFVRUJ3d0RaR1YyTVF3dwpDZ1lEVlFRS0RBTmtaWFl4RERBS0JnTlZCQXNNQTJSbGRqRU1NQW9HQTFVRUF3d0RaR1YyTUI0WERUSXhNRFF3Ck9ERXpNVFl5TWxvWERUTXhNRFF3TmpFek1UWXlNbG93VXpFTE1Ba0dBMVVFQmhNQ2RYTXhEREFLQmdOVkJBZ00KQTJSbGRqRU1NQW9HQTFVRUJ3d0RaR1YyTVF3d0NnWURWUVFLREFOa1pYWXhEREFLQmdOVkJBc01BMlJsZGpFTQpNQW9HQTFVRUF3d0RaR1YyTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUFxVWxoCnRPcTRrbGxTelkrNGVsRVlDME13MkFoWGMvS28welQxeDF2NE9rdkxGUVNaZWNweFJBVjlhWVlGeGFlYnJkU3oKT0RzNVdNRVVrSm8rc2ltY2lyMXRrajZkbVJ3MkdsY0dPWGR3VUdJWi9MWUZBVHhQaExpRklhSGJFNDMwMGt4RAo2R2lXZkRDTkQ3NkdvbjFNUDJNYUxid1dLaWdNblpKNDVZd2xYSjJWZnVPVnhsMWVUSzNhU0N6ZE5zSEpQQlZsCitkWVBEa3BHTEFHQW5NeDVFS2QvQ0I5eWRSZnNtMWZERkZkQ1JCc2tvUzJIc0R3SGpLYlh2d2E1SmlDcldHTWMKTzFJTFNGS240MittNGgzQktkbDl0UnMrcW04T0ZFUGZFTGhvalRyVmRUNGlOQXpHNXQrNHRGMUNmWUZxellkNQpUMjY2UWxMNldkUmcycEhkc3dJREFRQUJvMU13VVRBZEJnTlZIUTRFRmdRVUMvUzNNMTZ1MnBaaE9kWWtMKzV6ClQ0eERUWFF3SHdZRFZSMGpCQmd3Rm9BVUMvUzNNMTZ1MnBaaE9kWWtMKzV6VDR4RFRYUXdEd1lEVlIwVEFRSC8KQkFVd0F3RUIvekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBQmdERHNyQ1ZOb05icHJvNVhEcHFqdDhobmpVYQpOS2RjUjFoQkZBRTFvRW5CL0M5N0FScGVZVEFkZ0tKek9lU1ZVUFdJUkhjMDh1NE4yakpEOFNaZFpobytJcHVuCnljYWhQZWdVWndBSFdKL0NYZEdldVhkOU9YeE1oTk9GM0xBeGVZdllXWVFhQjIvUGFxamRqcjU5SFJFYkg5ejAKdFp1ZkJNK1ZrZ1NRR1BPamRCdEpXeTNyUVE0L1VRaDBRZVByK3lYYTk2c3krWmRDd1J3NnJ3bnp6TFNxNG1RZQp6c1BnU1d6TFJwakJKNVBZSytOdFJTVllwZjZNTGcySkkwaDRZTXhnR3FHNEVoY0lRT3BxSWxLRmdsRGFpbXNJCnd0Ulk2bW1XeDFYUTB6TC9wZWRCcWFabmFkQnp4TnBoOEJ1ZFFIOFQ5Sm56dHJ1cW1DWTZLMDd3NEE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t


monitoring:
  prometheus_service_account: system:serviceaccount:monitoring:prometheus-k8s

network_policies:
  enabled: true
  ingress:
    enabled: true
    labels:
      app.kubernetes.io/name: ingress-nginx
  monitoring:
    enabled: true
    labels:
      app.kubernetes.io/name: monitoring
  apiserver:
    enabled: false
    labels:
      app.kubernetes.io/name: kube-system

openunison:
  replicas: 1
  non_secret_data:
    IGNORE_OPENSHIFT: "true"
  secrets: {}

The services section is where most of the work happens. services.api_server_host is the host name your pipeline will use. This is a proxy, so instead of making calls directly to the API server, you'll make calls to the proxy. So your curl command would use the value of this setting:

curl -k -v -XGET  -H "User-Agent: kubectl/v1.20.1 (linux/amd64) kubernetes/c4d7527" -H "Authorization: Bearer ..." -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json" 'https://ou-mgmt-proxy.apps-crc.testing/api/v1/namespaces/openunison-management-proxy/pods?limit=500'
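If you'd rather not pass connection flags on every call, a minimal kubeconfig pointing kubectl at the proxy might look like the sketch below (the CA path is a placeholder, and the token is the one your pipeline generates):

apiVersion: v1
kind: Config
clusters:
- name: ou-mgmt-proxy
  cluster:
    server: https://ou-mgmt-proxy.apps-crc.testing
    certificate-authority: /path/to/proxy-ca.pem
contexts:
- name: pipeline
  context:
    cluster: ou-mgmt-proxy
    user: pipeline
current-context: pipeline
users:
- name: pipeline
  user:
    token: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9...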

enable_tokenrequest is what lets OpenUnison use the TokenRequest API. In this example we don't have a published OIDC discovery document, so we're setting issuer_from_well_known to false and specifying which certificate to use to validate JWTs. The value of issuer_certificate_alias must name a certificate listed in the trusted_certs section; use the base64-encoded value of the certificate.crt file we generated earlier (the entire contents, don't remove any headers or whitespace). Finally, under network_policies, set enabled to true for the policies to be created for you.
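One way to produce that base64 value (assuming GNU coreutils; -w0 disables line wrapping):

$ base64 -w0 certificate.crt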

Next, the proxy is a couple of Helm chart deployments away:

$ helm repo add tremolo https://nexus.tremolo.io/repository/helm/
$ helm repo update
$ kubectl create ns ou-mgmt-proxy
$ helm install openunison tremolo/openunison-operator --namespace ou-mgmt-proxy

Next, create a Secret that will store your password for the keystore that will be generated by the OpenUnison operator:

apiVersion: v1
kind: Secret
metadata:
  name: orchestra-secrets-source
  namespace: ou-mgmt-proxy
type: Opaque
data:
  unisonKeystorePassword: aW0gYSBzZWNyZXQ=
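The unisonKeystorePassword value is base64-encoded (aW0gYSBzZWNyZXQ= decodes to "im a secret"; generate your own with echo -n 'your-password' | base64). Save the manifest and apply it; the file name here is a placeholder:

$ kubectl apply -f /path/to/orchestra-secrets-source.yaml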

Finally, deploy the proxy:

$ helm install management-proxy tremolo/openunison-k8s-managementproxy --namespace ou-mgmt-proxy -f /path/to/values.yaml

Once your proxy is deployed, you're ready to start working with it from your pipeline.
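Before wiring it into your pipeline, it's worth confirming the pods came up; the exact output will vary with your deployment:

$ kubectl get pods -n ou-mgmt-proxy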

What Can Go Wrong?

As much as I'd like to pretend this is an iron-clad solution, that would be a lie. Nothing is foolproof, and there are risks to every solution. In this particular scenario, the biggest risk is the compromise of the private key used to sign JWTs. If you've been following the news about the SolarWinds breach, you may have heard about a "Golden Ticket" attack; this is how the attackers were able to gain access to so many systems. They used their malicious code to gain access to the private keys used to sign SAML assertions (which are similar to JWTs) and generated their own authentication tokens, much as we are doing here, letting them log in to any application that trusted those keys.

To protect against this kind of attack, it's important to protect your private keys. One way to do that is to rotate them on a regular basis. If, instead of embedding a certificate in the values.yaml, you published an OIDC discovery document, you could rotate the key regularly without explicitly telling OpenUnison; as keys are rotated, your proxies will pick them up automatically. If this sounds familiar, that's because it's precisely how the OpenID Connect functionality in Kubernetes works! You'll also need to lock down your CI/CD system, which is well beyond the scope of this blog post, but stay tuned; we will definitely talk about it in the near future!

Getting Started

I hope you've enjoyed this blog! Everything described in this post is open source and available on GitHub. Please don't hesitate to reach out by opening issues on GitHub! If you're interested in a supported version with an SLA, reach out to our sales team for details!
