TL;DR
- Updated dependencies and Go version
- Added better logs for integrating with SIEM
- Added kubectl --as support
- Moved from Alpine to Ubuntu
- Automated container rebuilds
We're thrilled to announce the release of our updates to Jetstack's kube-oidc-proxy. This great tool hasn't seen major updates in over a year and hasn't had a release since April of 2020. We wanted to contribute some much-needed updates, which we'll detail in this post, and add a few new features. First, let's review what kube-oidc-proxy is and why you should use it.
Kubernetes and Authentication Proxies
Kubernetes offers multiple ways to authenticate users to the API server. The best option, when available, is OpenID Connect (OIDC). We've talked about why you shouldn't use certificates for Kubernetes authentication, but most cloud providers won't let you configure the API server flags needed to integrate managed clusters with an OIDC identity provider. Amazon's EKS added this functionality, but it has limitations that make it less useful. The solution for using OIDC to authenticate to managed clusters is an authenticating reverse proxy that leverages Kubernetes' built-in impersonation capabilities. Using these capabilities, a proxy can authenticate a user, then tell the API server who that user is and which groups they're a member of. How the user authenticates to the proxy doesn't matter to the API server; that's up to the proxy. Since kubectl, and most other clients, know how to use OpenID Connect, the proxy can authenticate users with OIDC and then send the appropriate impersonation headers to the API server.
When I authenticate to OpenUnison, kubectl gets an id_token that will look like:
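An id_token is a signed JWT: three base64url-encoded sections separated by dots. The token below is a truncated, made-up example rather than a real credential:

```
eyJhbGciOiJSUzI1NiIsImtpZCI6IjFiM2E0In0.eyJpc3MiOiJodHRwczovL2s4c291Lm...ifQ.SflKxwRJSMeKKF2QT4fwpM...
```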
This is what is sent to the authenticating proxy in the Authorization header. Dropping this into a tool like jwt.io (NOTE: never do this with production credentials!), I get some JSON that represents my user to my cluster:
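The exact claims depend on how your identity provider is configured; the names and values below are made up for illustration, but the shape is typical:

```json
{
  "iss": "https://k8sou.apps.example.com/auth/idp/k8sIdp",
  "aud": "kubernetes",
  "exp": 1639430400,
  "sub": "mmosley",
  "name": "Matt Mosley",
  "preferred_username": "mmosley",
  "email": "mmosley@example.com",
  "groups": [
    "k8s-cluster-admins",
    "developers"
  ]
}
```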
On each request, the authenticating reverse proxy will validate this id_token, then create HTTP headers for impersonation. For our example JWT, it will generate the below headers:
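These are the standard Kubernetes impersonation headers; the values come straight from the claims in the example token above:

```
Impersonate-User: mmosley
Impersonate-Group: k8s-cluster-admins
Impersonate-Group: developers
```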
The API server will see these headers, check to make sure the request is being made by a user with permissions to impersonate the requested attributes, and run the request as the impersonated user.
The user interacts with the reverse proxy as if it were the API server. This lets you control how users access clusters without needing direct control over the API server's flags. OpenUnison has its own integrated reverse proxy; it's what we use to integrate the Kubernetes Dashboard, for example. When we wanted to support the impersonation model, we initially launched using our own integrated proxy. We quickly ran into an important limitation. The kubectl command, and the client-go SDK that most Kubernetes clients are built on, use an old protocol called SPDY that isn't supported by OpenUnison, or by most other modern network infrastructure, anymore. This severely limited our support for common commands like kubectl exec. Next we'll explore SPDY and its impact on Kubernetes.
SPDY's Continued Use in Kubernetes
SPDY is a protocol invented by Google to replace HTTP. Where HTTP is a fairly simple message-based protocol, where a client asks "Can you give me X?" and the server responds "OK, here's X", SPDY was designed to support richer web applications like Gmail that need to be in constant communication with the server. It provides many of the features we now associate with binary protocols, like running multiple requests over a single connection. Kubernetes adopted SPDY for any kind of bi-directional communication very early on. This included communication between Kubelets and the API server, as well as kubectl's interactive commands like exec, cp, etc. SPDY later went through a standards process and became HTTP/2, which has seen wide adoption. SPDY itself was deprecated and removed from Chrome by Google in 2015, and other browsers have followed suit. Kubernetes, unfortunately, did not keep up. With SPDY losing support in both clients and servers, Kubernetes added support for another protocol for bi-directional communications: WebSockets. WebSockets are defined differently from SPDY internally, since they solve a different problem, but the overlap was enough to solve the SPDY issue for SDKs written in Python, Java, and other languages that didn't support SPDY. While Kubernetes has supported WebSockets for years, the client-go SDK continues to rely on SPDY. There have been multiple efforts to move the SDK to HTTP/2 and WebSockets, but those efforts have never had enough support to make progress.
This became an issue for OpenUnison because, even though we support WebSockets, the most widely used client tools couldn't use them. You could log in to a shell through the dashboard, but you couldn't use kubectl exec, and that became a problem for our users. The difficulty with supporting SPDY is that Undertow, the web server we run on, understandably removed support for it. So did Jetty and Tomcat. NGINX still supports it, but Envoy doesn't. The only tools that still support SPDY are built for Kubernetes in Go. This is where kube-oidc-proxy comes in. It's a small, lightweight proxy that did almost everything we needed AND supported SPDY! We integrated it with OpenUnison last year and it became a popular integration. We decided we needed to become more than just consumers of the technology. Our next step was to look into contributing and making sure we were ready to support our customers using kube-oidc-proxy.
Needed Updates to kube-oidc-proxy
Throughout 2021, we saw that no major updates to the source had been made and there were no releases either. We pride ourselves on keeping our software up to date. As the Log4j vulnerabilities have shown us, we need to stay vigilant and make sure we're updating our containers regularly. Having a container that hadn't been updated in almost two years wasn't OK for us. We also wanted to make sure that users could use kubectl the way they needed to, so we wanted to add support for kubectl --as to aid in debugging RBAC policies. Finally, we wanted to make it easier to integrate with common Security Information and Event Management (SIEM) tools, which are mainly log driven. At Tremolo Security, we live open source. We wanted to contribute to this great project and be confident that we could own any issues our customers and users encounter. We'll go through each of these areas and how we addressed them.
Updating Dependencies
The kube-oidc-proxy project hadn't had a major update in a couple of years. Thankfully, the "if it ain't broke, don't fix it" attitude of the past is no longer acceptable, so the first thing we did was bump Go to 1.17 and all the immediate dependencies to their latest versions. Jetstack did a great job of building an automated test suite on top of Kubernetes in Docker (KinD), but it hadn't been updated either. Our next job was to update the test suite so it would run. Once we got the test suite working with the latest KinD, we were able to test all the version updates. Thankfully, with all the version bumps, the only code changes needed were pretty minimal. As we do with OpenUnison and MyVirtualDirectory, we ran the updated codebase through Snyk.io to see if there were any vulnerabilities in our secondary dependencies that needed to be updated. Fortunately, there weren't any. With our dependencies updated, we next wanted to tackle how to publish kube-oidc-proxy.
Switching from Alpine to Ubuntu
With our dependencies updated, we wanted to figure out how to publish the new container. Our goal was always to submit our changes back to Jetstack, but it would be unreasonable to expect them to just hit the "Approve" button on the pull request! With our other containers, we use Anchore's scanning technology to check for updates to known CVEs and republish. The problem was that kube-oidc-proxy used Alpine Linux, which doesn't really do in-place updates and patches; you need to move to the next release of Alpine to get fixes. There are other issues with Alpine, namely that it uses a different DNS library than Ubuntu and RHEL, which can lead to unexpected results. Finally, since all of our other containers are released on Ubuntu, we wanted to be consistent. Even though the container size went from about fifteen megabytes to forty-five megabytes, we thought it was the best long-term move. Just as with our other containers, we check daily for patched CVEs and republish accordingly. Now that we had a strong foundation for continued updates, we next turned to adding new features.
Supporting kubectl --as
The kubectl command has added several options to make debugging RBAC easier. Among them are the --as and --as-group flags, which rely on impersonation. There's a security issue here though, because the reverse proxy is entirely responsible for authenticating the user. If the proxy weren't careful, it could allow users to escalate their privileges. This is why the previous version of kube-oidc-proxy would error out when you tried to use kubectl --as. We wanted to make this feature work. Now, when you use kubectl --as, the proxy submits SubjectAccessReviews to verify with the API server that the authenticated user is allowed to impersonate the requested user (and groups), as sketched below. This way the API server is still the authoritative source for authorizing the impersonation request. In addition to sending the requested impersonation from the user, the proxy also sends the original user as additional attributes that are stored in the request's UserInfo object and appear in the API server's audit logs. This way every transaction that includes an impersonation can be traced back to the original user. We also added logging that makes it easier for SIEMs to track each request, which we'll detail next.
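For example, a user debugging an RBAC policy might run something like kubectl get pods --as=jsmith --as-group=developers. Before forwarding that request, the proxy can ask the API server whether the authenticated user may impersonate each requested attribute by submitting SubjectAccessReviews roughly like the one below. The user and group names here are made up, and a separate review covers each requested group (using resource "groups"):

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  # the user the proxy authenticated via OIDC (illustrative name)
  user: mmosley
  groups:
  - k8s-cluster-admins
  # asking: may this user impersonate the user "jsmith"?
  resourceAttributes:
    verb: impersonate
    resource: users
    name: jsmith
```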
Simpler Logging Support
The original kube-oidc-proxy didn't log much. It would generate audit logs the same way the API server would. Most of our customers rely on logging systems to track transactions and wanted to be able to see the proxy generating its own logs. We added simple logs that track the URL, the user, the IP of the request, and whether there was an additional impersonation event. This also makes debugging much easier, as there is a trail of requests to follow. When the proxy gets a request that succeeds, the logs will now show something like:
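The line below is an illustrative reconstruction based on the field descriptions that follow, not verbatim output; the timestamp, IPs, URI, and user are all made up:

```
[2021-12-07T14:32:11Z] AuSuccess src:[10.42.0.14 / 192.168.2.40] URI:/api/v1/namespaces/default/pods inbound:[mmosley / k8s-cluster-admins,developers / ] outbound:[jsmith / developers]
```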
First, there's a date and time stamp. Next is an indicator of whether authentication of the inbound token was successful. After the authentication result indicator comes the source of the request. Assuming the request comes through an Ingress controller, both the immediate source IP of the request and the source IP from the X-Forwarded-For HTTP header are included. After the source, the requested URI is included, and finally the user from the inbound token. The first part of the inbound user is the user's login, then groups, and finally extra info if provided. If the inbound request includes impersonation headers from kubectl --as, then an outbound section is added with the user information that will be sent to the API server.
Similar to a successful request, a failed request will indicate there was an issue:
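Again as an illustration rather than verbatim output, a request with a bad or expired token might be logged along these lines:

```
[2021-12-07T14:35:02Z] AuFail src:[10.42.0.14 / 192.168.2.40] URI:/api/v1/namespaces/default/pods
```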
This way you can look for failed requests in your SIEM for better anomaly tracking. There's one last update to cover for improved security.
TokenRequest API Support
The ability of kube-oidc-proxy to impersonate an authenticated user relies on the service account it runs as being authorized by the API server, via RBAC, to impersonate other users. That makes it a privileged account that could easily be abused. If someone were to compromise the token used by the ServiceAccount, they could impersonate any user they wished! The good news is Kubernetes includes a feature to mitigate this risk: the TokenRequest API. Instead of generating a token that can be used forever, the TokenRequest API generates a relatively short-lived token. The new version of the client-go SDK detects these tokens and renews them as needed. This way, a leaked token that has expired is useless, and if the token comes from a container that is no longer running, the API server will reject it. There's no additional configuration needed on clusters running 1.20+.
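For reference, on recent clusters the kubelet obtains these bound tokens automatically through a projected volume in the pod spec. Trimmed for brevity and using typical default values, it looks roughly like this:

```yaml
volumes:
- name: kube-api-access
  projected:
    sources:
    - serviceAccountToken:
        # the kubelet calls the TokenRequest API for a token that expires
        # and is bound to this specific pod
        expirationSeconds: 3607
        path: token
    - configMap:
        name: kube-root-ca.crt
        items:
        - key: ca.crt
          path: ca.crt
```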
Ongoing Support and Development
Tremolo Security is committed to continuing the development of kube-oidc-proxy. While we'll continue to maintain our own fork, all changes will be submitted upstream to Jetstack. We hope they'll all be accepted, though we understand if they're not. We'll also stay on the lookout for changes to Jetstack's upstream and pull in updates as appropriate.
Assuming there isn't a critical issue that needs to be addressed between releases, we'll update the libraries and re-release whenever we update OpenUnison and MyVirtualDirectory; we usually make 3-5 releases per year. In between releases, we'll continue to scan the containers we publish for kube-oidc-proxy, and whenever Ubuntu publishes an update for a known CVE, we'll rebuild. This means the container and project will be kept up to date. If there is a critical issue between release cycles, like the Log4j issue, we'll address it as needed.
The project's automated testing relies on KinD and a generic test issuer, and this will continue to be the case. We do our own integration testing with OpenUnison. If you are using an identity provider other than OpenUnison, we'll do our best to reproduce any issues with OpenUnison. We'll respond to issues on GitHub that aren't related to OpenUnison's integration as best we can, but realize our primary focus will be on OpenUnison integration rather than integration and deployment with other identity providers.
Finally, if you're a commercial customer of Tremolo Security, our container for kube-oidc-proxy is covered in your support contract! We are not selling support for kube-oidc-proxy on its own at this time. Since OpenUnison deploys kube-oidc-proxy pre-integrated, with automatic certificate management, NetworkPolicy integration, and support for the dashboard and other cluster management applications, why roll your own deployment? If you're interested in a commercial support contract, take a look at our pricing!
Getting Started with kube-oidc-proxy
All new deployments of OpenUnison will use our image for kube-oidc-proxy (docker.io/tremolosecurity/kube-oidc-proxy:latest). If you've already deployed OpenUnison and want to use our new image (which we would certainly recommend), update your helm chart's values.yaml:
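A minimal sketch of the relevant section, assuming the impersonation keys used by our orchestra chart (the exact key names may differ depending on your chart version):

```yaml
impersonation:
  # have OpenUnison deploy kube-oidc-proxy for impersonation
  use_jetstack: true
  # assumed key name; points the chart at Tremolo's build of the image
  jetstack_oidc_proxy_image: docker.io/tremolosecurity/kube-oidc-proxy:latest
```

Then roll out the change with a standard helm upgrade of your OpenUnison deployment.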
Moving forward, you'll get all the great benefits of Tremolo Security's build of kube-oidc-proxy!