Kubernetes does not have its own user management and relies on external providers like Keycloak. This blog post will describe how to configure Kubernetes to use Keycloak as an authentication provider.
We are running Kubernetes clusters based on OpenStack Magnum. Magnum provides an integrated Keystone authentication provider, but since we didn't want to add every Kubernetes user to our OpenStack, we decided to connect our Keycloak server directly to Kubernetes.
The Kubernetes documentation explains how to integrate an OpenID Connect provider. But that's just one part.
Kubernetes & Keycloak – Configuration
First, you have to get the API server's OIDC configuration from the documentation above right. We had to change the username claim to preferred_username, because in our setup the default claim, sub, is a UUID, which isn't very handy for login. preferred_username is the combination firstname.lastname, which is much more user-friendly.
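The relevant kube-apiserver flags then look roughly like this; the issuer URL, realm, and client ID are placeholders you have to replace with the values of your own Keycloak installation:

```shell
# Sketch of the OIDC-related kube-apiserver flags; hostname, realm and
# client ID are hypothetical, only the claim names matter here.
kube-apiserver \
  --oidc-issuer-url=https://keycloak.example.com/auth/realms/example \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=preferred_username \
  --oidc-groups-claim=groups
```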
The next step is to get the authorization configuration right. Blog posts and documentation found on the internet are not one-size-fits-all here. We are running Kubernetes 1.13 and Keycloak 4.8 and had to configure the user role binding in Kubernetes like this:
```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
subjects:
  - kind: User
    # Without --oidc-username-prefix, the API server sees user names in the
    # form <issuer-url>#<username>
    name: "#"
    namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
```
For groups, the mapping looks like this:
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    # Keycloak group names start with a slash, i.e. /<group-name>
    name: /
```
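To check that such a binding actually grants what you expect, kubectl's impersonation support is handy. The issuer-prefixed user name and the group below are hypothetical examples:

```shell
# Ask the API server whether a member of the Keycloak group may do anything.
# Both the user name and the group name are placeholders.
kubectl auth can-i '*' '*' \
  --as="https://keycloak.example.com/auth/realms/example#jane.doe" \
  --as-group="/kubernetes-admins"
```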
The next step is to make the kubectl integration easier for your users. Luckily, Hidetake Iwata has already written a blog post on how to do this. It describes the basic OIDC client configuration in Keycloak as well as the login procedure using kubectl with the Kubelogin plugin.
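With the kubectl oidc-login plugin installed, the user-side kubeconfig entry can be set up along these lines; the issuer URL and client ID are again placeholders for your own setup:

```shell
# Register a credential plugin that fetches an OIDC token via kubelogin.
# Issuer URL and client ID are examples.
kubectl config set-credentials keycloak-oidc \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg=--oidc-issuer-url=https://keycloak.example.com/auth/realms/example \
  --exec-arg=--oidc-client-id=kubernetes
```

After that, users select this credential in their kubeconfig context and are sent through the Keycloak login flow on their first kubectl call.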
TLS connection from Kubernetes to Keycloak
We also had a problem with the TLS connection from Kubernetes to Keycloak. Our Keycloak TLS certificate is signed by Sectigo (formerly Comodo), but that CA wasn't included in the default CA bundle used by the Kubernetes API server. We fixed it by overriding the CA bundle file location for the Kubernetes API server:
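One way to do this, assuming a bundle containing the Sectigo/Comodo chain is available on the control-plane node, is the API server's --oidc-ca-file flag:

```shell
# Additional kube-apiserver flag; the path is an example and must point to a
# CA bundle that includes the Sectigo/Comodo chain.
--oidc-ca-file=/etc/kubernetes/certs/ca-bundle.pem
```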
We hope this helps you set up OIDC integration in your Kubernetes installation.