Congratulations, you have just finished the first shippable version of your software product. You created container images for your software and want to make deploying to Kubernetes as simple as possible. You could provide plain YAML files along with your documentation, but most users seem to gravitate toward Helm Charts (68% as of a 2018 CNCF survey). But what about the up-and-coming Kubernetes Operators?
In this article, I want to briefly introduce you to Kubernetes Operators and show you a way to create your own operators out of existing Helm Charts with low effort.
TL;DR? Jump right to the summary.
Controversy regarding Tiller
Tiller is a component of Helm that has to run inside the Kubernetes cluster for Helm to work. It is responsible for actually installing the charts and keeping a history of installed versions and configuration. For these kinds of operations, Tiller needs to run using a service account with quite extensive permissions. This in itself could be a problem, but you can minimize risks by only granting permissions that are absolutely necessary and even installing multiple Tiller instances – each one only managing a single namespace.
A more serious issue is that Tiller itself does not provide any granular access control. One can only have complete access to Tiller or none. This has led to many discussions and some people even discouraging the use of Helm in general.
To address this issue, Tiller will be removed in the upcoming Helm 3 release. For now, however, only an early alpha version is available. Another way to avoid Tiller is to use Helm for templating only and apply the resulting YAML files with other tooling, but then you completely lose Helm’s release management capabilities (deployment history, rollbacks, etc.).
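For illustration, such a templating-only workflow might look like the following sketch (assuming Helm 2 syntax and the example chart in ./chart/ that is used later in this article):

```shell
# Render the chart locally -- no Tiller involved -- and pipe the
# resulting manifests straight into kubectl. Note that Helm's
# release management (history, rollbacks) is lost this way.
helm template ./chart/ --name examplesrv \
  --set "response=Response from Helm" \
  | kubectl apply -f -
```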
Kubernetes Operators offer another way to install and manage custom software solutions in a Kubernetes cluster. They consist of one or more Custom Resource Definitions (CRDs) and a controller running inside the cluster.
The custom resources (1) represent instances of the software to be installed on a very high level: For example, the “Elasticsearch” resource of Elastic’s newly announced operator allows you to describe an entire Elasticsearch cluster with only a few high-level configuration parameters. The controller (2) takes care of monitoring these custom resources and deploying or modifying the actual workloads in the cluster to match the desired configuration.
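To give a feel for how compact such a custom resource can be, here is a hypothetical sketch of an Elasticsearch cluster description; the field names and API version are illustrative, not the operator’s exact schema:

```yaml
# A whole three-node Elasticsearch cluster, described by only a
# few high-level fields -- the controller derives all low-level
# workloads (StatefulSets, Services, etc.) from this resource.
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
  name: my-cluster
spec:
  version: "7.1.0"
  nodes:
    - nodeCount: 3
```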
Kubernetes Operators were introduced by CoreOS in 2016 and gathered momentum after the release of the Operator Framework in 2018 and the platform OperatorHub in 2019. The number of existing operators is growing constantly as more and more organizations release operators for their software.
Each operator is designed to handle a single software product or a group of related products. You can write your own operators in Go, describe them using Ansible, or create them based on existing Helm Charts. In any case, the Operator SDK takes care of bootstrapping, building and packaging the operator. While Go-based operators are clearly the most versatile, Helm-based ones may come in handy if you have already created charts for your software.
A major advantage of Helm-based operators compared to traditional Helm installations is that operators do not require Tiller to be present in the cluster. The included charts can be installed by the operator’s controller on its own. This way, the user of your operator does not even notice that Helm Charts are used under the hood.
The permissions of the controller’s service account can be more strict since the controller only manages installations of a single type of software, not all installations – as Tiller does. You can use Kubernetes’ built-in access control to restrict who is able to install software using the operator by granting and denying rights for administering custom resources. Thus, the security concerns mentioned earlier regarding Tiller do not apply to Kubernetes Operators.
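As a sketch of this idea, a namespaced Role could grant a group of users nothing but the right to manage our custom resources (all names here are illustrative):

```yaml
# Users bound to this Role may create and modify instances of
# the software via the custom resource, but nothing else --
# the operator's controller handles the actual workloads.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: exampleservice-editor
rules:
  - apiGroups: ["example.org"]
    resources: ["exampleservices"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
```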
Building an example Helm operator
Even though the official tutorials are excellent, I want to show you the required steps to create a custom Helm-based operator. You can find the entire code on GitHub.
As our starting point, we have a simple web application that returns a fixed, configurable response when called. There is also a Dockerfile for creating a container image of this application. We have a Helm Chart in place that allows us to set the response during the chart installation. If we were to use Helm for the installation, we could run and call the application:
```shell
# Install the chart and wait for pods to start
helm install ./chart/ --name examplesrv \
  --set "response=Response from Helm" --wait

# Open a local port to access the deployed service
kubectl port-forward svc/examplesrv 8080:8080 &

# Perform a request (returns "Response from Helm")
curl "http://localhost:8080/"

# Stop the port-forwarding job and uninstall the chart
kill %1 && helm delete --purge examplesrv
```
However, we want to create an operator based on this chart. With the Operator SDK installed, we can bootstrap the operator by providing the name of the operator, the name of our custom resource definition, its API version and the Helm Chart it should be based on. If you have cloned my repo, delete the pre-existing operator folder first:
```shell
operator-sdk new example-operator \
  --kind=ExampleService \
  --api-version=example.org/v1alpha1 \
  --type=helm --helm-chart ./chart/
```
This will create a folder for the newly created operator (in our case example-operator) and place a copy of the chart into it. Afterwards, we can build the container image of the operator’s controller by running the following in the operator’s folder:
```shell
operator-sdk build helm-operator-example:latest
```
The image is now available locally and ready to be pushed to a container registry. I have already done that by publishing it on Docker Hub as romansey/helm-operator-example. Let’s move on to installing and using our newly built operator.
During bootstrapping of the operator, the SDK has also created example YAML files in the deploy folder for installing the operator in Kubernetes. Before using these YAML files, we need to modify them according to our needs:
- Set the operator image in the file operator.yaml to romansey/helm-operator-example, so that our previously built image will be pulled.
- In the current version, the SDK misses adding access to pods in the role definition. We need to add access to the “pods” resource in the file role.yaml.
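The added rule might look like the following sketch (the exact layout of the generated role may differ, and the verb list is an assumption on my part):

```yaml
# Additional rule for role.yaml, granting the operator's
# controller read access to pods; append it to the existing
# "rules" list of the generated role.
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```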
Now, to install the operator, run the following from the deploy folder:
```shell
# Install the custom resource definition
kubectl apply -f crds/example_v1alpha1_exampleservice_crd.yaml

# Create a service account, role and role binding for the operator
kubectl apply -f service_account.yaml
kubectl apply -f role.yaml
kubectl apply -f role_binding.yaml

# Create the operator deployment
kubectl apply -f operator.yaml
```
Once the operator is up and running, it will start looking for our newly defined “ExampleService” resources – so let’s create one. Create a new file my_example_service.yaml with the following content:
```yaml
apiVersion: example.org/v1alpha1
kind: ExampleService
metadata:
  name: my-example-service
spec:
  response: "Response from custom resource"
  service:
    name: examplesrv
```
In the spec portion of the resource, we can set any configuration value that the underlying Helm Chart accepts. Thus, the user only needs to install the operator and can then describe their desired instances of our software on a high level. Sadly, there is currently no way to influence the exact release name that will be used to install the chart. So in order to get a predictable name for our service resource to access the application, we have to set it manually via a chart value.
```shell
# Apply the newly created file
kubectl apply -f my_example_service.yaml

# Watch the operator spin up a pod with our application
kubectl get pods -w

# Once ready, open a port to test the service
kubectl port-forward svc/examplesrv 8080:8080 &

# Perform a request (returns "Response from custom resource")
curl "http://localhost:8080/"

# Stop port forwarding
kill %1
```
We can also change the response in the YAML file and re-apply it. In this case, the operator will notice the change and adjust the deployment as necessary (just like helm upgrade would). Let’s now clean up our cluster again:
```shell
# Remove the instance of our application
kubectl delete -f my_example_service.yaml

# Watch the operator remove the pod of our application
kubectl get pods -w

# Remove the entire operator installation
kubectl delete -f operator.yaml
kubectl delete -f role_binding.yaml
kubectl delete -f role.yaml
kubectl delete -f service_account.yaml
kubectl delete -f crds/example_v1alpha1_exampleservice_crd.yaml
```
That’s it! We have gone through the process of building an operator based on an existing Helm Chart. We have also installed the newly built operator in our cluster and deployed an instance of our application.
One final note: In our example, we have installed the operator in the scope of a namespace. This means that the operator only watches custom resources in the same namespace it was installed in. If you want to use an operator in a cluster-wide scope, you can read the official instructions for that.
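As a rough sketch of what that entails: the Operator SDK conventionally passes the watched namespace to the controller via a WATCH_NAMESPACE environment variable in operator.yaml, and an empty value typically means “watch all namespaces” (a ClusterRole and ClusterRoleBinding are then needed instead of the namespaced variants). The snippet below is illustrative, not the exact generated manifest:

```yaml
# Excerpt from the controller container spec in operator.yaml:
# an empty WATCH_NAMESPACE makes the operator watch custom
# resources cluster-wide instead of a single namespace.
env:
  - name: WATCH_NAMESPACE
    value: ""
```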
Summary
- Kubernetes Operators are a way to simplify your software’s installation in a Kubernetes cluster by providing high-level CRDs for the end user.
- Custom operators can be written in Go, described with Ansible, or created from existing Helm Charts.
- The Operator SDK helps create custom operators.
- Helm-based operators do not require Tiller to be installed in the cluster and avoid security concerns raised against Tiller.