Multiple-stage Kubernetes deployments with GitLab and Kustomize

27.11.2019 | 8 minutes of reading time

This article is a step-by-step guide to a lean CI/CD setup that deploys to multiple Kubernetes clusters. We will use GitLab CI with the GitLab Docker Registry and the Kustomize customization engine.

A containerized, microservice-oriented project typically needs to be deployed to multiple kinds of Kubernetes clusters, such as a local cluster on a developer’s machine, a staging system, and a production system.

Although all those clusters may share the same base application setup, they are likely to vary in terms of environmental factors, such as:

  • the version of certain deployment artifacts
  • authorization, authentication and accounting
  • availability and setup of diagnostics
  • internal and external systems discovery

Furthermore, a deployment strategy can benefit from a high degree of automation and an understandable configuration with as little redundancy and ceremony as possible.

Build and deployment to GitLab Docker Registry

Preparing a project for GitLab CI

For a simple start, we use a direct relationship between the branches in the project’s Git repository and the Docker image tags, so a “master” branch will result in a new image tagged as “master”.
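In practice, branch names can contain characters that are not valid in Docker tags, which is why GitLab also provides the sanitized CI_COMMIT_REF_SLUG variable. The shell sketch below mimics the core of that slug rule (lowercasing and replacing disallowed characters); the real variable additionally truncates to 63 characters and trims leading and trailing dashes:

```shell
# Simplified sketch of GitLab's CI_COMMIT_REF_SLUG:
# lowercase the branch name and replace anything outside [a-z0-9] with '-'
branch="feature/My-New_API"
tag=$(echo "$branch" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g')
echo "$tag"   # feature-my-new-api
```

For simple branch names like “master” or “develop”, name and slug are identical, which is all this setup relies on.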

The .gitlab-ci.yml below tests, builds and deploys a Node project to a GitLab-hosted Docker registry. For further information on GitLab CI, please refer to the official documentation.

variables:
  DOCKER_DRIVER: overlay2
  # Registry and image coordinates, derived from GitLab's predefined variables
  REGISTRY: $CI_REGISTRY
  IMAGE_NAME: $CI_REGISTRY_IMAGE
  # Branch name, sanitized for use as a Docker tag
  IMAGE_TAG: $CI_COMMIT_REF_SLUG

stages:
  - test
  - build
  - build-docker

test:
  image: node:lts-slim
  stage: test
  script:
    - npm ci
    - npm run test

build:
  image: node:lts-slim
  stage: build
  artifacts:
    paths:
      - .
  script:
    - npm ci

build-docker:
  image: docker:latest
  stage: build-docker
  tags:
    - privileged
  only:
    - develop
  dependencies:
    - build
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $REGISTRY
    - docker build --network host -t $IMAGE_NAME:$IMAGE_TAG -t $IMAGE_NAME:latest .
    - docker push $IMAGE_NAME:$IMAGE_TAG
    - docker push $IMAGE_NAME:latest

The project also needs a Dockerfile to be picked up during the Docker build:

FROM node:lts-slim

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "start" ]

During the build-docker step, the new Docker image is pushed to the integrated registry.

Kubernetes cluster setup

Assuming that the cluster setup is complete and the Kubernetes CLI kubectl is able to interact with the cluster, the following actions must be performed in order to deploy the images created in the previous steps:

  • Create a Deploy Token in GitLab to grant Docker registry access to the cluster
  • Create a Cluster Secret for the deploy token
  • Create Kubernetes objects for the application
  • Mention the cluster secret in the Kubernetes deployment object
  • Adapt and apply Kubernetes objects with Kustomize

Setting up a GitLab Deploy Token

By default, external access to the GitLab Docker registry is prohibited for non-authenticated users. A way to grant per-project access to an external Kubernetes cluster is creating a shared secret via the deploy token mechanism in GitLab, which can be reached via Project → Settings → Repository → Deploy Tokens.

For the purpose of permitting an image pull operation, the deploy token needs to be associated with the read_registry scope. After creating a token, GitLab will present the username and the newly generated token. This is the only opportunity to save the token – there is no recovery option, so a lost token needs to be revoked and replaced with a new one.

The token can be registered as a cluster secret on the Kubernetes cluster by using kubectl’s create secret command.

kubectl create secret docker-registry api-service-deployment-token \
  --docker-server=(docker registry from gitlab instance) \
  --docker-username=(content of "Your New Deploy Token, Username") \
  --docker-password=(content of "Use this token as password")

Note: The command above creates the secret in the “default” namespace of the cluster. To use a custom namespace, either add a namespace to your context in your Kubeconfig (~/.kube/config on Unix-based systems) or add the --namespace=(your namespace) argument to each kubectl call.
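To illustrate the Kubeconfig option, the excerpt below sketches a context entry that pins a default namespace; the cluster, user and context names are placeholders:

```yaml
# ~/.kube/config (excerpt) – placeholder names
contexts:
  - name: my-cluster
    context:
      cluster: my-cluster
      user: my-user
      namespace: my-service   # kubectl commands now default to this namespace
```

With such a context active (kubectl config use-context my-cluster), the --namespace argument can be omitted.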

Creating the Kubernetes deployment

Assuming that the service created in the previous step provides an internal API not exposed to the outside world, defining a service and a deployment is sufficient.

api-service
├── deployment.yaml
└── service.yaml

The deployment describes the workload layer in the cluster by providing information about how to obtain and maintain Docker images and containers. The field spec.template.spec.imagePullSecrets declares a reference to the cluster secret “api-service-deployment-token” created during the registration of the GitLab deploy token.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  replicas: 1
  selector:
    matchLabels:
      run: api-service
  template:
    metadata:
      labels:
        run: api-service
    spec:
      containers:
        - name: api-service
          image: (image path in the GitLab registry)
      imagePullSecrets:
        - name: api-service-deployment-token

The service describes how a deployment or a set of deployments is exposed and discovered by other services.

apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    run: api-service

For further details on Kubernetes objects, options and how they discover each other, please refer to the official documentation.

Cluster-specific customization with Kustomize

Even though Kubernetes offers significant flexibility with respect to wiring and discovering services from the start, the options for adapting the resources deployed to a particular cluster are limited. Cluster-specific adaptations, such as changing the deployment set or deploying different versions depending on the cluster’s purpose, used to require either creative folder management that didn’t scale well, additional tooling (such as Helm), or custom postprocessing (e.g. with sed/awk or envsubst).

This gap has been filled by Kustomize, which recently became part of kubectl. Kustomize adds features like cluster-based customization and (multi-)inheritance to Kubernetes resource descriptions, eliminating the need for duplicated cluster configuration.

Kustomize employs the concept of a common base set, multiple overlays which may inherit from the base and each other, resource specifications and transformations.

A resource specification adheres to the following conventions:

  • It is stored in a file named kustomization.yml
  • It can refer to any Kubernetes resource as long as it is stored in a child folder relative to kustomization.yml
  • Referring to a resource in a parent folder requires the target folder to be a resource specification itself (in other words, it provides a kustomization.yml)
  • At present, a resource specification is required to explicitly include every required Kubernetes recipe, no wildcards are supported yet.

Creating a Kustomize resource definition

The configuration folder structure should reflect the way Kustomize works:

├── base
|   ├── api-service
│   └── kustomization.yml
└── overlays
    ├── development
    │   └── kustomization.yml
    ├── production
    │   └── kustomization.yml
    └── testing
        └── kustomization.yml

A Kustomize base folder contains the application’s common resources. Depending on the application and requirements, this could be one big common or a segmented base to allow compositions of smaller aspects.

As our demo application consists of only one service, the base kustomization.yml just contains a few references in the resources section:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-service

resources:
  - ./api-service/deployment.yaml
  - ./api-service/service.yaml
  # other services here...

The overlays folders contain all customizations that are supposed to be applied on the base set.

The example below performs the following actions:

  • Inherit from the base definition
  • Change the image tag to be pulled for api-service from latest to develop
  • Add stage-specific resources
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-service

# Inherit from the base definition
bases:
  - ../../base

# Apply a transformation to replace the image tags (latest -> develop)
images:
  - name: (image path in the GitLab registry)
    newTag: develop

# Add additional resources only applicable on that cluster
resources:
  - services/diagnostics-service.yaml
  - config/sso-config.yml

In case of a set of rules shared between multiple overlays, it is also possible to compose the target state using multiple inheritance:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-service

bases:
  - ../../base/services
  - ../../base/rules/use-develop-images
  - ../../base/diagnostics

More examples of transformations are available in the official Kustomize repository.

To preview and apply the effective resource definitions for a given cluster, the definitions need to be compiled into standard Kubernetes resources. This step can either be performed using the Kustomize CLI itself with kustomize build (folder) or a recent build of kubectl, by calling kubectl apply -k (folder). To perform a dry run with kubectl, use kubectl apply -k (stage) --dry-run -o yaml.

For the configuration used in this example, the resources for the “testing” stage can be applied with kubectl apply -k overlays/testing.

As Kustomize is now part of kubectl, there is no need to add another dependency to the CI pipeline, so it is advisable to use kubectl apply -k instead of the standalone kustomize binary.

After applying the compiled resources, the cluster should start pulling and running the images referenced in the resource specifications.

Kubernetes deployment from GitLab CI

After a successful CI build on a branch or tag relevant for deployment, the artifact should be deployed on the cluster without any additional manual action.

For an automatic deployment, a service account has to be created on the cluster, added to GitLab and referenced by an additional pipeline step.

The definition below sets up a service account for the namespace my-service with administrator privileges:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-service-account
  namespace: my-service
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-service-account-role-binding
  namespace: my-service
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - kind: ServiceAccount
    name: gitlab-service-account
    namespace: my-service

After applying the manifests above with kubectl apply, the token can be found and obtained via kubectl get secret.

kubectl describe sa gitlab-service-account
Name:                gitlab-service-account
Namespace:           my-service
Mountable secrets:   gitlab-service-account-token-08aah
Tokens:              gitlab-service-account-token-08aah

kubectl get secret gitlab-service-account-token-08aah -o yaml
apiVersion: v1
data:
  ca.crt: (Cluster CA certificate here)
  token: (base-64 encoded token here)
kind: Secret

Those credentials have to be registered to GitLab as variables which can be referred from the actual deployment pipeline step:

  • CLUSTER_ADDRESS: Address of the cluster from GitLab CI’s point of view
  • CA_AUTH_DATA: Cluster certificate
  • K8S_TOKEN: Base64-decoded service token
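The token value stored in the secret is base64-encoded, so it must be decoded before it is saved as K8S_TOKEN. A small sketch with a made-up token value; in practice, the encoded value comes from the secret’s data.token field:

```shell
# Made-up encoded token for illustration; obtain the real value with e.g.
#   kubectl get secret gitlab-service-account-token-08aah -o jsonpath='{.data.token}'
ENCODED="bXktc2VydmljZS10b2tlbg=="
K8S_TOKEN=$(echo "$ENCODED" | base64 -d)
echo "$K8S_TOKEN"   # my-service-token
```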

The extended .gitlab-ci.yml file:

stages:
  - test
  - build
  - build-docker
  - deploy

# ... existing content omitted

deploy:
  image:
    name: kubectl:latest
    entrypoint: [""]
  stage: deploy
  tags:
    - privileged
  # Optional: Manual gate
  when: manual
  dependencies:
    - build-docker
  script:
    - kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS"
    - kubectl config set clusters.k8s.certificate-authority-data $CA_AUTH_DATA
    - kubectl config set-credentials gitlab-service-account --token=$K8S_TOKEN
    - kubectl config set-context default --cluster=k8s --user=gitlab-service-account --namespace=my-service
    - kubectl config use-context default
    - kubectl rollout restart $K8S_DEPLOYMENT_NAME

After a successful build, the new image will be deployed to the cluster.
