Cloud Native Buildpacks in GitLab CI without Docker & pack CLI

24.10.2021 | 9 minutes of reading time

You may have heard about all the benefits of Cloud Native Buildpacks?! But Paketo’s pack CLI depends on Docker. So what about our Kubernetes-based CI systems, where we might not have a Docker daemon available?

Cloud Native Buildpacks – blog series

Part 1: Goodbye Dockerfile: Cloud Native Buildpacks with Paketo & layered jars for Spring Boot
Part 2: Cloud Native Buildpacks in GitLab CI without Docker & pack CLI

I was recently thrown into a customer project in which everything is based on GitLab CI. It's a great tool, although I currently tend to favour GitHub Actions because of their great reusable Pipeline-as-Code building blocks. But I was keen to work with the latest GitLab CI version to see how it has progressed since the last time I used it intensively. And really the first thing we wanted to do in GitLab was to use Cloud Native Buildpacks to build our container images! Remember that the Dockerfile is dead – so today, we shouldn't need to write and maintain them anymore.

Well, hasn't GitLab CI introduced this cool Auto DevOps feature? And there's already a Buildpacks integration ready! As stated in the docs, our .gitlab-ci.yml only needs to look like this:

include:
  - template: Auto-DevOps.gitlab-ci.yml

But soon we looked stupidly at this build log:

Building Cloud Native Buildpack-based application with builder heroku/buildpacks:18...
ERROR: failed to build: failed to fetch builder image '': Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?

What was this all about?

GitLab CI with unprivileged Kubernetes Runners – and no Docker socket!

GitLab was set up by the great folks over at codecentric cloud, where our customers simply rent it as a managed service. In this state-of-the-art setup, GitLab uses Amazon EKS to run its builds on Kubernetes-based Runners. Quite early in the process, Daniel from codecentric cloud made one thing clear: in the default setup we wouldn't be able to access the Docker daemon socket inside our containers running inside the Kubernetes cluster. And we also wouldn't have any containers in privileged mode (aka Docker in Docker, or docker:dind). And rightly so, considering all the security risks that would otherwise be introduced.

As this could be seen as a common setup, it shouldn't be a big deal to use Cloud Native Buildpacks in this scenario, right?! So let's lift our gaze from the Auto DevOps feature and get our hands dirty: let's write our own GitLab CI YAML files! Commonly, the basic ingredient of using Cloud Native Buildpacks is the pack CLI. Here's an example command, which is usually everything you need in order to build and publish an OCI container image:

pack build microservice-api-spring-boot:latest \
    --builder paketobuildpacks/builder:base \
    --path .

Installing pack CLI inside our .gitlab-ci.yml and simply using it to build our container images plunged us into an already known trap:

$ pack build $CI_REGISTRY_IMAGE:latest --builder paketobuildpacks/builder:base --path .
ERROR: failed to build: failed to fetch builder image '': Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

These errors finally reminded me of my colleague Raphael. Right after I had written about all those merits of Cloud Native Buildpacks, he tore me out of my reveries:

“Hey Jonas, that’s really cool stuff! I love it. But do we really need to have Docker installed in order to use Paketo with its nice and shiny pack CLI? What about my CI systems? I love Kubernetes and don’t want to introduce all those security risks  to my pipelines…”

But stepping back and refraining from using Cloud Native Buildpacks wasn't an option for us! I also thought about using kaniko, as stated in the GitLab CI docs. But kaniko needs Dockerfiles – and not needing to write and maintain them anymore is the whole point here…

Directly using Paketo’s lifecycle instead of Docker

But finally it came to me: I had already read about this issue with pack CLI. I came across it while writing the first article of this blog series. The initial question nails it:

There are many users who are starting to not have Docker installed on their systems because there are other alternatives that let them create containers in a secure way as they typically run these containers on remote systems (e.g. kubernetes clusters). […] Pack, although not depending on docker build […] does require Docker to be running on your container.

The big discussion that followed contained only one comment with a hint on how to deal with the issue:

If you’re looking to build images in CI (not locally), I’d encourage you to use the lifecycle directly for that, so that you don’t need Docker. Here’s an example

The comment refers to the Tekton implementation of how to use buildpacks in a Kubernetes environment. And that brought us onto the right path. Here we get a first clue about what is meant by "using the lifecycle directly": the crucial point is the usage of the command ["/cnb/lifecycle/creator"]. We finally found the lifecycle mentioned above. There's also good documentation about this command, which can be found in this Cloud Native Buildpacks RFC.
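Reduced to its core, the Tekton approach essentially wraps the lifecycle binary in an ordinary container step. The following is only a sketch of that idea, not the exact Tekton task (the parameter name is an assumption for illustration):

```yaml
# Sketch of a Tekton-style step that runs the lifecycle directly –
# no Docker daemon involved (names and params are illustrative)
steps:
  - name: create
    image: paketobuildpacks/builder      # must include the lifecycle
    command: ["/cnb/lifecycle/creator"]
    args:
      - "-app=."
      - "$(params.APP_IMAGE)"            # the target image name
```

The key takeaway: any container that ships the lifecycle binaries can build an image, no matter which CI system schedules it.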

A good base image: paketobuildpacks/builder:base

So how can we get closer to a working .gitlab-ci.yml?
It's always a good idea to start as simple as possible. Digging into the Tekton implementation, you'll see that the lifecycle command is executed inside an environment defined by BUILDER_IMAGE. This variable is defined as "The image on which builds will run (must include lifecycle and compatible buildpacks)".

Maybe that sounds familiar?!
Can't we simply pick the builder image paketobuildpacks/builder:base from our pack CLI command? Let's try this locally on our workstation before committing too much noise into our GitLab. Before running the build, we need to choose an application project we want to turn into a container image. I created an example Spring Boot app which you can clone – or use your own app, if you'd like. Having the application project available locally, we can simply spin up a container based on the Paketo builder image paketobuildpacks/builder:

docker run --rm -it -v "$PWD":/usr/src/app -w /usr/src/app paketobuildpacks/builder bash

With this command we mount the application’s source code into the container in order to have it available for the Paketo lifecycle. Now, inside the container we can try to run the Paketo lifecycle directly with:

/cnb/lifecycle/creator -app=. microservice-api-spring-boot:latest

Of the creator command's many possible parameters, I only used -app. That should be okay in many cases, since most of them have quite good defaults. I configured the app path because the default /workspace doesn't hold our application source – instead, it's the current directory. We also need to define an image name at the end, which will be used as the resulting container image's name.

The execution of the lifecycle should trigger a normal Paketo build using Cloud Native Buildpacks! But this time we did not need pack CLI 🙂

The first .gitlab-ci.yml

Now that we know how to run the Paketo lifecycle directly, we can craft our .gitlab-ci.yml. Using some GitLab CI predefined variables, the command could look like this:

/cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest

Choosing the wrong image name could lead to a failing analyzer step in the Buildpacks process – and no image gets pushed to the GitLab Container Registry!
So let's craft our first .gitlab-ci.yml like this:

image: paketobuildpacks/builder

stages:
  - build

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest

Committing this to our repository in GitLab should already start a GitLab CI build using Buildpacks! Yay! 🙂
The only problem is that it might run into an error like this:

ERROR: failed to get previous image: connect to repo store "": GET DENIED: access forbidden
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1

docker login without Docker

Using the builder image paketobuildpacks/builder:base, we don't have the docker command available inside our Kubernetes Runners. So we can't log in to the GitLab Container Registry as described in the official docs. That's why we run into the DENIED: access forbidden error: our pipeline has no access to the GitLab Container Registry.

Luckily, my colleague Marco gave me the hint that there's the approach described in this stackoverflow answer. Instead of using docker login, we can create the Docker configuration file ~/.docker/config.json ourselves. So let's create it containing our GitLab Container Registry's login information. This way the Paketo build will pick it up, as stated in the docs:

If CNB_REGISTRY_AUTH is unset and a docker config.json file is present, the lifecycle SHOULD use the contents of this file to authenticate with any matching registry.
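As an aside, the quote also mentions CNB_REGISTRY_AUTH: instead of a config.json, the lifecycle can read credentials from that environment variable, which maps a registry host to an Authorization header value. A small sketch with placeholder values (the concrete header value is an assumption here – we'll stick with the config.json approach below):

```shell
# CNB_REGISTRY_AUTH maps a registry host to an Authorization header value.
# The values below are placeholders, not real credentials.
export CNB_REGISTRY_AUTH='{"registry.gitlab.com": "Basic Z2l0bGFiLWNpLXRva2VuOmR1bW15"}'

# Sanity-check that the variable holds valid JSON before the lifecycle reads it
echo "$CNB_REGISTRY_AUTH" | python3 -c "import json,sys; d=json.load(sys.stdin); assert 'registry.gitlab.com' in d; print('valid')"
```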

Inside our .gitlab-ci.yml creating a correct ~/.docker/config.json could look like this:

before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

Again, be sure to use the handy GitLab CI predefined variables: $CI_REGISTRY (the GitLab Container Registry URL) as well as $CI_REGISTRY_USER and $CI_JOB_TOKEN, which contain the credentials.
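If you want to see what those two lines actually produce, you can simulate them locally with placeholder values (in a real pipeline, the CI_* variables are of course set by GitLab):

```shell
# Simulate the before_script locally; in GitLab CI these are predefined variables
CI_REGISTRY="registry.gitlab.com"
CI_REGISTRY_USER="gitlab-ci-token"
CI_JOB_TOKEN="dummy-token"

DOCKER_DIR="$(mktemp -d)"   # stand-in for ~/.docker
echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> "$DOCKER_DIR/config.json"

# The result must be valid JSON with our registry as a key under "auths"
python3 -c "import json,sys; cfg=json.load(open(sys.argv[1])); assert 'registry.gitlab.com' in cfg['auths']; print('ok')" "$DOCKER_DIR/config.json"
```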

Our final .gitlab-ci.yml

So bringing it all together, our .gitlab-ci.yml finally looks like this:

image: paketobuildpacks/builder

stages:
  - build

# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in the stackoverflow answer mentioned above
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest

Here's the example project's fully working .gitlab-ci.yml.
Our builds should now run successfully, using Paketo/Buildpacks without pack CLI and Docker:

The example project also has the full build log available here, if you're interested.

Great to see Cloud Native Buildpacks working in GitLab CI without Docker & pack CLI!

I'm absolutely glad to see my favourite container build solution working together with a state-of-the-art CI/CD setup like GitLab featuring Kubernetes runners! Being able to use Cloud Native Buildpacks without the Docker daemon or pack CLI enables a much broader usage of this absolutely great technology. It's no problem to stick with pack CLI locally if you have Docker installed – but now you know what it means to use the lifecycle directly! Besides Tekton, there's another great CI/CD tool using this trick: kpack.
Hopefully, I will find the time to write about it in the near future 🙂
