
Container building with Kaniko

Automating container image builds usually starts by porting docker build directly into a CI pipeline. This command talks to the host's Docker daemon, whose socket is mounted into the pipeline sandbox.

This is problematic mainly because it allows starting arbitrary processes on the host from within the pipeline, possibly without further isolation from their environment. This makes the CI pipeline a security risk. There are several ways around this problem, each with its own advantages and disadvantages:

  • The Docker daemon itself can be run in a container (“Docker in Docker”). The pipeline then talks only to a containerized Docker daemon, which promises more isolation. The main problem here is that the daemon container requires extensive kernel privileges in order to do its job (essentially starting other processes). Such privileged containers, in turn, are themselves considered a security risk, as it would be possible to break out of them.
  • Replacing Docker with Buildah eliminates the need for the Docker daemon altogether. However, the buildah container itself must still run privileged to be able to start processes in the containers being built.
  • Arbitrary variations of the two previous approaches using other tools or configuration (e.g. “rootless Docker in Docker”, Docker with user-namespace remapping, …).

All of these solutions require privileged containers, which are not acceptable in Kubernetes, among other environments. Kaniko offers a solution without this problem. Kaniko can be run in Docker as well as in Kubernetes and provides a containerized environment for image building without requiring extensive kernel capabilities.

The concept behind Kaniko

Since a Kaniko container runs unprivileged, the image build inside the container works slightly differently than in Docker. The main difference is that all phases of the build (building the base filesystem, executing commands in isolation, snapshotting the filesystem) take place in user space instead of kernel space. From the application's point of view, however, this is not noticeable.

The only operational differences stem from the fact that a Kaniko container is intended for one-time use – building a single image. Afterwards, a new Kaniko container should ideally be started so that the next build begins in a clean environment, because Kaniko does not automatically delete the filesystem remnants of previous builds. Alternatively, there is now a way (see below) to “clean up” within a running Kaniko container, allowing multiple images to be built in one container. In good Kubernetes tradition, however, “one task – one pod” remains best practice.


Building and pushing a container image with Kaniko

Unlike Docker, Buildah, etc., the official Kaniko container image is completely minimalist in design. In particular, it does not include a shell, so all preparations for container building (including providing container registry credentials) must take place outside of Kaniko. A typical build flow for pushing to a Docker Registry therefore looks like this:

  • Creating a Docker config.json with credentials. The registry host below is a placeholder; the auth value is the Base64-encoded user:password pair, produced by a shell substitution:

{
  "auths": {
    "registry.example.com": {
      "auth": "$(echo -n $USER:$PASS | base64)"
    }
  }
}
  • Starting a Kaniko container that mounts this config at /kaniko/.docker/config.json. The build context, including the Dockerfile, must also be mounted at /workspace:
docker run -it --rm --name kaniko-build \
  -v "$WORKSPACE":/workspace \
  -v "$PWD"/dockerconfig.json:/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor

The entrypoint executed in the container is /kaniko/executor.
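The first step above – generating the config.json outside of Kaniko – can be sketched as a small shell snippet. The registry host and the credentials are placeholder values, not part of the original article:

```shell
# Placeholder values; in a real pipeline these come from CI secrets.
REGISTRY="registry.example.com"
REGISTRY_USER="user"
REGISTRY_PASS="pass"

# Base64-encode the user:password pair as Docker expects it.
AUTH="$(echo -n "$REGISTRY_USER:$REGISTRY_PASS" | base64)"

# Write the config that will be mounted at /kaniko/.docker/config.json.
cat > dockerconfig.json <<EOF
{
  "auths": {
    "$REGISTRY": {
      "auth": "$AUTH"
    }
  }
}
EOF
cat dockerconfig.json
```

The resulting file is then bind-mounted read-only into the Kaniko container as shown in the docker run command above.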

At the end of the build, Kaniko pushes the container image directly to the registry specified using --destination. Multiple tags for the same image, e.g. the combination commit SHA, commit reference (branch / tag) and “latest”, are possible via multiple use of the --destination flag.
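The multiple-tag case can be sketched like this; IMAGE, COMMIT_SHA and COMMIT_REF are hypothetical pipeline variables, not names from the original article:

```shell
# Placeholder values; a CI system would provide these.
IMAGE="registry.example.com/myapp"
COMMIT_SHA="abc1234"
COMMIT_REF="main"

# One --destination flag per desired tag of the same image.
DESTINATIONS="--destination=$IMAGE:$COMMIT_SHA --destination=$IMAGE:$COMMIT_REF --destination=$IMAGE:latest"

# These flags would be appended to the /kaniko/executor invocation.
echo "$DESTINATIONS"
```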

Building multiple images in the same Kaniko container

After the push, the Kaniko executor process ends and the container stops. The container still contains remnants of the created filesystem; re-running /kaniko/executor can (and will!) produce corrupted images. It is therefore recommended to use a fresh Kaniko container for further builds. If that is not an option, running /kaniko/executor with the --cleanup flag makes it remove its temporary files after the build:

docker run -it --rm --name kaniko-build \
  -v "$WORKSPACE":/workspace \
  -v "$PWD"/dockerconfig.json:/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor --cleanup

This makes the build process take a bit longer, but the container becomes reusable.

To allow multiple calls to /kaniko/executor in sequence (in the same container), a shell is needed. This is not included in the minimal image, but it is in the “debug” tags. With these, a local script can be called in the Kaniko container,

docker run -it --rm --name kaniko-multi-build \
  -v "$WORKSPACE":/workspace \
  -v "$PWD"/dockerconfig.json:/kaniko/.docker/config.json:ro \
  --entrypoint /busybox/sh \
  gcr.io/kaniko-project/executor:debug /workspace/build.sh

which builds several images in succession. Such a script then looks like this, for example:
/kaniko/executor --dockerfile="Dockerfile1" --destination="" --cleanup
/kaniko/executor --dockerfile="Dockerfile2" --destination=""

Different build context folders can also be selected via the --context flag.
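A multi-build script along these lines can be sketched as follows. The service names and registry are hypothetical, and the snippet only echoes the executor calls so it can run outside a Kaniko container; inside a debug-tag container the commands would be executed directly:

```shell
# Compose one executor call per build context; --cleanup keeps the
# container reusable between the individual builds.
for ctx in service-a service-b; do
  echo "/kaniko/executor --context=/workspace/$ctx --destination=registry.example.com/$ctx:latest --cleanup"
done > kaniko-commands.txt
cat kaniko-commands.txt
```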

Best Practices

  • Kaniko does not need to run as a privileged container, but root privileges inside the container (runAsUser: 0) are still necessary during the container build, for example to install packages. One way to mitigate this remaining problem is user-namespace remapping. Unfortunately, this is not (yet) available in Kubernetes, which is why existing security policies / admission rules may have to be relaxed for Kaniko. Whenever possible, Kaniko pods should therefore run on separate Kubernetes nodes that they do not share with other workloads.
  • As described above, the “debug” images make it possible to build multiple images in the same Kaniko container. Nevertheless, a fresh container for every image remains best practice, as it provides better isolation between the individual build processes.
  • The absence of a shell in the image requires some rethinking. For example, if a pipeline is to build a dynamic number of images (e.g. for Automated Base Image Maintenance), an image list must first be created outside the build container in a preparation step. Starting from this list, the pipeline must then launch one Kaniko container per image. In GitLab, for example, this can be realized with a parallel job matrix or – if the set of image tags is only known at pipeline runtime – with dynamic child pipelines. Besides the better isolation, this approach has another advantage: all build processes run in parallel, which can make the pipeline considerably faster.
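Such a preparation step could, assuming a hypothetical layout with one Dockerfile per subdirectory, be sketched as:

```shell
# Hypothetical repository layout: every subdirectory of images/ holds one
# Dockerfile, and the directory name doubles as the image name.
mkdir -p images/frontend images/backend
touch images/frontend/Dockerfile images/backend/Dockerfile

# The preparation job derives the image list from the directory names;
# a later pipeline stage starts one Kaniko container per listed image.
ls -d images/*/ | xargs -n1 basename > image-list.txt
cat image-list.txt
```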

Do you need support or advice implementing IT automation or container platforms? We are here to help:


Pascal Fries

As an IT consultant for cloud native technologies, Pascal Fries advises our customers on Infrastructure as Code and Continuous Deployment, especially in the container environment.
