Docker Alternatives

Docker started a revolution with its easy access to Linux containers: containerized applications, microservices, DevOps, GitOps, and Kubernetes are spreading noticeably. But then, suddenly, the Kubernetes release notes state: “Docker support in [kubernetes] is now deprecated and will be removed in a future release”.

What does this mean for you and your work with containers? With Kubernetes? For starters, the good news is that you don’t have to change much at all. This blog post outlines existing alternatives, categorizes them, and provides short examples for developers. Those interested in more details should take a look at our Docker training.

What are the alternatives to Docker?

A Google search for Docker alternatives quickly yields an extensive list:

  • buildah
  • buildkit
  • by hand: skopeo, ostree, runc
  • containerd
  • cri-o
  • ftl
  • img
  • k3c
  • kaniko
  • (lxc)
  • (OpenVZ)
  • orca-build
  • packer
  • podman
  • pouch
  • (rkt)
  • source-to-image

Which alternative is suitable for what?

Those who have already dealt with the subject know that this is a motley mix. Some of the tools listed no longer reflect the current state of the art, and cri-o, for example, serves a completely different target group than kaniko. Tools that no longer play a role are in parentheses: CoreOS had launched rkt even before Red Hat bought the company, and the project has since been discontinued. Linux Containers (lxc) and OpenVZ also do not fit into this set; both are aimed at operating-system virtualization, as opposed to the Docker-style “application containers”.

Categorization

The Open Container Initiative (OCI) has existed since 2015. This Linux Foundation project provides specifications for images and runtimes: images must contain the files that the application in the container requires at runtime, and the runtime must create containers from them. Previously, Docker served both requirements. Now we distinguish between tools that build images and tools that run these images as containers. The OCI standards ensure that, for example, building images with kaniko and running a Kubernetes cluster with cri-o is a valid combination.
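
As a small illustration of this interoperability, skopeo (from the list above) can inspect and copy OCI images regardless of which tool built them, and it does so without a local Docker daemon. A minimal sketch, assuming skopeo is installed; the registry names are placeholders:

# Inspect the manifest and metadata of a remote image without pulling it
skopeo inspect docker://docker.io/library/alpine:latest

# Copy an image between registries; both repository paths are examples
skopeo copy docker://registry.example.com/my-app:latest docker://mirror.example.com/my-app:latest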

Backend in Kubernetes

From the list, cri-o and containerd are suitable as container runtimes in a Kubernetes cluster, and only as that. Both provide an API that Kubernetes addresses via what is known as the container runtime interface (CRI). They can download images from registries and start and stop containers. Docker can do that and more; that is why there is an intermediate layer for Kubernetes, the docker-shim. It effectively restricts the Docker API to what Kubernetes needs and adapts the API calls. This intermediate layer used to be a core component of Kubernetes. That it will not be in the future is exactly what the opening quote from the Kubernetes release notes announces.
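
Which runtime a given cluster actually uses can be checked quickly with kubectl. A minimal sketch, assuming access to the cluster:

kubectl get nodes -o wide    # the CONTAINER-RUNTIME column shows e.g. containerd://... or cri-o://...

# The same information per node via jsonpath
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'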

cri-o is a completely new implementation of the runtime. It is named after the container runtime interface and the Open Container Initiative (cri + oci = cri-o). It equips containers with fewer of the so-called capabilities than Docker does; capabilities are permissions that the kernel assigns to processes, such as the right to change file owners. Those who have relied on the more generous set of capabilities that Docker grants might run into problems here.
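
If a workload really needs one of the capabilities a stricter runtime no longer grants by default, the cleanest fix is to request it explicitly in the pod spec instead of switching runtimes. A minimal sketch; the pod name, the image, and the choice of NET_RAW are placeholders for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: capability-demo                            # placeholder name
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:latest    # placeholder image
      securityContext:
        capabilities:
          add: ["NET_RAW"]                         # explicitly request a capability the runtime may not grant by default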

In this case, containerd might be a good choice. It is more or less the core of Docker, somewhat stripped down. The most direct path from Docker leads here: containerd uses the same libraries as Docker, and you can assume that nothing changes during operation.
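
For debugging directly on a node, crictl talks to any CRI-compatible runtime, whether containerd or cri-o, and covers the everyday docker commands. A minimal sketch, assuming crictl is installed on the node:

crictl info                   # runtime status, roughly comparable to docker info
crictl images                 # list images known to the runtime
crictl ps                     # list running containers
crictl logs <container-id>    # show the logs of one container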

In Build-Pipelines

The big innovation is that different tools are used for building and for running containers. The OCI specification ensures that this works. But how smoothly does it happen? At this point, once again: Docker will continue to exist and will continue to build container images perfectly well. It also keeps most of its quirks: it is a central daemon that is allowed to do quite a lot. In our training, we explain, among other things, how to set up a Docker-in-Docker “rootless” service. This allows developers to build container images in isolation from the infrastructure and from fellow users.
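
For comparison, here is a minimal sketch of a classic Docker-in-Docker job in GitLab CI; the image tags are only examples, and a rootless setup as discussed in the training needs additional configuration not shown here:

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest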

My container projects are all in the form of Dockerfiles. Approaches that cannot work with Dockerfiles are ones I have so far followed only in theory. Please note: there is nothing wrong with taking an approach that does not use Dockerfiles; I simply prefer a way that requires no migration. In addition, most projects, including those of our customers, are in this form, which is, in a sense, a standard.

GitLab pipeline with different executors

The Dockerfile and all other necessary data are located in a Git repository. At ATIX, we use GitLab, which comes with a CI/CD pipeline. If you add a .gitlab-ci.yml to the repository, the image build and push start automatically after each git push. This additional file contains the instructions on how to create an image from the repository. Participants in our training set this up themselves and work with it to get to know the described approaches in detail. Of course, the following can be applied to other systems. I would like to present two approaches here with examples: kaniko and buildah.

Cloud-native: kaniko

A modern approach is provided by kaniko, which fits naturally into pipelines that also run in a Kubernetes cluster. The following .gitlab-ci.yml builds an image from the Dockerfile and uploads it to GitLab’s built-in registry.

---
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - >
      echo "{\"auths\":{\"$CI_REGISTRY\":{
      \"username\":\"$CI_REGISTRY_USER\",
      \"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - >
      /kaniko/executor
      --context $CI_PROJECT_DIR
      --dockerfile $CI_PROJECT_DIR/Dockerfile
      --destination $CI_REGISTRY_IMAGE:latest
    - >
      /kaniko/executor
      --context $CI_PROJECT_DIR
      --dockerfile $CI_PROJECT_DIR/Dockerfile
      --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

All the magic takes place in the script section. First, the job stores access data for the internal registry; then it builds and pushes an image with the tag latest, followed by another one with the tag $CI_COMMIT_REF_SLUG. The latter is an automatically set variable that contains the branch name or tag in a URL-compliant form. This pipeline works with the Docker executor and requires no further configuration.

Straightforward: buildah

The second approach will feel familiar to anyone who knows Docker. Again, a minimal example that could live in the same repository as the kaniko one:

build:
  stage: build
  image: my.registry:5000/image/mit/buildah:1.2
  tags:
    - docker
  before_script:
    - buildah login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - buildah bud -t $CI_REGISTRY_IMAGE:latest .
    - buildah push $CI_REGISTRY_IMAGE:latest
    - buildah push $CI_REGISTRY_IMAGE:latest $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

The first line in the script section uses the bud subcommand, short for build-using-dockerfile. The name implies that there is an alternative way.
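
That alternative is buildah’s native workflow, which builds an image imperatively, without a Dockerfile. A minimal sketch; base image, package, and image name are only examples:

ctr=$(buildah from docker.io/library/alpine:3.19)    # working container from a base image
buildah run "$ctr" -- apk add --no-cache curl        # corresponds to a RUN instruction
buildah config --entrypoint '["curl"]' "$ctr"        # corresponds to ENTRYPOINT
buildah commit "$ctr" my-curl-image                  # write the finished image
buildah rm "$ctr"                                    # remove the working container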

Just like Docker, buildah starts a process in its own namespace, in which the RUN commands from the Dockerfile execute. For it to do this, it needs the appropriate privileges. With the Docker executor, buildah requires the job containers to run privileged and the commands to run as root.
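
In practice, that means an entry like the following in the GitLab Runner configuration. A minimal sketch of the relevant part of config.toml; the runner name and URL are placeholders:

[[runners]]
  name = "buildah-capable-runner"         # placeholder name
  url = "https://gitlab.example.com/"     # placeholder GitLab instance
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true                     # allow privileged job containers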

Conclusion

On my laptop, I mostly continue to rely on Docker, though I switch to buildah more and more often. Windows and Mac users can simply stick with Docker. Some of the tools mentioned also offer binaries or packages for non-Linux systems, but switching is not mandatory.

Which runtime does the work in the Kubernetes backend is then unimportant, as long as you have adhered to the OCI specifications. Most distributors have made the switch already anyway: OpenShift relies on cri-o, while Rancher and SUSE have opted for containerd but will also continue to maintain the docker-shim.

I would definitely switch pipelines. My personal favorite is kaniko. The only downside is that you have to use the debug image, which is not the standard version, at least on GitLab. And maybe that will change.

Links

  • Release notes of Kubernetes 1.20 with the notice to drop Docker as backend
  • Docker Images Without Docker — A Practical Guide
  • You Don’t Have to Use Docker Anymore
  • The Many Ways to Build an OCI Image without Docker
  • LXE announcement
  • LXE on GitHub
  • BuildKit on GitHub
  • img on GitHub
  • orca-build on GitHub
  • umoci on GitHub (archived)
  • buildah on GitHub
  • ftl on GitHub
  • rules_docker on GitHub
  • kaniko on GitHub

Docker Training

This course is intended for students who have little or no experience with Docker. It starts with an introduction to containers to ensure a common level of knowledge. After that, participants will set up GitLab as a containerized application. With this infrastructure, they learn to build images: first entirely by hand, eventually fully automatically. Finally, participants learn about Docker alternatives and build their images with Buildah or kaniko, for example.
