The Mystery of Container Images Demystified

  1. Basics
  2. Dive into Image
  3. Prove it

“Any sufficiently advanced technology is indistinguishable from magic.”
– Arthur C. Clarke

Containers have been around much longer than Docker and Kubernetes, but these tools certainly made them better known and far more accessible. How an image for a container is built and how it behaves at runtime have been discussed many times; but what about the state in between? To use and optimize container images effectively in our daily work, we need to understand how they are put together. The following article – garnished with a specially handcrafted image – therefore sheds light on the concept of OCI images.

1. Basics

Basically, containers make the sandboxing capabilities of the (Linux) kernel accessible. Under the hood, they use Linux “control groups”, or cgroups for short. These group processes by container so that resource consumption can be divided up, monitored, and limited. Linux namespaces are used to isolate the containers: they separate processes into their own namespaces so that the processes cannot see or directly interact with each other. A process inside a namespace is not even aware that it is in one, or that other processes may exist outside it.
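Both mechanisms can be inspected directly from a shell on any Linux machine, no container runtime required – a quick look at what the kernel exposes for the current process:

```shell
# Every Linux process carries its namespace memberships as symlinks
# under /proc/<pid>/ns ...
ls -l /proc/self/ns        # pid, net, mnt, uts, ipc, user, ...
# ... and its cgroup membership in /proc/<pid>/cgroup.
cat /proc/self/cgroup
```

A container runtime does nothing more mysterious than creating fresh entries of exactly these kinds for the containerized process.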

Since we IT people want to optimize our time at the coffee machine, manually managing cgroups and namespaces is too time-consuming for us. Instead, we use a runtime environment such as containerd (Docker) or CRI-O (Podman) as an abstraction layer. More information about Docker alternatives can be found in our blog.

Container images bundle all the important information for our containers, such as the file system, user, and start command. To start a container, the image provides the runtime environment with this information; the runtime in turn creates the cgroups and namespaces, making management much easier for us.
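What this bundled information looks like can be sketched as a shortened, purely illustrative excerpt from an image config (the field names follow the Docker image config format; the concrete values here are made up):

```json
{
    "architecture": "amd64",
    "os": "linux",
    "config": {
        "User": "1000",
        "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"],
        "Cmd": ["/bin/sh"]
    }
}
```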

2. Dive into Image

Containers offer us the possibility to run our software in isolation. And an image provides the blueprint with all the necessary information. But how can you look at this info? After all, for most users, the images live behind the `docker images` command. With `docker save -o out.tgz <img>` the desired image can be saved from Docker’s internal storage to the file system.
Obviously, an image is really simply a tarball in which files are stored. Nothing more, nothing less (and certainly no magic).

In this tarball you can find different files with different tasks. The basic information can be found in the `manifest.json` file. It tells us where in the tarball we can find the configuration, where and in what order the layers for our file system are located, and how to layer them. In the case of Docker, you can also find the repository tag here. The config file specified in the manifest contains information such as the user with which the container is to be operated, the environment variables, the start command or for which operating system and processor architecture the image is built.
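Since the image really is just a tarball, the manifest can be read without fully unpacking anything. A minimal sketch – here with a stand-in tarball built on the fly instead of a real `docker save` output, and a made-up tag `demo:latest`:

```shell
# Stand-in for a saved image: a tarball containing a manifest.json.
# With a real image you would start with: docker save -o out.tar <img>
mkdir -p demo
printf '[{"Config":"config.json","RepoTags":["demo:latest"],"Layers":["layer.tar"]}]' > demo/manifest.json
tar -C demo -cf out.tar manifest.json

# List the archive contents, then print the manifest straight to stdout:
tar -tf out.tar
tar -xOf out.tar manifest.json
```

The same `tar -xOf` trick works on a genuine saved image to peek at the config file the manifest points to.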

While all configuration information lives in a single file, the container’s file system is usually split across several layers. These layers are not relevant at runtime; their main purpose is to store identical information only once when several images share a similar build process. This allows multiple images in a registry to “share” the same layer.
Immediately before the container is started, the image layers are written to the new container’s file system in the order in which they are listed in the manifest. If files collide, later layers overwrite the files of earlier layers. For those who want to delve deeper into the file system of an image, the tool dive is recommended.
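The “later layers win” rule can be demonstrated with nothing but tar – a minimal sketch with two hand-made layers and invented file names:

```shell
# Two "layers" as plain tarballs: both contain motd, the second also adds extra.txt.
mkdir -p l1 l2 rootfs
echo "from layer 1" > l1/motd
echo "from layer 2" > l2/motd
echo "only in layer 2" > l2/extra.txt
tar -C l1 -cf layer1.tar .
tar -C l2 -cf layer2.tar .

# Apply them in manifest order: later layers overwrite earlier files.
tar -C rootfs -xf layer1.tar
tar -C rootfs -xf layer2.tar
cat rootfs/motd
```

Real runtimes additionally handle file deletions via special whiteout files (prefixed `.wh.` in the OCI spec), which this sketch ignores.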

3. Prove it

With this knowledge, you should now be able to cobble together a container image without any build system at all.

For this experiment you first need a working folder, `tiny-atix`.

This gets a `config.json` and a `manifest.json` file as well as a `./base/layer` path.


tiny-atix                       <- folder
├── base                        <- folder
│   └── layer                   <- folder
│       └── hello-atix.txt      <- file
├── config.json                 <- file
└── manifest.json               <- file

In our files we write the following:


# manifest.json
[{
    "Config": "config.json",
    "RepoTags": ["atix:tiny"],
    "Layers": ["base/layer.tar"]
}]


# config.json
{
    "architecture": "amd64",
    "os": "linux",
    "rootfs": {
        "type": "layers",
        "diff_ids": ["sha256:5962285ecca4a05d873887b24651922e65d9d32f4902ed0c54ada32c26559e72"]
    }
}


# hello-atix.txt
hello atix

Then we pack the layer and the image:


# pack the layer (uncompressed – the diff_id refers to the uncompressed tarball)
$ tar -C tiny-atix/base/layer -cvf tiny-atix/base/layer.tar .
# pack the image
$ tar -C tiny-atix -cf atix.tar .
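The `diff_ids` value in `config.json` is not arbitrary: it is the SHA-256 digest of the uncompressed layer tarball. A self-contained sketch of how to compute it – note that your digest will differ from the one printed in this article, because tar records timestamps and ownership:

```shell
# Rebuild the layer and hash it; the first output field (with a
# "sha256:" prefix) is what belongs in diff_ids in config.json.
mkdir -p tiny-atix/base/layer
echo "hello atix" > tiny-atix/base/layer/hello-atix.txt
tar -C tiny-atix/base/layer -cf tiny-atix/base/layer.tar .
sha256sum tiny-atix/base/layer.tar
```

If the digest in `config.json` does not match the layer, `docker load` will reject the image.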

The exciting moment is loading the image. Starting this particular image is less exciting, because we have specified neither a start command nor a binary that could be executed.

# load the image
$ docker load -i atix.tar
# inspect the image
$ docker images
REPOSITORY   TAG    IMAGE ID       CREATED   SIZE
atix         tiny   a308f37e9ac7   N/A       10B
# run the image
$ docker run atix:tiny
docker: Error response from daemon: No command specified.

Of course, you can now dig further to understand images and the possibilities they offer even better.
The next step would be a start command and a binary that prints our `hello-atix.txt` file.
The best way to get both into the image is via a second layer. 😉

But that is it for this article.

For those who want to continue the research expedition, the following sources and tools are recommended:
OCI specifications
dive


Lukas Paluch

