
Docker: Composition of containers

The overview article on Docker (Part 1) dealt with the idea behind container virtualization. In this part, we will briefly explain how to obtain a container image and how to start it.

Starting a container is not that difficult – a container with Ubuntu is created with: docker run ubuntu

Docker first searches the locally stored images and uses a matching one if available. If no image is found under this name, the public images on Docker Hub are searched; if an image with a matching name exists, it is downloaded and started.
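
The individual steps can also be performed manually on the command line, for example:

# List the images that are already stored locally
docker images

# Download the Ubuntu image from Docker Hub without starting it
docker pull ubuntu

# Start a container from the image and open an interactive shell in it
docker run -it ubuntu bash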

In this case, however, the container does not do anything useful yet; in practice, you would take an image on which the finished program is already installed and started automatically with the container. If you want to build your own image containing your own program, it is of course easiest to start with a base image of a Linux distribution. To configure the container, you can pass certain parameters, such as open ports, directly at startup with the “docker run” command, or you can connect to the running container (similar to an SSH connection) and install the required programs there. It is easier, however, to create an image using a Dockerfile: a text file containing instructions that assemble and configure the corresponding image step by step.
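
A minimal Dockerfile might look like the following sketch; here nginx merely stands in for your own program:

# Use an official Ubuntu image as the base
FROM ubuntu
# Install the required program during the image build
RUN apt-get update && apt-get install -y nginx
# Document the port on which the application listens
EXPOSE 80
# Command that is executed when the container starts
CMD ["nginx", "-g", "daemon off;"]

The image is then built from the directory containing the Dockerfile with “docker build -t my-image .” (the name my-image is freely chosen).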

Once you have created your own image, you can upload it to Docker Hub and make it available privately or publicly. Alternatively, you can set up a Docker registry on a server in your own local network as a collection point for container images. The registry is itself a container in the Docker environment and collects images and makes them available. Version numbers (tags) can also be assigned, so that an image can be saved again after changes while retaining the option of reverting to an older version. A separate server with the required images in the local network has the further advantage that transfer rates are likely to be higher than if each image first has to be downloaded from the Internet – and a direct Internet connection may not even be available.
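
As a sketch, such a local registry can be set up with a few commands; the image name my-image and the version 1.0 are only examples:

# Start the registry itself as a container on port 5000
docker run -d -p 5000:5000 --name registry registry:2

# Tag a local image with the registry address and a version number
docker tag my-image localhost:5000/my-image:1.0

# Upload the image to the local registry
docker push localhost:5000/my-image:1.0

# Download it again (from another machine, the registry host's
# name would replace localhost)
docker pull localhost:5000/my-image:1.0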

Most applications require additional configuration parameters even though the container image is ready to use. If the application consists of several containers, these may need to communicate via certain ports, require a link to a database, or should use a folder path outside the container. By including paths outside the container or in other containers, the files are retained even if the corresponding container is deleted or replaced. This quickly leads to long command lines, which the docker-compose extension can simplify (see the comparison below). For docker-compose, you create a text file named docker-compose.yml that contains all the necessary configuration parameters, including the containers themselves, in the correct format. You can then start one or several containers with a simple “docker-compose up” on the command line. This makes it easy to combine the image (Dockerfile) and the configuration (docker-compose.yml) in two text files that are easy to edit and portable. Each container can therefore also be easily recreated or moved from one Docker environment to another.
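
For comparison, starting just the WordPress container from the example below by hand would require a rather long command along these lines:

# Start the WordPress container with a link, port mapping and
# environment variables, all on one command line
docker run -d --name wordpress \
  --link database \
  -p 8000:80 \
  -e WORDPRESS_DB_HOST=database:3306 \
  -e WORDPRESS_DB_PASSWORD=wordpress \
  --restart always \
  wordpress:latest

docker-compose collects exactly these parameters in a single readable file.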

The following explains a docker-compose.yml file for a WordPress blog consisting of a MySQL database in one container and WordPress in another. The example can also be found in the official docker-compose documentation. docker-compose.yml:

version: '2'
services:
  database:
    image: mysql:5.7
    volumes:
      - "./.data/db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - database
    image: wordpress:latest
    links:
      - database
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: database:3306
      WORDPRESS_DB_PASSWORD: wordpress

Here, “version: '2'” stands for version 2 of the docker-compose.yml syntax (and is not strictly necessary), and both containers are defined under “services:”. The image of MySQL version 5.7 is to be used for the database container, while the currently highest available version of WordPress is used as the image for the wordpress container. Both containers are also configured to restart automatically if, for example, the Docker environment is restarted. The four MySQL environment variables listed are also defined for the database container (here all simply with the value “wordpress”). Under “volumes”, the folder “./.data/db” on the machine running Docker is linked to the path “/var/lib/mysql” in the container. This means that data stored in the container under this path (the database data) is retained even if the container is deleted and can therefore also be transferred to a new container.

The wordpress container is made dependent on the database container by “depends_on”, i.e. the WordPress container is only started once the container with the MySQL database is already running. The container is also linked to the database container, and the corresponding environment variables are defined so that WordPress can access the database. In addition, the public port 8000 is mapped to the internal port 80 of the WordPress application, making the WordPress blog available on port 8000 of the machine running Docker. With just this docker-compose file, the two required containers can be started and are available in a very short time after the images are downloaded (usually less than a minute after starting).
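
The most important docker-compose commands for this file are, for example:

# Start both containers in the background
docker-compose up -d

# Show the status of the started containers
docker-compose ps

# Stop and remove the containers again (the data in ./.data/db is retained)
docker-compose down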

This is part 2 of a series of blog posts on the topic of Docker/Rancher.

Part 1 provides an overview of Docker and container environments

Part 2 explains the functions of a Docker registry and docker-compose

Part 3 introduces Docker Swarm with a Docker environment distributed across multiple hosts

Part 4 shows Rancher as an orchestration tool for Docker (and other container environments)

Part 5 contains brief information on Rancher's catalog and rancher-compose functions

Docker Training

This course is intended for students who have little or no experience with Docker. It starts with an introduction to containers to ensure a common level of knowledge. After that, participants will set up GitLab as a containerized application. With this infrastructure, they learn to build images: first entirely by hand, eventually fully automatically. Finally, participants learn about Docker alternatives and find out how to build their images with Buildah or kaniko, for example.
