Docker: Composition of containers
Such a container has no function of its own yet; in practice you take an image on which the finished program is already installed and which is started automatically with the container. If you want to build an image with your own program, however, it is easiest to start from a base image of a Linux distribution. To configure the container, you can pass certain parameters, such as open ports, directly at startup with the “docker run” command, or you can connect to the running container (similar to an SSH connection) to install the required programs. It is simpler, though, to create an image using a Dockerfile: a text file containing the instructions that assemble and configure the image step by step.
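A minimal Dockerfile might look like the following sketch; the base image, the installed package and the copied script (app.py) are only placeholder examples, not part of the WordPress setup discussed later:

FROM debian:jessie
# install the required programs on top of the base image
RUN apt-get update && apt-get install -y python
# copy your own program into the image
COPY app.py /opt/app.py
# port on which the program listens inside the container
EXPOSE 8080
# command that is started automatically with the container
CMD ["python", "/opt/app.py"]

The image is then built from this file with “docker build -t myimage .”, where “myimage” is only an example name.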
Once you have created your own image, you can upload it to the Docker Hub and make it available privately or publicly. Alternatively, you can set up a Docker registry on a server in your own local network as a collection point for container images. The registry is itself a container in the Docker environment; it collects images and makes them available. Version numbers (tags) can be assigned so that an image can be saved again after changes while keeping the option of reverting to an older version. A separate server in the local network holding the required images has the further advantage that transfer rates are likely to be higher than when each image first has to be downloaded from the Internet, and it also works when a direct Internet connection is not available at all.
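As a rough sketch, a local registry can be started and used as follows; the host name “registryhost” and the image name “myimage” are placeholders:

# start the registry itself as a container on port 5000
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# tag an existing image for the local registry and push it there
docker tag myimage:1.0 registryhost:5000/myimage:1.0
docker push registryhost:5000/myimage:1.0
# other machines in the network can then pull the image from the registry
docker pull registryhost:5000/myimage:1.0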
Even with a finished container image, most applications require additional configuration parameters. If an application consists of several containers, these may need to communicate with each other and be connected via certain ports, require a link to a database, or use a folder path outside the container. By mounting paths outside the container (or sharing them with other containers), the files are retained even if the corresponding container is deleted or replaced. The docker-compose tool can be used to shorten the otherwise long command line calls. For docker-compose, you create a text file named docker-compose.yml that contains all the necessary configuration parameters, including the containers themselves, in the correct format. You can then start the container, or several containers at once, with a simple “docker-compose up” on the command line. This makes it easy to combine the image (Dockerfile) and the configuration (docker-compose.yml) in two text files that are easy to edit and portable.
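For comparison, starting a single container with a published port, a mounted folder, a link and an environment variable directly with “docker run” might look roughly like this; the paths, names and the link alias are only examples, and a container named “database” would already have to be running:

docker run -d --name wordpress \
  -p 8000:80 \
  -v /srv/wordpress/html:/var/www/html \
  --link database:mysql \
  -e WORDPRESS_DB_PASSWORD=wordpress \
  wordpress:latest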
This means that each container can easily be recreated, or set up again in a different Docker environment. The following explains a docker-compose.yml file for a WordPress blog consisting of a MySQL database in one container and WordPress in another container. The example can also be found in the official docker-compose documentation. docker-compose.yml:
version: '2'
services:
  database:
    image: mysql:5.7
    volumes:
      - "./.data/db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - database
    image: wordpress:latest
    links:
      - database
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: database:3306
      WORDPRESS_DB_PASSWORD: wordpress
Here “version: '2'” stands for version 2 of the docker-compose.yml syntax (and is not strictly necessary), and both containers are defined under “services:”. The MySQL image in version 5.7 is used for the database container, while the currently highest available version of WordPress (“latest”) is used as the image for the wordpress container.
Both containers are also configured to restart automatically, for example when the Docker environment itself is restarted. For the database container, the four listed MySQL environment variables are defined (here all simply with the value “wordpress”). Under “volumes”, the folder “./.data/db” on the machine running Docker is mapped to the path “/var/lib/mysql” inside the container. Data that the container stores under this path (the database files) is therefore retained even if the container is deleted and can be handed over to a new container. The wordpress container is made dependent on the database container via “depends_on”, i.e. the WordPress container is only started once the container with the MySQL database is already running. It is also linked to that container, and the corresponding environment variables are defined so that it can access the database. In addition, the public port 8000 is mapped to the internal port 80 of the WordPress application, making the WordPress blog reachable on port 8000 of the machine running Docker. With this docker-compose file alone, the two required containers can be started within a few minutes and are available very shortly after the images have been downloaded (usually less than a minute after starting).
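From the directory containing the docker-compose.yml, the stack is typically handled with commands like the following (the -d flag runs the containers in the background):

docker-compose up -d     # create and start both containers in the background
docker-compose ps        # show the status of the containers
docker-compose logs      # show the log output of the containers
docker-compose down      # stop and remove the containers (the ./.data/db folder is retained)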
This is part 2 of a series of blog posts on the topic of Docker/Rancher.
Part 1 provides an overview of Docker and container environments
Part 2 explains the functions of a Docker registry and docker-compose
Part 3 introduces Docker Swarm with a Docker environment distributed across multiple hosts
Part 4 shows Rancher as an orchestration tool for Docker (and other container environments)
Part 5 contains brief information on the Rancher functions of a catalog and rancher-compose