Docker: Storing apps in containers
One possibility is container virtualization. There are various ways to implement it; one of the most widely used is Docker. The basic idea behind containers is the separation of an application into individual services: each container takes on exactly one task or function and is otherwise a self-contained system. An entire application therefore usually consists of several containers. For example, one container runs the database behind a web application (e.g. a blog), while the web application itself is housed in another container. These containers can be networked and communicate with each other, but remain separate systems. If the container with the web application crashes or stops working for some other reason, the database remains available and can continue to serve other containers running the web application.
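A minimal sketch of such a setup with the Docker command line could look like the following. The image names, credentials and port numbers are purely illustrative assumptions, not taken from a specific project:

# Create a network over which the two containers can talk to each other
docker network create blog-net

# Database in its own container (credentials are placeholders)
docker run -d --name blog-db --network blog-net \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=blog mariadb

# Web application (a blog) in a second container, reachable on port 8080 of the host
docker run -d --name blog-app --network blog-net \
  -e WORDPRESS_DB_HOST=blog-db -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=secret -e WORDPRESS_DB_NAME=blog \
  -p 8080:80 wordpress

If the blog-app container fails, blog-db keeps running; a new web application container can simply be started and attached to the same network.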
Each container contains a minimal version of a Linux operating system (more precisely, its userland; the kernel is shared with the host) plus all the libraries needed to perform its particular function. Every container therefore represents its own small sandbox system. Since only one function is required, unnecessary programs and libraries can be left out in order to keep the images of these containers as small as possible. A container image therefore usually requires significantly less storage space than a normal operating system installation with the corresponding programs. Thanks to this minimalism, images can be downloaded or distributed over the network very quickly and then launched.
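To illustrate how small such an image can be, here is a sketch of a single-purpose web server built on Alpine Linux. The Dockerfile and the image name tiny-web are assumed examples, not part of an existing project:

# Write a minimal Dockerfile: a small base image plus only the one service this container needs
cat > Dockerfile <<'EOF'
FROM alpine:3.18
RUN apk add --no-cache nginx && mkdir -p /run/nginx
CMD ["nginx", "-g", "daemon off;"]
EOF

# Build the image and check its size; the result is typically only a few tens of megabytes
docker build -t tiny-web .
docker images tiny-web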
Virtualization as such is nothing completely new: virtual machines (VMs) have been around for a long time. In contrast to containers, a VM emulates a complete hardware system with specific specifications. Such VMs are often used to divide large physical machines into several virtual systems, which can then be used as servers, for example. Containers are significantly more economical in terms of resource requirements, as the overhead of emulating hardware is eliminated. Another difference is the handling of RAM: while a VM reserves and blocks the memory allocated to it even when it is not in use, a container only uses the amount of memory it currently needs and thus shares the available memory with other programs and other containers. As a result, many more containers can run on one server at the same time, and they are started and ready for use within a very short time, typically seconds.

However, containers will not simply replace the typical VMs; they complement them very well. If several VMs are set up as servers, a Docker environment on top of them can run the desired applications distributed across several containers. Since a container contains everything needed to run it, it is independent of the platform it is operated on. Docker is based on Linux, but the corresponding environments can now also be installed on Windows and Mac systems. In addition, some cloud providers (such as Digital Ocean, Amazon EC2 and Azure) offer the option of using Docker environments in the cloud.
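A quick way to see this in practice (the container name demo-web is just an example) is to start a container and look at its current resource consumption:

# Start a container; it is usually up and running within a few seconds
docker run -d --name demo-web nginx:alpine

# Show running containers and how much memory and CPU they actually use right now
docker ps
docker stats --no-stream demo-web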
This is part 1 in a series of blog posts about Docker/Rancher.
Part 1 provides an overview of Docker and container environments
Part 2 explains the functions of a Docker registry and docker-compose
Part 3 introduces Docker Swarm with a Docker environment distributed across multiple hosts
Part 4 shows Rancher as an orchestration tool for Docker (and other container environments)
Part 5 contains brief information on the Rancher functions of a catalog and rancher-compose