Docker containers – a lightweight alternative to virtualization
Docker containers are still a relatively young technology, yet one that shows astonishing maturity.
Difficulties that can be solved with Docker containers
Classic virtualization makes it easy to run many systems on shared hardware. However, using resources economically becomes difficult when managing many small systems, each of which generates little load on its own. Even though virtualization handles virtual RAM, CPU and mass storage quite cleverly, there is an unavoidable overhead. This quickly leads to individual virtual machines occupying resources without ever being fully utilized.
Companies that operate their own infrastructure can hit the limits of their hardware resources in this case. Companies that instead use IaaS and virtualize in the cloud have to provision ever more virtual machines. In both cases, costs are driven higher and higher without any real need.
Docker takes a different approach here with containers: instead of virtualizing an entire machine, the individual applications run in hermetically sealed environments. All containers running on a server share only the host's kernel. Programs such as databases or web servers run independently of each other in their containers, so programs and processes can be strictly separated from one another on a single system.
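A minimal sketch of this separation, assuming Docker is installed and using stock images from the public registry:

```sh
# Start two isolated containers on the same host
docker run -d --name web nginx
docker run -d --name cache redis

# Containers use the host's kernel directly: this prints
# the same kernel version as `uname -r` on the host itself
docker run --rm alpine uname -r
```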
Docker containers in a virtual machine
Since Docker is not virtualization but rather a partitioning of individual processes, containers can also be used inside a virtual machine. This makes it possible to implement staging with several test stages on just one virtual machine, as sketched below. Development and test environments are usually under no load most of the time, yet such machines still tie up resources in the form of storage, RAM and computing power. Not so with Docker: no redundant system processes run in the containers, and no files are stored redundantly.
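A sketch of such a setup, assuming a hypothetical application image named myapp that serves HTTP on port 80:

```sh
# Three test stages of the same application on a single VM,
# distinguished only by host port and an environment variable
docker run -d --name app-dev  -p 8081:80 -e STAGE=dev  myapp
docker run -d --name app-test -p 8082:80 -e STAGE=test myapp
docker run -d --name app-qa   -p 8083:80 -e STAGE=qa   myapp
```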
How quickly Docker containers are ready for use
Another advantage of Docker technology is that containers are ready for use in seconds. The reason is that Docker maintains a public registry of container images for the most common use cases; for example, there are ready-made images with WordPress. Once an image has been downloaded, containers are rolled out in seconds. Existing images can be adapted to your own needs using so-called Dockerfiles, as the sketch below shows. Creating Dockerfiles is far easier than dealing with RPM spec files, Makefiles or Kickstart files. There are now distributions such as CoreOS that do without package management and use Docker containers instead.
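A minimal Dockerfile sketch that adapts the official WordPress image mentioned above (the local my-theme directory is a hypothetical example):

```dockerfile
# Build on the ready-made WordPress image from the public registry
FROM wordpress:latest

# Overlay a custom theme on top of the stock image
COPY my-theme /var/www/html/wp-content/themes/my-theme
```

Built with `docker build -t my-wordpress .`, the resulting image can then be rolled out like any other, for example with `docker run -d -p 8080:80 my-wordpress`.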
Although the term “image” is used both in classic virtualization and in Docker technology, it means something quite different in each context. A Docker image is essentially the blueprint for a container. The flexibility of a finished image can be increased even further by parameterizing the container: the behavior of its services can be influenced when the container is started. This makes it possible to run multiple container instances of one image in parallel, so that any number of test stages can be provided, or different containers can be connected to one another, within seconds.
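A sketch of such parameterization, using the official postgres image, whose documented POSTGRES_PASSWORD environment variable configures the service at start time:

```sh
# Two independent database instances from one and the same image,
# parameterized individually when the containers are started
docker run -d --name db-stage1 -e POSTGRES_PASSWORD=secret1 -p 5433:5432 postgres
docker run -d --name db-stage2 -e POSTGRES_PASSWORD=secret2 -p 5434:5432 postgres
```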
CoreOS: https://coreos.com/docs/running-coreos/platforms/iso/