Docker Swarm ATIX AG

Docker Swarm: A herd of containers

Welcome to Docker Swarm! Containers, like pack animals, thrive in large numbers – but how they are distributed across servers is crucial. Docker Swarm, which has been integrated into the Docker Engine since version 1.12, is ideal for this. Learn more about manager and worker nodes, service variants, load balancing, and scalability. This is part 3 of our Docker/Rancher series, in which we look at the distributed environment of Docker Swarm.

Containers are pack animals – they usually occur in large numbers. However, since every machine reaches its limits at some point, you will want to distribute the containers across several servers. There are several ways to distribute containers across a cluster; one of them is Docker Swarm.
Docker Swarm was originally made available as a separate extension, but since version 1.12 it has been included directly in the Docker Engine. It not only allows containers to be distributed across multiple servers, but also offers a number of additional features.
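Turning a set of Docker hosts into a swarm takes only a few commands. A minimal sketch – the IP address 192.0.2.10 and the join token are placeholders; `docker swarm init` prints the real join command for your cluster:

```shell
# On the first machine: initialize the swarm; this node becomes a manager.
# --advertise-addr is the address other nodes use to reach this manager
# (192.0.2.10 is a placeholder for your manager's IP).
docker swarm init --advertise-addr 192.0.2.10

# `init` prints a join command including a token; run it on each additional
# machine to add it as a worker node (the token here is a placeholder).
docker swarm join --token SWMTKN-1-xxxx 192.0.2.10:2377

# Back on the manager: list all nodes and their roles.
docker node ls
```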

Docker Swarm distinguishes between two types of servers (nodes): manager nodes and worker nodes. While the worker nodes, as the name suggests, do the work, manager nodes are used to start new containers. The manager nodes distribute the new containers to be started across the worker nodes so that all servers are utilized as evenly as possible. The result is a large Docker environment, distributed across several servers, that can be expanded easily.

In addition, Docker Swarm introduces its own vocabulary for workloads: a task is the unit the manager nodes schedule when starting new containers; a service consists of a container that runs on one of the servers; a replicated service is initially like a normal service, but runs multiple instances of the same container, distributed across the cluster; and a global service consists of a container that should run exactly once on each node in the swarm. For replicated services, a load balancer is usually introduced that distributes requests evenly across the containers in order to spread the load as widely as possible across the cluster. The scaling – that is, the number of instances of a particular container – can also be adjusted after the service has been started.
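The service variants described above map directly to `docker service` commands. A minimal sketch, run on a manager node – the service names and the `nginx` image are illustrative choices, not prescribed by Swarm:

```shell
# Replicated service: run 3 instances of the same container,
# distributed across the nodes by the managers.
docker service create --name web --replicas 3 -p 8080:80 nginx

# Global service: run exactly one instance on every node in the swarm.
docker service create --name per-node-web --mode global nginx

# Adjust the scaling after the service has been started:
# the managers start or stop containers to reach 5 replicas.
docker service scale web=5
```

The published port (`-p 8080:80`) is reachable on every node of the swarm; the built-in load balancer forwards the request to one of the replicas, wherever it happens to run.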
With a large, expandable Docker environment, a large number of containers can be started and therefore many applications can be operated in containers. In addition, scaling and load balancing make it possible to distribute load and traffic across several servers. However, since Docker Swarm, like Docker itself, is administered via the command line, it is difficult to keep an overview of a large cluster with many containers.
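The command-line view of a running swarm looks roughly like this (`web` is a placeholder for a service name created earlier):

```shell
# List all services running in the swarm, with their replica counts.
docker service ls

# Show which node each task (container) of a service runs on.
docker service ps web

# Follow the aggregated logs of all of a service's containers.
docker service logs -f web
```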
This is part 3 of a series of blog posts on the topic of Docker/Rancher.

Part 1 provides an overview of Docker and container environments

Part 2 explains the functions of a Docker Registry and docker-compose

Part 3 introduces Docker Swarm with a Docker environment distributed across multiple hosts

Part 4 shows Rancher as an orchestration tool for Docker (and other container environments)

Part 5 contains brief information on the Rancher functions of a catalog and rancher-compose

Docker Training

This course is intended for students who have little or no experience with Docker. It starts with an introduction to containers to ensure a common level of knowledge. After that, participants will set up GitLab as a containerized application. With this infrastructure, they learn to build images: first entirely by hand, eventually fully automatically. Finally, participants learn about Docker alternatives and find out how to build their images with Buildah or kaniko, for example.


ATIX-Crew

The ATIX crew consists of people working in a wide range of areas: consulting, development/engineering, support, sales, and marketing.
