Docker Swarm
Swarm mode lets us pool multiple hosts together into a swarm. A manager node distributes the load across the worker nodes.
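As a quick check (not part of the session below), any node can report its own swarm state with standard `docker info` template fields:

```shell
# Report whether this engine is part of a swarm and whether it is a manager
docker info --format 'State: {{.Swarm.LocalNodeState}}  Manager: {{.Swarm.ControlAvailable}}'
```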
Enable swarm mode on the manager node:
root@pzolo1c:~# docker swarm init --advertise-addr 172.31.25.177
Swarm initialized: current node (i5pzauemeje1pfcdznqr3vroe) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-56p1ly19vbhpejakwnb4p3ooom2cfeen4s1jb8w84tu5fhnen4-7iq8usdv0fkw687k8vmk48f7f 172.31.25.177:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
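If the join command above scrolls away or the token is lost, the manager can re-print it at any time:

```shell
# Re-print the worker join command with a valid token
docker swarm join-token worker
# The same for adding another manager
docker swarm join-token manager
```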
On the workers:
root@pzolo3c:~# docker swarm join --token SWMTKN-1-56p1ly19vbhpejakwnb4p3ooom2cfeen4s1jb8w84tu5fhnen4-7iq8usdv0fkw687k8vmk48f7f 172.31.25.177:2377
This node joined a swarm as a worker.
List swarm status on master:
root@pzolo1c:~# docker node ls
ID                            HOSTNAME                  STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
i5pzauemeje1pfcdznqr3vroe *   pzolo1c.mylabserver.com   Ready    Active         Leader           19.03.11
rudihv22bbfwnvbo6gmfg6c0d     pzolo2c.mylabserver.com   Ready    Active                          19.03.11
xp2n4759amxb1eq91lpw2bc7s     pzolo3c.mylabserver.com   Ready    Active                          19.03.11
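The AVAILABILITY column can be changed per node. As a sketch of a maintenance scenario (using the hostnames above), a worker can be drained so the manager moves its tasks elsewhere:

```shell
# Stop scheduling tasks on pzolo3c and migrate its running tasks to other nodes
docker node update --availability drain pzolo3c.mylabserver.com
# Put it back into rotation afterwards
docker node update --availability active pzolo3c.mylabserver.com
```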
Let's create a service with 2 replicas of the nginx container.
root@pzolo1c:~# docker service create --replicas 2 -p 80:80 --name myweb nginx
kzqy1s69xlfc3nl2xd60z4hbx
overall progress: 2 out of 2 tasks
1/2: running [==================================================>]
2/2: running [==================================================>]
verify: Service converged
The manager node schedules tasks across the nodes to satisfy the requested number of replicas.
root@pzolo1c:~# docker service ls
ID             NAME    MODE         REPLICAS   IMAGE          PORTS
kzqy1s69xlfc   myweb   replicated   2/2        nginx:latest   *:80->80/tcp
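The replica count is not fixed at creation time; the service can be scaled later and the manager reconciles the running task count:

```shell
# Raise the desired replica count to 3; the manager schedules one more task
docker service scale myweb=3
# Equivalent long form
docker service update --replicas 3 myweb
```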
On the worker:
root@pzolo2c:~# docker container ls
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS    NAMES
ad9a9d8aa4c8   nginx:latest   "/docker-entrypoint.…"   3 minutes ago   Up 3 minutes   80/tcp   myweb.2.ggqhv3fa4fdcd2m8hgi3z40en
Details of the service. One instance is running on the manager:
root@pzolo1c:~# docker service ps myweb
ID             NAME      IMAGE          NODE                      DESIRED STATE   CURRENT STATE           ERROR   PORTS
igq2zwielyr8   myweb.1   nginx:latest   pzolo1c.mylabserver.com   Running         Running 5 minutes ago
ggqhv3fa4fdc   myweb.2   nginx:latest   pzolo2c.mylabserver.com   Running         Running 5 minutes ago
Even though the service only has tasks on 2 nodes, the routing mesh makes it accessible from ANY node in the swarm:
root@pzolo3c:~# curl localhost:80 -v -so /dev/null
* Rebuilt URL to: localhost:80/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.19.0
< Date: Sun, 21 Jun 2020 21:04:27 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Tue, 26 May 2020 15:00:20 GMT
< Connection: keep-alive
< ETag: "5ecd2f04-264"
< Accept-Ranges: bytes
<
{ [612 bytes data]
* Curl_http_done: called premature == 0
* Connection #0 to host localhost left intact
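This works because of the ingress routing mesh: the published port is held open on every swarm node, and incoming traffic is forwarded to a node that actually runs a task. How port 80 is published can be confirmed on the manager:

```shell
# Show how the service's port is published on the ingress network
docker service inspect myweb --format '{{json .Endpoint.Ports}}'
```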
If one of the workers fails, the manager reschedules its containers onto another node in order to maintain the required number of replicas. For example, after stopping the docker service on worker2:
root@pzolo1c:~# docker service ps myweb
ID             NAME          IMAGE          NODE                      DESIRED STATE   CURRENT STATE            ERROR   PORTS
igq2zwielyr8   myweb.1       nginx:latest   pzolo1c.mylabserver.com   Running         Running 20 minutes ago
isk181g1xc6z   myweb.2       nginx:latest   pzolo3c.mylabserver.com   Running         Running 2 minutes ago
ggqhv3fa4fdc    \_ myweb.2   nginx:latest   pzolo2c.mylabserver.com   Shutdown        Running 20 minutes ago
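Note that the old task on pzolo2c remains in the history as Shutdown, and tasks do not move back automatically when the node recovers. A sketch of how to rebalance or clean up afterwards:

```shell
# Force a redeploy, which respreads tasks across available nodes (restarts the tasks)
docker service update --force myweb
# Remove the service entirely when finished
docker service rm myweb
```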