# Docker Swarm Cluster
# What Is It
- A cluster management tool for Docker hosts
- Provided officially by Docker
- Built in since Docker 1.12
- Manages all hosts as one cluster and schedules the cluster's resources centrally
- Lighter-weight than Kubernetes
- Supports scaling (growing or shrinking the number of replicas)
- Supports rolling updates and version rollback
- Supports service discovery
- Supports load balancing
- Supports the routing mesh (service governance)
# Docker Swarm Architecture
A node is a single Docker host.
- A manager node manages the nodes in the cluster and assigns tasks to worker nodes.
- A worker node receives tasks from manager nodes and executes them.
A service runs on worker nodes and is made up of multiple tasks.
A task is a container running on a worker node (together with the application inside it); it is the smallest unit the cluster schedules and manages.
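As a quick illustration of the two roles, an existing worker can be promoted to a manager and demoted again from any current manager node (node2 here is just a placeholder hostname):

```
# turn a worker into an additional manager
docker node promote node2
# turn it back into a plain worker
docker node demote node2
```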
# How to Do It
# Practice: Deploying a Docker Swarm Environment
# Cluster Initialization
```
docker swarm init --listen-addr 192.168.122.100:2377
Swarm initialized: current node (5agiwlur6fziotng0y7art7ti) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3p19n7axxygdhbg4debkqc5395pdqevll9x6zxm5tqrnrxmmi4-as8fhsxxuiqpr2bzxxuc88m9p 192.168.122.100:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
Multiple manager nodes are supported.
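If the join command from the init output is no longer at hand, it can be printed again on any manager at any time; the token differs for workers and managers:

```
# print the join command for new worker nodes
docker swarm join-token worker
# print the join command for additional manager nodes
docker swarm join-token manager
```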
# Adding Worker Nodes
```
docker swarm join --token SWMTKN-1-3p19n7axxygdhbg4debkqc5395pdqevll9x6zxm5tqrnrxmmi4-as8fhsxxuiqpr2bzxxuc88m9p 192.168.122.100:2377
```
Check the cluster nodes:
```
docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
5agiwlur6fziotng0y7art7ti *   node1      Ready    Active         Leader           18.09.5
kodqq1taqnn2jyz9zl2vjus1j     node2      Ready    Active                          18.09.5
x21s1j3hx78nztrrs4v4iuf28     node3      Ready    Active                          18.09.5
```
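The AVAILABILITY column can be changed per node, for example to empty a node before maintenance; a minimal sketch using node2 from the listing above:

```
# stop scheduling tasks on node2 and move its existing tasks elsewhere
docker node update --availability drain node2
# bring it back into the scheduling pool
docker node update --availability active node2
```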
# Publishing a Service: docker service and docker stack
- On a manager node, create the service:
```
docker service create --replicas 2 --publish 8090:80 --name nginxsvc 192.168.122.100/library/centos-nginx:v1
qu78yi4an2i640kyyhapma8yo
overall progress: 2 out of 2 tasks
1/2: running
2/2: running
verify: Service converged
```
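Because of the routing mesh, the published port 8090 is reachable on every node in the swarm, not only on the nodes that actually run a replica. Assuming curl is installed on the host, this can be checked against any node's address:

```
# any swarm node's address works; the routing mesh forwards the request to a replica
curl http://192.168.122.100:8090
```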
- Verification:
```
docker service ls
ID             NAME       MODE         REPLICAS   IMAGE                                      PORTS
qu78yi4an2i6   nginxsvc   replicated   2/2        192.168.122.100/library/centos-nginx:v1   *:8090->80/tcp

docker service ps nginxsvc
ID             NAME         IMAGE                                      NODE    DESIRED STATE   CURRENT STATE           ERROR   PORTS
k7cmbni9gezt   nginxsvc.1   192.168.122.100/library/centos-nginx:v1   node2   Running         Running 2 minutes ago
p25jcebm3tq1   nginxsvc.2   192.168.122.100/library/centos-nginx:v1   node1   Running         Running 3 minutes ago
```
- Scaling up or down:
```
docker service scale nginxsvc=3
nginxsvc scaled to 3
overall progress: 3 out of 3 tasks
1/3: running
2/3: running
3/3: running
verify: Service converged
```
or
```
docker service scale nginxsvc=1
nginxsvc scaled to 1
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged

docker service ps nginxsvc
ID             NAME         IMAGE                                      NODE    DESIRED STATE   CURRENT STATE           ERROR   PORTS
k7cmbni9gezt   nginxsvc.1   192.168.122.100/library/centos-nginx:v1   node2   Running         Running 9 minutes ago
```
- Rolling update and version rollback:
```
docker service update --image 192.168.122.100/library/centos-nginx:v2 nginxsvc
nginxsvc
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged
```
Staged update (one task at a time, with a 10-second delay between batches):
```
docker service update --replicas 3 --image 192.168.122.100/library/centos-nginx:v2 --update-parallelism 1 --update-delay 10s nginxsvc
```
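The rollback half of "rolling update and rollback" is not demonstrated above; if the new image misbehaves, the service can be returned to its previous specification with Docker's built-in rollback, using the nginxsvc service from this example:

```
# roll nginxsvc back to the spec it had before the last update
docker service rollback nginxsvc
# equivalent older form: docker service update --rollback nginxsvc
```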
- Deleting the service:
```
docker service rm nginxsvc
```
Check the services again:
```
docker service ls
ID   NAME   MODE   REPLICAS   IMAGE   PORTS
```
# Storage Volumes
# Local Storage Volumes
```
docker service create --replicas 1 --mount "type=bind,source=$PWD,target=/abc" --publish 8081:80 --name cnginxsvc 192.168.122.100/library/centos-nginx:v1
eezhdem1fwli0wa58e4vmwoak
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged
```
Verification (note that a bind mount exposes each host's own directory, so replicas on different nodes see different contents):
```
[root@node1 ~]# docker service create --replicas 1 --mount "type=bind,source=$PWD,target=/abc" --publish 8081:80 --name cnginxsvc 192.168.122.100/library/centos-nginx:v1
eezhdem1fwli0wa58e4vmwoak
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged
[root@node1 ~]# docker service ps cnginxsvc
ID             NAME          IMAGE                                      NODE    DESIRED STATE   CURRENT STATE            ERROR   PORTS
ctk7nswxmzqt   cnginxsvc.1   192.168.122.100/library/centos-nginx:v1   node1   Running         Running 47 seconds ago
[root@node1 ~]# docker exec cnginxsvc.1.ctk7nswxmzqtuwtl0twstwf9y ls /abc
anaconda-ks.cfg
centos.tar
docker-compose.yaml
harbor
harbor-offline-installer-v1.7.5.tgz
nginx.tar
nginxtest
portainer.tar
visualizer.tar

[root@node1 ~]# docker service scale cnginxsvc=3
cnginxsvc scaled to 3
overall progress: 3 out of 3 tasks
1/3: running
2/3: running
3/3: running
verify: Service converged
[root@node1 ~]# docker service ps cnginxsvc
ID             NAME          IMAGE                                      NODE    DESIRED STATE   CURRENT STATE                ERROR   PORTS
ctk7nswxmzqt   cnginxsvc.1   192.168.122.100/library/centos-nginx:v1   node1   Running         Running 5 minutes ago
9dfxbyl8mieb   cnginxsvc.2   192.168.122.100/library/centos-nginx:v1   node2   Running         Running about a minute ago
y0xf1xefiurj   cnginxsvc.3   192.168.122.100/library/centos-nginx:v1   node3   Running         Running about a minute ago

[root@node2 ~]# docker ps
CONTAINER ID   IMAGE                                      COMMAND                  CREATED         STATUS              PORTS    NAMES
bf3334199443   192.168.122.100/library/centos-nginx:v1   "/bin/sh -c /usr/sbi…"   2 minutes ago   Up About a minute   80/tcp   cnginxsvc.2.9dfxbyl8mieb1cr5bcfezby5h
[root@node2 ~]# docker exec bf333 ls /abc
anaconda-ks.cfg
node2.txt

[root@node3 ~]# touch node3.txt
[root@node3 ~]# docker ps
CONTAINER ID   IMAGE                                      COMMAND                  CREATED         STATUS         PORTS    NAMES
eef61182c176   192.168.122.100/library/centos-nginx:v1   "/bin/sh -c /usr/sbi…"   2 minutes ago   Up 2 minutes   80/tcp   cnginxsvc.3.y0xf1xefiurjj6hlxkgmdaj8q
[root@node3 ~]# docker exec eef6 ls /abc
anaconda-ks.cfg
node3.txt
```
# Network Storage Volumes
- Share data between containers across Docker hosts
- Persist data on a network storage server
Step 1: Set up the NFS service (the NFS server runs on node3):
```
# install on all nodes
[root@nodeX ~]# yum -y install nfs-utils rpcbind
# create the shared directory
[root@node3 ~]# mkdir /opt/dockervolume
# export the share
[root@node3 ~]# cat /etc/exports
/opt/dockervolume *(rw,sync,no_root_squash)
# apply the export
[root@node3 ~]# exportfs -rv
exporting *:/opt/dockervolume
[root@node3 ~]# systemctl enable rpcbind nfs-server
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@node3 ~]# systemctl start rpcbind nfs-server
[root@node3 ~]# systemctl status rpcbind nfs-server
# verify that the directory is exported
[root@node3 ~]# cat /var/lib/nfs/etab
/opt/dockervolume *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
```
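Before handing the export over to Docker, it is worth confirming from a client node that it is actually reachable; showmount ships with the nfs-utils package installed above (192.168.122.120 is the NFS server address used in the volume definitions below):

```
# list the exports offered by the NFS server; /opt/dockervolume should appear
showmount -e 192.168.122.120
```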
Step 2: Create the volume in the Docker Swarm cluster:
```
# list existing volumes
[root@node1 ~]# docker volume ls
DRIVER    VOLUME NAME
local     1e43364d8093004c63da5db7620b0173951eb8fc6c9350fd1ae3eec975b89203
local     6d5ab4ce239c8662ef91d920697045087d1725d3292f5af23f5ff649d715333d
local     49e2b7a92e65863eea49f79cfb27821e97032abab87a8665c9e26810be9057fa
local     90b0437f701a74fa5296deac14e31cf560cc235f98b4ec2b585dcf29e4bc46ba
local     586ffd04cf12bfa1774a284247fb8b0347413a5a83c0597595629d0388c8290d
local     633c6fb7a889e30c3791adf9e0db821ed7bd1aca4464d42d7055bb7ef587a758
local     665c7d39ceb6d5b600f1ab6e49a9035def5435d0fd46edbb001dded6686ee2d3
local     a06e038abbb7fc766852f138fabfa4aafe8f0bc193bd68d4a96c3541ae755622
local     dcf83c973c4574dfec91c611e0eaae59181aa52304ac56a8f90147ff96b5c5ba
local     e7802d2eb9f16ce8aa85b0d8c2fc265c4414719ef2f5a0fe90509cd797b8474f
[root@node1 ~]# docker volume create --driver local --opt type=nfs --opt o=addr=192.168.122.120,rw --opt device=:/opt/dockervolume smartgovolume1
smartgovolume1
[root@node1 ~]# docker volume ls
DRIVER    VOLUME NAME
local     1e43364d8093004c63da5db7620b0173951eb8fc6c9350fd1ae3eec975b89203
local     6d5ab4ce239c8662ef91d920697045087d1725d3292f5af23f5ff649d715333d
local     49e2b7a92e65863eea49f79cfb27821e97032abab87a8665c9e26810be9057fa
local     90b0437f701a74fa5296deac14e31cf560cc235f98b4ec2b585dcf29e4bc46ba
local     586ffd04cf12bfa1774a284247fb8b0347413a5a83c0597595629d0388c8290d
local     633c6fb7a889e30c3791adf9e0db821ed7bd1aca4464d42d7055bb7ef587a758
local     665c7d39ceb6d5b600f1ab6e49a9035def5435d0fd46edbb001dded6686ee2d3
local     a06e038abbb7fc766852f138fabfa4aafe8f0bc193bd68d4a96c3541ae755622
local     dcf83c973c4574dfec91c611e0eaae59181aa52304ac56a8f90147ff96b5c5ba
local     e7802d2eb9f16ce8aa85b0d8c2fc265c4414719ef2f5a0fe90509cd797b8474f
local     smartgovolume1
[root@node1 ~]# docker volume inspect smartgovolume1
[
    {
        "CreatedAt": "2019-04-30T10:36:21+08:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/smartgovolume1/_data",
        "Name": "smartgovolume1",
        "Options": {
            "device": ":/opt/dockervolume",
            "o": "addr=192.168.122.120,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
[root@node2 ~]# docker volume create --driver local --opt type=nfs --opt o=addr=192.168.122.120,rw --opt device=:/opt/dockervolume smartgovolume1
smartgovolume1
[root@node3 ~]# docker volume create --driver local --opt type=nfs --opt o=addr=192.168.122.120,rw --opt device=:/opt/dockervolume smartgovolume1
smartgovolume1
# when publishing a service manually, the volume has to be created on every host in the cluster
```
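As an alternative to pre-creating the volume on each host, the NFS options can be passed directly in the --mount flag of docker service create, so each node creates the volume on demand when a task is scheduled onto it. A minimal sketch; the service name nginx-nfs-auto, the published port 82 and the volume name smartgovolume2 are illustrative, while the NFS address and export are the ones configured above:

```
docker service create --replicas 3 --publish 82:80 --name nginx-nfs-auto \
  --mount 'type=volume,source=smartgovolume2,target=/usr/share/nginx/html,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/opt/dockervolume,"volume-opt=o=addr=192.168.122.120,rw"' \
  192.168.122.100/library/centos-nginx:v1
```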
Step 3: Use the volume when publishing a service with docker service:
- When starting the service manually with a volume, the prerequisite is that the NFS-backed volume already exists locally; with multiple replicas, the volume must exist on every Docker host that runs a container.
```
[root@node1 ~]# docker service create --replicas 3 --publish 81:80 --mount "type=volume,source=smartgovolume1,target=/usr/share/nginx/html" --name cnginxsvc-volume-test 192.168.122.100/library/centos-nginx:v1
nczb5rj8uggnz0z9jvteuecbm
overall progress: 3 out of 3 tasks
1/3: running
2/3: running
3/3: running
verify: Service converged

[root@node3 ~]# echo "volume test" >> /opt/dockervolume/index.html
[root@node3 ~]# ls /opt/dockervolume/
404.html  50x.html  index.html  nginx-logo.png  poweredby.png

[root@node2 ~]# docker ps
CONTAINER ID   IMAGE                                      COMMAND                  CREATED          STATUS          PORTS    NAMES
c4c05052354d   192.168.122.100/library/centos-nginx:v1   "/bin/sh -c /usr/sbi…"   33 seconds ago   Up 32 seconds   80/tcp   cnginxsvc-volume-test.3.ohbwdb4hfcnjiogyl5c2et3dw
[root@node2 ~]# docker exec c4c0 cat /usr/share/nginx/html/index.html
nginx is running!!! v1
volume test

[root@node3 ~]# docker ps
CONTAINER ID   IMAGE                                      COMMAND                  CREATED         STATUS         PORTS    NAMES
27a0eb22ead8   192.168.122.100/library/centos-nginx:v1   "/bin/sh -c /usr/sbi…"   3 minutes ago   Up 3 minutes   80/tcp   cnginxsvc-volume-test.1.jqt4gl5xwtcy4uxtf0rrv9nm9
[root@node3 ~]# docker exec 27a0 cat /usr/share/nginx/html/index.html
nginx is running!!! v1
volume test
```
Modify the directory contents on the NFS server, then verify again:
```
[root@node3 ~]# echo "AAAA" > /opt/dockervolume/index.html
```
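The re-check itself is not shown in the original notes; one simple way, assuming curl is available on the host, is to request the published port 81 from any node, since every replica mounts the same NFS export:

```
# the routing mesh exposes port 81 on every swarm node; the response should now be "AAAA"
curl http://192.168.122.100:81
```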
- Using the volume with compose-based (stack) deployment
First remove the network storage volume created in the previous experiment:
```
# on all nodes
[root@nodeX ~]# docker volume rm smartgovolume1
smartgovolume1
```
Step 1: Write the compose file:
```
[root@node1 nginx-volume]# cat nginx-volume.yaml
version: '3.3'
services:
  cnginxsvc:
    image: 192.168.122.100/library/centos-nginx:v1
    deploy:
      mode: replicated
      replicas: 3
      restart_policy:
        condition: on-failure
    ports:
      - "81:80"
    volumes:
      - "smartgovolume1:/usr/share/nginx/html"
volumes:
  smartgovolume1:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=192.168.122.120,rw"
      device: ":/opt/dockervolume"
```
Step 2: Publish the service with docker stack:
```
[root@node1 nginx-volume]# docker stack deploy -c nginx-volume.yaml nginx-stack
Creating network nginx-stack_default
Creating service nginx-stack_cnginxsvc
[root@node1 nginx-volume]# docker stack ls
NAME          SERVICES   ORCHESTRATOR
nginx-stack   1          Swarm
[root@node1 nginx-volume]# docker stack services nginx-stack
ID             NAME                    MODE         REPLICAS   IMAGE                                      PORTS
l8t8q0my72hv   nginx-stack_cnginxsvc   replicated   3/3        192.168.122.100/library/centos-nginx:v1   *:81->80/tcp

[root@node1 ~]# docker volume ls
DRIVER    VOLUME NAME
local     1e43364d8093004c63da5db7620b0173951eb8fc6c9350fd1ae3eec975b89203
local     6d5ab4ce239c8662ef91d920697045087d1725d3292f5af23f5ff649d715333d
local     49e2b7a92e65863eea49f79cfb27821e97032abab87a8665c9e26810be9057fa
local     90b0437f701a74fa5296deac14e31cf560cc235f98b4ec2b585dcf29e4bc46ba
local     586ffd04cf12bfa1774a284247fb8b0347413a5a83c0597595629d0388c8290d
local     633c6fb7a889e30c3791adf9e0db821ed7bd1aca4464d42d7055bb7ef587a758
local     665c7d39ceb6d5b600f1ab6e49a9035def5435d0fd46edbb001dded6686ee2d3
local     a06e038abbb7fc766852f138fabfa4aafe8f0bc193bd68d4a96c3541ae755622
local     dcf83c973c4574dfec91c611e0eaae59181aa52304ac56a8f90147ff96b5c5ba
local     e7802d2eb9f16ce8aa85b0d8c2fc265c4414719ef2f5a0fe90509cd797b8474f
local     nginx-stack_smartgovolume1

[root@node2 ~]# docker volume ls
DRIVER    VOLUME NAME
local     nginx-stack_smartgovolume1

[root@node3 ~]# docker volume ls
DRIVER    VOLUME NAME
local     nginx-stack_smartgovolume1
[root@node3 ~]# ls /opt/dockervolume/
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
```
# docker stack
# What Is It
- Early on, publishing with docker service could only release one service at a time.
- A compose (YAML) file can describe multiple services, but docker-compose alone only deploys them on a single host.
- With Docker Swarm, a stack publishes multiple services across the cluster at once.
- It is the highest-level way of publishing services in the Docker ecosystem.
# Case Study
Goals:
- 1. Manage Docker hosts remotely
- 2. Monitor the containers running on the Docker hosts
The docker-compose.yaml is as follows:
```
version: "3"
services:
  nginx:
    image: nginx:alpine
    ports:
      - 80:80
    deploy:
      mode: replicated
      replicas: 4
  visualizer:
    image: dockersamples/visualizer
    ports:
      - "9001:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == manager]
  portainer:
    image: portainer/portainer
    ports:
      - "9000:9000"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == manager]
```
- Run multiple services with docker stack:
```
[root@node1 ~]# docker stack deploy -c docker-compose.yaml stack-demo
Creating network stack-demo_default
Creating service stack-demo_portainer
Creating service stack-demo_nginx
Creating service stack-demo_visualizer
```
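The individual tasks behind these services, and the node each one landed on, can be listed with:

```
# show every task in the stack and the node it runs on
docker stack ps stack-demo
```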
```
[root@node1 ~]# docker stack ls
NAME         SERVICES   ORCHESTRATOR
stack-demo   3          Swarm
```
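When the demo is no longer needed, the whole stack, including its services and the stack-demo_default network it created, can be removed in one step:

```
# remove every service and network created by this stack
docker stack rm stack-demo
```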