- Docker
- I. Basic Docker commands:
- 1. Committing images:
- 2. Container data volumes
- Using data volumes
- 3. Installing MySQL
- II. Dockerfile
- CMD:
- ENTRYPOINT:
- Hands-on: building a Tomcat image
- Publishing an image to Docker Hub
- III. Docker networking (container interconnection)
- docker0:
- tomcat01:
- tomcat02:
- ping:
- Summary:
- 1. --link (not recommended)
- 2. Custom networks
- Creating your own network
- Starting containers on a custom network
- 3. Connecting networks: docker network connect
- IV. Hands-on: deploying a Redis cluster
- V. Packaging a Spring Boot project for Docker
- VI. Docker Compose
- Overview:
- Installation:
- First run:
- Recap:
- YAML rules:
- Only three layers:
- Hands-on open-source project: a blog
- Hands-on: deploying a personal project with docker-compose
- Summary:
- VII. Docker Swarm
- Working mode:
- Building a cluster:
- The Raft consensus protocol:
- Elasticity, scaling, and clustering:
- First run:
- Comparison:
- Hands-on: starting nginx:
- Changing the port to 9999 and retrying:
- Scaling up and down with scale:
- Concept summary:
- How it works:
- Service startup nodes (global vs. worker nodes)
- Volume mounts:
- Swarm network modes:
- Docker Stack
- Docker Secret
- Docker Config
`--help` is unbeatable!!!
I. Basic Docker commands:

1. Committing images:

```shell
docker commit -m="commit message" -a="author" container-id target-image-name:[tag]
```

2. Container data volumes
What is a container data volume?

The Docker philosophy: package the application into an image!

What about data? If all the data lives inside the container, it is lost when the container is deleted! The requirement: data must be persistable.

Volume technology! Directory mounting: mount a directory inside the container onto the Linux host.

Summary: volumes give containers persistence and synchronization, and containers can also share data with each other!
Using data volumes, method one: mount directly on the command line;

```shell
docker run -it -v host-dir:container-dir -p ...
```

3. Installing MySQL
```shell
docker pull mysql:5.7
docker run -d -p 3310:3306 \
  -v /home/mysql/conf.d:/etc/mysql/conf.d \
  -v /home/mysql/data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=root \
  --name mysql01 mysql:5.7
```
Synchronizing data between two MySQL instances:
II. Dockerfile

Writing the file:
```dockerfile
FROM centos
MAINTAINER chb<1293272650@qq.com>
ENV MYPATH /usr/local
WORKDIR $MYPATH
RUN yum -y install vim
RUN yum -y install net-tools
EXPOSE 80
CMD echo $MYPATH
CMD echo "---end---"
CMD /bin/bash
```
Build the image:

```shell
docker build -f mydockerfile01 -t mycentos:0.1 .
```
CMD vs. ENTRYPOINT

CMD: extra arguments replace the command; they cannot be appended.
ENTRYPOINT: extra arguments are appended.
CMD:

```dockerfile
FROM centos
CMD ["ls","-a"]
```

```shell
docker build -f mydockerfile02 -t cmdtest .
```
```shell
[root@LinuxMain dockerfile]# docker run cmdtest
.  ..  .dockerenv  bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
```
```shell
# CMD arguments cannot be appended
[root@LinuxMain dockerfile]# docker run cmdtest -l
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-l": executable file not found in $PATH: unknown.
```
`-l` was not appended to `ls -a`; it replaced the CMD entirely. Since `-l` is not a command itself, the run fails.
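The two instructions also compose: ENTRYPOINT fixes the executable, while CMD supplies default arguments that anything after `docker run <image>` replaces. A minimal sketch (hypothetical file name `dockerfile04`):

```dockerfile
FROM centos
# The fixed executable: always ls
ENTRYPOINT ["ls"]
# Default arguments; `docker run <image> -l` replaces them with -l
CMD ["-a"]
```

With this image, `docker run <image>` runs `ls -a`, while `docker run <image> -l` runs `ls -l` instead of failing as above.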
```shell
# Only a complete command works
[root@LinuxMain dockerfile]# docker run cmdtest ls -al
total 0
drwxr-xr-x.   1 root root   6 Dec  6 06:22 .
drwxr-xr-x.   1 root root   6 Dec  6 06:22 ..
-rwxr-xr-x.   1 root root   0 Dec  6 06:22 .dockerenv
lrwxrwxrwx.   1 root root   7 Nov  3  2020 bin -> usr/bin
drwxr-xr-x.   5 root root 340 Dec  6 06:22 dev
drwxr-xr-x.   1 root root  66 Dec  6 06:22 etc
drwxr-xr-x.   2 root root   6 Nov  3  2020 home
lrwxrwxrwx.   1 root root   7 Nov  3  2020 lib -> usr/lib
lrwxrwxrwx.   1 root root   9 Nov  3  2020 lib64 -> usr/lib64
drwx------.   2 root root   6 Sep 15 14:17 lost+found
drwxr-xr-x.   2 root root   6 Nov  3  2020 media
drwxr-xr-x.   2 root root   6 Nov  3  2020 mnt
drwxr-xr-x.   2 root root   6 Nov  3  2020 opt
dr-xr-xr-x. 249 root root   0 Dec  6 06:22 proc
dr-xr-x---.   2 root root 162 Sep 15 14:17 root
drwxr-xr-x.  11 root root 163 Sep 15 14:17 run
lrwxrwxrwx.   1 root root   8 Nov  3  2020 sbin -> usr/sbin
drwxr-xr-x.   2 root root   6 Nov  3  2020 srv
dr-xr-xr-x.  13 root root   0 Dec  1 06:28 sys
drwxrwxrwt.   7 root root 171 Sep 15 14:17 tmp
drwxr-xr-x.  12 root root 144 Sep 15 14:17 usr
drwxr-xr-x.  20 root root 262 Sep 15 14:17 var
```

ENTRYPOINT:
```shell
[root@LinuxMain dockerfile]# cat dockerfile03
FROM centos
ENTRYPOINT ["ls","-a"]

# Build the image:
docker build -f dockerfile03 -t cmdtest03 .

[root@LinuxMain dockerfile]# docker run cmdtest03
.  ..  .dockerenv  bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
```
Testing with an appended argument:
```shell
[root@LinuxMain dockerfile]# docker run cmdtest03 -l
total 0
drwxr-xr-x.   1 root root   6 Dec  6 06:27 .
drwxr-xr-x.   1 root root   6 Dec  6 06:27 ..
-rwxr-xr-x.   1 root root   0 Dec  6 06:27 .dockerenv
lrwxrwxrwx.   1 root root   7 Nov  3  2020 bin -> usr/bin
drwxr-xr-x.   5 root root 340 Dec  6 06:27 dev
drwxr-xr-x.   1 root root  66 Dec  6 06:27 etc
drwxr-xr-x.   2 root root   6 Nov  3  2020 home
lrwxrwxrwx.   1 root root   7 Nov  3  2020 lib -> usr/lib
lrwxrwxrwx.   1 root root   9 Nov  3  2020 lib64 -> usr/lib64
drwx------.   2 root root   6 Sep 15 14:17 lost+found
drwxr-xr-x.   2 root root   6 Nov  3  2020 media
drwxr-xr-x.   2 root root   6 Nov  3  2020 mnt
drwxr-xr-x.   2 root root   6 Nov  3  2020 opt
dr-xr-xr-x. 251 root root   0 Dec  6 06:27 proc
dr-xr-x---.   2 root root 162 Sep 15 14:17 root
drwxr-xr-x.  11 root root 163 Sep 15 14:17 run
lrwxrwxrwx.   1 root root   8 Nov  3  2020 sbin -> usr/sbin
drwxr-xr-x.   2 root root   6 Nov  3  2020 srv
dr-xr-xr-x.  13 root root   0 Dec  1 06:28 sys
drwxrwxrwt.   7 root root 171 Sep 15 14:17 tmp
drwxr-xr-x.  12 root root 144 Sep 15 14:17 usr
drwxr-xr-x.  20 root root 262 Sep 15 14:17 var
```

Hands-on: building a Tomcat image
1. Prepare the files: the Tomcat tarball and the JDK tarball.
```shell
[root@LinuxMain tomcat]# pwd
/root/tomcat
[root@LinuxMain tomcat]# ll
total 185412
-rw-r--r--. 1 root root  11579748 Dec  6 15:15 apache-tomcat-9.0.55.tar.gz
-rw-r--r--. 1 root root 178276087 Dec  6 14:53 jdk-16.0.1_linux-x64_bin.tar.gz
```
2. Write the Dockerfile;

The official convention is to name it Dockerfile; the build command finds it automatically.
```shell
[root@LinuxMain tomcat]# cat Dockerfile
FROM centos
MAINTAINER chb
ADD apache-tomcat-9.0.55.tar.gz /usr/local/
ADD openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz /usr/local/
RUN yum -y install vim
ENV MYPATH /usr/local
WORKDIR $MYPATH
ENV JAVA_HOME /usr/local/java-se-8u41-ri
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.55
ENV CATALINA_BASH /usr/local/apache-tomcat-9.0.55
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME
EXPOSE 8080
CMD /usr/local/apache-tomcat-9.0.55/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.55/bin/logs/catalina.out
```
```shell
# Build
docker build -t mytomcat .

[root@LinuxMain tomcat]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
mytomcat     latest    0423864522ff   2 minutes ago   584MB
```
```shell
# Run
[root@LinuxMain tomcat]# docker run -d -p 9090:8080 --name chbtomcat \
  -v /home/chb/build/tomcat/test:/usr/local/apache-tomcat-9.0.55/webapps/test \
  -v /home/chb/build/tomcat/tomcatlogs/:/usr/local/apache-tomcat-9.0.55/logs/ \
  mytomcat
```

Publishing an image to Docker Hub
```shell
[root@LinuxMain tomcatlogs]# docker login --help

Usage:  docker login [OPTIONS] [SERVER]

Log in to a Docker registry.
If no server is specified, the default is defined by the daemon.

Options:
  -p, --password string   Password
      --password-stdin    Take the password from stdin
  -u, --username string   Username

[root@LinuxMain tomcatlogs]# docker login -u chenhongbao
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
```
- Publish
```shell
# A bare push fails
[root@LinuxMain home]# docker push mytomcat:latest
The push refers to repository [docker.io/library/mytomcat]
d32644301902: Preparing
a20c672f7972: Preparing
17cc5cd4234c: Preparing
74ddd0ec08fa: Preparing
denied: requested access to the resource is denied

# The REPOSITORY must be prefixed with your username
[root@LinuxMain home]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
mytomcat     latest    0423864522ff   25 minutes ago   584MB
[root@LinuxMain home]# docker tag mytomcat:latest chenhongbao/tomcat:1.0
[root@LinuxMain home]# docker image ls
REPOSITORY           TAG       IMAGE ID       CREATED          SIZE
chenhongbao/tomcat   1.0       0423864522ff   26 minutes ago   584MB
mytomcat             latest    0423864522ff   26 minutes ago   584MB
[root@LinuxMain home]# docker push chenhongbao/tomcat:1.0
# success
```

III. Docker networking (container interconnection)
Conclusion: tomcat01 and tomcat02 share the same "router": docker0.

When no network is specified, every container is attached to docker0 by default, and Docker assigns it an available IP.

Each octet ranges 0–255; address classes A, B, C.

Example notation: 255.255.0.1/16.

A subnet mask has exactly one job: splitting an IP address into a network part and a host part; written as ip/number-of-network-bits.
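The split can be seen with nothing but shell arithmetic. A toy sketch (no Docker needed, the function name `network_of` is made up here): compute the network address for an address/prefix pair, e.g. docker0's default 172.17.0.2/16.

```shell
#!/bin/sh
# Split a dotted-quad IPv4 address into network and host parts by
# applying the mask implied by the /prefix length.
network_of() {
    prefix=$2
    oldifs=$IFS; IFS=.
    set -- $1            # split the dotted quad into $1 $2 $3 $4
    IFS=$oldifs
    ip=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
    # 4294967295 is 0xFFFFFFFF: keep the top `prefix` bits, zero the rest.
    mask=$(( (4294967295 << (32 - prefix)) & 4294967295 ))
    net=$(( ip & mask ))
    printf '%d.%d.%d.%d\n' \
        $(( net >> 24 & 255 )) $(( net >> 16 & 255 )) \
        $(( net >> 8 & 255 )) $(( net & 255 ))
}

network_of 172.17.0.2 16      # prints 172.17.0.0 -- the docker0 network
network_of 192.168.5.129 24   # prints 192.168.5.0
```

So for a /16 address such as 172.17.0.2, the first two octets are the network part and the last two identify the host.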
In Docker, all network interfaces are virtual, so forwarding is efficient.

When a container is deleted, its veth pair on the bridge disappears with it.
```shell
docker run -d -P --name tomcat01 tomcat
```

docker0:

tomcat01:
```shell
# tomcat01:
10: eth0@if11: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

# On the host:
11: veth4bdb866@if10: mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether d6:36:b6:5f:64:c2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::d436:b6ff:fe5f:64c2/64 scope link
       valid_lft forever preferred_lft forever
```

tomcat02:
```shell
# tomcat02:
root@fb73a5c8661e:/usr/local/tomcat# ip addr
12: eth0@if13: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

# On the host:
13: veth68f0ea0@if12: mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 92:2d:f2:52:0f:27 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::902d:f2ff:fe52:f27/64 scope link
       valid_lft forever preferred_lft forever
```

ping:
```shell
# Host pinging a container:
[root@LinuxMain home]# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.201 ms

# Container pinging a container by IP:
root@3aeb422755f3:/usr/local/tomcat# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.095 ms
```
```shell
# Pinging by container name fails (it works after connecting with --link)
root@3aeb422755f3:/usr/local/tomcat# ping tomcat02
ping: tomcat02: Name or service not known
```

Summary:

1. --link (not recommended)
```shell
# Start tomcat03 with --link to tomcat02;
[root@LinuxMain ~]# docker run -d -P --name tomcat03 --link tomcat02 tomcat

# tomcat03 can ping tomcat02 by name; tomcat02 cannot ping tomcat03 back.
root@fb4643deef8a:/usr/local/tomcat# ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.160 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=0.058 ms
```
```shell
root@fb4643deef8a:/usr/local/tomcat# cat /etc/hosts
127.0.0.1       localhost
...
# The trick is simply an extra hosts entry for tomcat02
172.17.0.3      tomcat02 fb73a5c8661e
172.17.0.4      fb4643deef8a
```
Under the hood, --link just adds tomcat02's IP to the hosts file. What we really want is a custom network.
2. Custom networks

```shell
# List docker networks
[root@LinuxMain ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
332a671aaadc   bridge    bridge    local
966118ebdaaf   host      host      local
2304de3dc4fb   none      null      local
```
Network modes:

- bridge: bridged via docker0 (the default)
- none: no networking configured
- host: share the host's network stack
- container: share another container's network stack (rarely used)
Creating your own network

```shell
# The default is --net bridge, i.e. docker0
[root@LinuxMain ~]# docker run -d -P --name tomcat01 tomcat
[root@LinuxMain ~]# docker run -d -P --name tomcat01 --net bridge tomcat

# Create a network
# --driver bridge
# --subnet 192.168.0.0/16
# --gateway 192.168.0.1   the gateway, i.e. the router address
[root@LinuxMain ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
dd919f3545d287325d312fe9d8175e27d4a33d3d3e344f1d910bf1863054a8f3
[root@LinuxMain ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
332a671aaadc   bridge    bridge    local
966118ebdaaf   host      host      local
dd919f3545d2   mynet     bridge    local
2304de3dc4fb   none      null      local
```

Starting containers on a custom network
```shell
[root@LinuxMain ~]# docker run -d -P --name tomcat-net-01 --net mynet tomcat
b55e152ce8dd15a0acfd2c044b9444970593301e2c07b4177295269f01f3c614
[root@LinuxMain ~]# docker run -d -P --name tomcat-net-02 --net mynet tomcat
7d141ae6ff87a469c5172320434ea4eabf53c26b6f7cb158715d632ca304b344
```
```shell
[root@LinuxMain ~]# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "dd919f3545d287325d312fe9d8175e27d4a33d3d3e344f1d910bf1863054a8f3",
        "Created": "2021-12-07T10:03:16.666003359+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            # the custom network's settings
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        # the started containers were given addresses on the custom network
        "Containers": {
            "7d141ae6ff87a469c5172320434ea4eabf53c26b6f7cb158715d632ca304b344": {
                "Name": "tomcat-net-02",
                "EndpointID": "11ee024b0a70be6afdbd624a33daccb14c5d7b4233358dab4fc512de5ca0f7b4",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            },
            "b55e152ce8dd15a0acfd2c044b9444970593301e2c07b4177295269f01f3c614": {
                "Name": "tomcat-net-01",
                "EndpointID": "2dd56e2b136bcac9b48128e7ad5cffc59b18a57275ebd35252a900e1c1102b1b",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
```
- On a custom network, containers can ping each other both by IP and by container name;
- Containers on different networks cannot reach each other;
```shell
[root@LinuxMain ~]# docker exec -it tomcat01 /bin/bash
root@f7ca1363932a:/usr/local/tomcat# ping tomcat-net-01
ping: tomcat-net-01: Name or service not known
```
3. Connecting networks: docker network connect

```shell
# Connect the container to the network
[root@LinuxMain ~]# docker network connect mynet tomcat01
[root@LinuxMain ~]# docker network inspect mynet
```
```shell
[
    {
        "Name": "mynet",
        "Id": "dd919f3545d287325d312fe9d8175e27d4a33d3d3e344f1d910bf1863054a8f3",
        "Created": "2021-12-07T10:03:16.666003359+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "7d141ae6ff87a469c5172320434ea4eabf53c26b6f7cb158715d632ca304b344": {
                "Name": "tomcat-net-02",
                "EndpointID": "30c95ed341d0377adc1e132cd4f6918a94fd2f3eca6015337532b42bd27007ad",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            },
            "b55e152ce8dd15a0acfd2c044b9444970593301e2c07b4177295269f01f3c614": {
                "Name": "tomcat-net-01",
                "EndpointID": "4f732fd9d999beec6c2dd232bb25a739cc7aa912b774389d5e0608512a67df68",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            },
            "f7ca1363932abe5f6ef22d5e4dfc9bd0e23461325fcc7e192f5ea7ff378d1cdf": {
                "Name": "tomcat01",
                "EndpointID": "0f557afb1a3362f4b90c5f439658aaee5ca271eee73c2165b40c8df058585026",
                "MacAddress": "02:42:c0:a8:00:04",
                "IPv4Address": "192.168.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
```
After connecting, the tomcat01 container is attached to the mynet network as well: one container, two networks.
```shell
# Connected
[root@LinuxMain ~]# docker exec -it tomcat01 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.161 ms
```

IV. Hands-on: deploying a Redis cluster
```shell
# Create the network
[root@LinuxMain ~]# docker network create redis --subnet 172.38.0.0/16
27f5a0386991f0dcbfc7fa0e923490dc22c035576898b779e77ce14393643d4f
[root@LinuxMain ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
77300b656e61   bridge    bridge    local
966118ebdaaf   host      host      local
dd919f3545d2   mynet     bridge    local
2304de3dc4fb   none      null      local
27f5a0386991   redis     bridge    local
```
Create six Redis config files:
```shell
for port in $(seq 1 6); do
  mkdir -p /mydata/redis/node-${port}/conf
  touch /mydata/redis/node-${port}/conf/redis.conf
  cat << EOF > /mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
```
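The config-generation loop above leans on two shell features: `${port}` expansion inside the heredoc body, and `cat << EOF > file` redirecting the expanded text into each file. The same sketch can be run against a throwaway directory (no root and no `/mydata` needed; the `base` variable is made up here):

```shell
#!/bin/sh
# Generate six per-node redis.conf files in a temporary directory and
# check that the variable expanded inside the heredoc.
base=$(mktemp -d)
for port in $(seq 1 6); do
    mkdir -p "$base/node-${port}/conf"
    cat << EOF > "$base/node-${port}/conf/redis.conf"
port 6379
cluster-enabled yes
cluster-announce-ip 172.38.0.1${port}
EOF
done

cat "$base/node-3/conf/redis.conf"   # node 3 got cluster-announce-ip 172.38.0.13
```

Because EOF is unquoted, the shell substitutes `${port}` before writing, which is exactly how each node ends up announcing its own 172.38.0.1x address.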
Start the Redis containers:
```shell
[root@LinuxMain ~]# docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
  -v /mydata/redis/node-1/data:/data \
  -v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
  -d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
[root@LinuxMain ~]# docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
  -v /mydata/redis/node-2/data:/data \
  -v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
  -d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
```
```shell
# Or start all six with a loop
for port in $(seq 1 6); do
  docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
    -v /mydata/redis/node-${port}/data:/data \
    -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.1${port} redis redis-server /etc/redis/redis.conf
done
```
Create the cluster:
```shell
[root@LinuxMain conf]# docker exec -it redis-1 /bin/sh

# Create the cluster
# redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: 3c7bbd26ef973a8e79d966b08df402b80b89ee02 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: d9f35be81cb9a94e37ca1fbc92c14536417152ff 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: 1ec90d7aa4dc2b9116635d9bf419f6664a3a83fe 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 3583c6d75acf98fdc5ef40b5aa5384a64421047f 172.38.0.14:6379
   replicates 1ec90d7aa4dc2b9116635d9bf419f6664a3a83fe
S: 4576809aa462476c20956710488a2cdbb23adc68 172.38.0.15:6379
   replicates 3c7bbd26ef973a8e79d966b08df402b80b89ee02
S: 8898a4ac0330ef44a5594cb2b0a6864c61964fe9 172.38.0.16:6379
   replicates d9f35be81cb9a94e37ca1fbc92c14536417152ff
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
```
```shell
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: 3c7bbd26ef973a8e79d966b08df402b80b89ee02 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 4576809aa462476c20956710488a2cdbb23adc68 172.38.0.15:6379
   slots: (0 slots) slave
   replicates 3c7bbd26ef973a8e79d966b08df402b80b89ee02
M: d9f35be81cb9a94e37ca1fbc92c14536417152ff 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 3583c6d75acf98fdc5ef40b5aa5384a64421047f 172.38.0.14:6379
   slots: (0 slots) slave
   replicates 1ec90d7aa4dc2b9116635d9bf419f6664a3a83fe
S: 8898a4ac0330ef44a5594cb2b0a6864c61964fe9 172.38.0.16:6379
   slots: (0 slots) slave
   replicates d9f35be81cb9a94e37ca1fbc92c14536417152ff
M: 1ec90d7aa4dc2b9116635d9bf419f6664a3a83fe 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```
```shell
# redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:421
cluster_stats_messages_pong_sent:425
cluster_stats_messages_sent:846
cluster_stats_messages_ping_received:420
cluster_stats_messages_pong_received:421
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:846
127.0.0.1:6379> cluster nodes
4576809aa462476c20956710488a2cdbb23adc68 172.38.0.15:6379@16379 slave 3c7bbd26ef973a8e79d966b08df402b80b89ee02 0 1638857773000 1 connected
d9f35be81cb9a94e37ca1fbc92c14536417152ff 172.38.0.12:6379@16379 master - 0 1638857773000 2 connected 5461-10922
3583c6d75acf98fdc5ef40b5aa5384a64421047f 172.38.0.14:6379@16379 slave 1ec90d7aa4dc2b9116635d9bf419f6664a3a83fe 0 1638857773523 3 connected
3c7bbd26ef973a8e79d966b08df402b80b89ee02 172.38.0.11:6379@16379 myself,master - 0 1638857773000 1 connected 0-5460
8898a4ac0330ef44a5594cb2b0a6864c61964fe9 172.38.0.16:6379@16379 slave d9f35be81cb9a94e37ca1fbc92c14536417152ff 0 1638857773933 2 connected
1ec90d7aa4dc2b9116635d9bf419f6664a3a83fe 172.38.0.13:6379@16379 master - 0 1638857774546 3 connected 10923-16383
```
Test it:
```shell
[root@LinuxMain ~]# docker exec -it redis-1 /bin/sh
# redis-cli -c
127.0.0.1:6379> set a hahaha
-> Redirected to slot [15495] located at 172.38.0.13:6379
OK
172.38.0.13:6379> get a
"hahaha"

# Stop that node;
[root@LinuxMain ~]# docker stop redis-1
redis-1
[root@LinuxMain ~]# docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                                                                                      NAMES
f6d00af2bd62   redis   "docker-entrypoint.s…"   22 minutes ago   Up 22 minutes   0.0.0.0:6376->6379/tcp, :::6376->6379/tcp, 0.0.0.0:16376->16379/tcp, :::16376->16379/tcp   redis-6
d62fa4684a68   redis   "docker-entrypoint.s…"   22 minutes ago   Up 22 minutes   0.0.0.0:6375->6379/tcp, :::6375->6379/tcp, 0.0.0.0:16375->16379/tcp, :::16375->16379/tcp   redis-5
304b8005c0b7   redis   "docker-entrypoint.s…"   22 minutes ago   Up 22 minutes   0.0.0.0:6374->6379/tcp, :::6374->6379/tcp, 0.0.0.0:16374->16379/tcp, :::16374->16379/tcp   redis-4
72e198b0b691   redis   "docker-entrypoint.s…"   22 minutes ago   Up 22 minutes   0.0.0.0:6373->6379/tcp, :::6373->6379/tcp, 0.0.0.0:16373->16379/tcp, :::16373->16379/tcp   redis-3
bb01fefcea8d   redis   "docker-entrypoint.s…"   22 minutes ago   Up 22 minutes   0.0.0.0:6372->6379/tcp, :::6372->6379/tcp, 0.0.0.0:16372->16379/tcp, :::16372->16379/tcp   redis-2
[root@LinuxMain ~]# docker exec -it redis-2 /bin/sh
# redis-cli -c
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.38.0.13:6379
"hahaha"
```
- The cluster has achieved high availability: after its master was stopped, the replica served the key.
V. Packaging a Spring Boot project for Docker

1. Create the project:
...
2. Write the Dockerfile:
```dockerfile
FROM openjdk:16
COPY *.jar /app.jar
CMD ["--server.port=8080"]
EXPOSE 8080
ENTRYPOINT ["java","-jar","app.jar"]
```
3. Run docker build:
```shell
[root@LinuxMain javatest]# ls
demo-0.0.1-SNAPSHOT.jar  Dockerfile
[root@LinuxMain javatest]# docker build -t chbtest .
Sending build context to Docker daemon  17.49MB
Step 1/5 : FROM openjdk:16
16: Pulling from library/openjdk
58c4eaffce77: Pull complete
e6a22c806ee8: Pull complete
d14afce73328: Pull complete
Digest: sha256:bb68f084c2000c8532b1675ca7034f3922f4aa10e9c7126d29551c0ffd6dee8f
Status: Downloaded newer image for openjdk:16
 ---> d7e4c5a73ccd
Step 2/5 : COPY *.jar /app.jar
 ---> 6a2fac4eb2d2
Step 3/5 : CMD ["--server.port=8080"]
 ---> Running in 1169bf6ffa79
Removing intermediate container 1169bf6ffa79
 ---> 39a1f3210219
Step 4/5 : EXPOSE 8080
 ---> Running in 3c580c4f82d7
Removing intermediate container 3c580c4f82d7
 ---> 3c55d1700404
Step 5/5 : ENTRYPOINT ["java","-jar","app.jar"]
 ---> Running in b4177dbc2026
Removing intermediate container b4177dbc2026
 ---> 180b2a50ed94
Successfully built 180b2a50ed94
Successfully tagged chbtest:latest
[root@LinuxMain javatest]# docker image ls
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
chbtest      latest   180b2a50ed94   14 seconds ago   484MB
```
4. Run and test:
```shell
[root@LinuxMain javatest]# docker run -d -p 8080:8080 chbtest
ac0e2a23a288e32a703893024476ab890036fadab4badd09b53dffbff42a429e
[root@LinuxMain javatest]# curl localhost:8080/hello
Hello world!
```

VI. Docker Compose

Overview:
With plain Docker, dockerfile -> build -> run is a manual, one-container-at-a-time workflow.

With microservices there may be 100 services, with dependencies between them!

Docker Compose manages containers efficiently: it defines and runs multiple containers at once.
From the official docs:
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the list of features.
Compose works in all environments: production, staging, development, testing, as well as CI workflows. You can learn more about each case in Common Use Cases.
Using Compose is basically a three-step process:
- Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
- Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
- Run docker compose up and the Docker compose command starts and runs your entire app. You can alternatively run docker-compose up using the docker-compose binary.
In short: batch container orchestration.
Notes:

Compose is an open-source Docker project and must be installed separately.

A Dockerfile lets one program run anywhere: a web service, redis, mysql, nginx...
Example:
```yaml
version: "3.9"  # optional since v1.27.0
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
```
One `docker-compose up` can then start all 100 services.
Key concepts:

- Service: a container / an application.
- Project: a group of related containers.
Installation:

1. Download:
```shell
# Mirror:
sudo curl -L "https://get.daocloud.io/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Official:
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
```
2. Make the binary executable:
```shell
sudo chmod +x /usr/local/bin/docker-compose
```
3. Test:
```shell
[root@LinuxMain bin]# docker-compose version
docker-compose version 1.25.5, build 8a1c60f6
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019
```

First run:
1. The application: app.py
2. A Dockerfile that packages the application into an image
3. A docker-compose.yml file (defines the whole service: the environment, web, redis...) — the complete deployable service
4. Start it with docker-compose up
Recap:

1. Docker image + run => container
2. Dockerfile: build the image (package the service)
3. docker-compose: start the project (orchestrate multiple microservice environments)
4. Docker networking
YAML rules:

Only three top-level layers:

Layer 1:
version:  # the version
Layer 2:
services:
  service 1: web
    image
    build
    networks
  service 2: redis
Layer 3:  # other configuration: networks, volumes, global rules
volumes:
networks:
configs:
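Putting the three layers together, a minimal but complete skeleton might look like this (service and network names are hypothetical):

```yaml
version: "3.8"      # layer 1: the version
services:           # layer 2: the services
  web:
    build: .
    networks:
      - backend
  redis:
    image: redis
    networks:
      - backend
networks:           # layer 3: other configuration
  backend: {}
```

Everything a project needs lives under these three keys; `docker-compose up` reads them top to bottom and wires the services onto the declared networks.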
Hands-on open-source project: a blog

Official compose-file reference: https://docs.docker.com/compose/compose-file/compose-file-v3/#depends_on
Tutorial: https://docs.docker.com/samples/wordpress/
```shell
# Start the project
[root@LinuxMain myworkpress]# docker-compose up -d
[root@LinuxMain myworkpress]# docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                                   NAMES
96ddeabac90a   wordpress:latest   "docker-entrypoint.s…"   44 seconds ago   Up 42 seconds   0.0.0.0:8000->80/tcp, :::8000->80/tcp   myworkpress_wordpress_1
ca34d2c3a1e3   mysql:5.7          "docker-entrypoint.s…"   44 seconds ago   Up 43 seconds   3306/tcp, 33060/tcp                     myworkpress_db_1

# Stop
[root@LinuxMain myworkpress]# docker-compose down
Stopping myworkpress_wordpress_1 ... done
Stopping myworkpress_db_1        ... done
Removing myworkpress_wordpress_1 ... done
Removing myworkpress_db_1        ... done
Removing network myworkpress_default
```
The blog page:
Hands-on: deploying a personal project with docker-compose

1. Write the Java project
```java
@RequestMapping("/hello")
@RestController
public class MyController {

    @Autowired
    StringRedisTemplate stringRedisTemplate;

    @GetMapping()
    public String test() {
        Long count = stringRedisTemplate.opsForValue().increment("counts");
        return "Hello world! visit time = " + count;
    }
}
```
```properties
# application.properties
server.port=8080
# Reach redis by its docker service name -- both sit on the docker-compose network
spring.redis.host=redis
```
2. Write the Dockerfile
```dockerfile
FROM openjdk:16
COPY *.jar /app.jar
CMD ["--server.port=8080"]
EXPOSE 8080
ENTRYPOINT ["java","-jar","app.jar"]
```
3. Write docker-compose.yml
```yaml
version: "3.3"
services:
  chbapp:
    build: .
    image: chbapp
    depends_on:
      - redis
    ports:
      - "8080:8080"
  redis:
    image: redis
```
4. Copy everything to the server and run it
```shell
[root@LinuxMain chb]# ls
demo-0.0.1-SNAPSHOT.jar  docker-compose.yml  Dockerfile
[root@LinuxMain chb]# docker-compose up -d
Creating network "chb_default" with the default driver
Creating chb_redis_1  ... done
Creating chb_chbapp_1 ... done
[root@LinuxMain chb]# docker ps
CONTAINER ID   IMAGE    COMMAND                  CREATED         STATUS         PORTS                                       NAMES
1f1ad97ac308   chbapp   "java -jar app.jar -…"   3 seconds ago   Up 2 seconds   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   chb_chbapp_1
c293b57530ac   redis    "docker-entrypoint.s…"   3 seconds ago   Up 2 seconds   6379/tcp                                    chb_redis_1
```
5. Access test:
```shell
[root@LinuxMain chb]# curl localhost:8080/hello
Hello world! visit time = 1
[root@LinuxMain chb]# curl localhost:8080/hello
Hello world! visit time = 2
[root@LinuxMain chb]# curl localhost:8080/hello
Hello world! visit time = 3
```

Summary:
Project, services, containers.

A compose project has three layers:

- the project
- the services
- the running container instances -> in k8s: containers, pods
VII. Docker Swarm

Cluster deployment instead of a single machine: four machines.
Docker Engine 1.12 introduces swarm mode that enables you to create a cluster of one or more Docker Engines called a swarm. A swarm consists of one or more nodes: physical or virtual machines running Docker Engine 1.12 or later in swarm mode.
There are two types of nodes: managers and workers.
All operations are performed on the managers.
Building a cluster:

The swarm subcommands:
```shell
[root@LinuxMain ~]# docker swarm --help

Usage:  docker swarm COMMAND

Manage Swarm

Commands:
  ca          Display and rotate the root CA
  init        Initialize a swarm
  join        Join a swarm as a node and/or manager
  join-token  Manage join tokens
  leave       Leave the swarm
  unlock      Unlock swarm
  unlock-key  Manage the unlock key
  update      Update the swarm
```
docker swarm init:
```shell
[root@LinuxMain ~]# docker swarm init --help

Usage:  docker swarm init [OPTIONS]

Initialize a swarm

Options:
      ## tells the other nodes how to connect to this manager
      --advertise-addr string                  Advertised address (format: [:port])
      --autolock                               Enable manager autolocking (requiring an unlock key to start a stopped manager)
      --availability string                    Availability of the node ("active"|"pause"|"drain") (default "active")
      --cert-expiry duration                   Validity period for node certificates (ns|us|ms|s|m|h) (default 2160h0m0s)
      --data-path-addr string                  Address or interface to use for data path traffic (format: )
      --data-path-port uint32                  Port number to use for data path traffic (1024 - 49151). If no value is set or is set to 0, the default port (4789) is used.
      --default-addr-pool ipNetSlice           default address pool in CIDR format (default [])
      --default-addr-pool-mask-length uint32   default address pool subnet mask length (default 24)
      --dispatcher-heartbeat duration          Dispatcher heartbeat period (ns|us|ms|s|m|h) (default 5s)
      --external-ca external-ca                Specifications of one or more certificate signing endpoints
      --force-new-cluster                      Force create a new cluster from current state
      --listen-addr node-addr                  Listen address (format: [:port]) (default 0.0.0.0:2377)
      --max-snapshots uint                     Number of additional Raft snapshots to retain
      --snapshot-interval uint                 Number of log entries between Raft snapshots (default 10000)
      --task-history-limit int                 Task history retention limit (default 5)
```
```shell
# root@local130: the VMs are clones, so the hostnames are identical -- use the comments to tell the hosts apart.
# The firewall must also be disabled on every node:
[root@local130 ~]# systemctl stop firewalld.service
[root@local130 ~]# systemctl disable firewalld.service
```
```shell
# On host 129, create the leader node:
[root@LinuxMain ~]# docker swarm init --advertise-addr 192.168.5.129
Swarm initialized: current node (fznsl9yvbdk4i56i0pywfdh4n) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1l9bh6qb81arswpj7kz3zeptvpqtipod8afpj13tr2ypc22j28-cjsnezvr00b81z09g372ml0de 192.168.5.129:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
```shell
# Host 130 joins the leader:
[root@local130 ~]# docker swarm join --token SWMTKN-1-1l9bh6qb81arswpj7kz3zeptvpqtipod8afpj13tr2ypc22j28-cjsnezvr00b81z09g372ml0de 192.168.5.129:2377
This node joined a swarm as a worker.
```
```shell
# List the nodes on 129:
[root@LinuxMain ~]# docker node ls
ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
fznsl9yvbdk4i56i0pywfdh4n *   LinuxMain   Ready     Active         Leader           20.10.11
97cf5i4uyxxivjeg0sndn5lia     local130    Ready     Active                          20.10.11
```
```shell
# Reuse the earlier command, or regenerate the join command for a worker / manager node:
[root@LinuxMain ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1l9bh6qb81arswpj7kz3zeptvpqtipod8afpj13tr2ypc22j28-cjsnezvr00b81z09g372ml0de 192.168.5.129:2377

[root@LinuxMain ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1l9bh6qb81arswpj7kz3zeptvpqtipod8afpj13tr2ypc22j28-ctddlqrx7djk9dja0gdvhwr8l 192.168.5.129:2377
```
```shell
# Host 132 joins as a worker:
[root@local130 ~]# docker swarm join --token SWMTKN-1-1l9bh6qb81arswpj7kz3zeptvpqtipod8afpj13tr2ypc22j28-cjsnezvr00b81z09g372ml0de 192.168.5.129:2377
This node joined a swarm as a worker.

# Host 133 joins as a manager:
[root@local130 ~]# docker swarm join --token SWMTKN-1-1l9bh6qb81arswpj7kz3zeptvpqtipod8afpj13tr2ypc22j28-ctddlqrx7djk9dja0gdvhwr8l 192.168.5.129:2377
This node joined a swarm as a manager.
```
```shell
# List the nodes on 129 again:
[root@LinuxMain ~]# docker node ls
ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
fznsl9yvbdk4i56i0pywfdh4n *   LinuxMain   Ready     Active         Leader           20.10.11
97cf5i4uyxxivjeg0sndn5lia     local130    Ready     Active                          20.10.11
vup4zhj0dsbob9yhd997pxykq     local130    Ready     Active                          20.10.11
08hrffgon0nagbvka44s3xbv4     local130    Ready     Active         Reachable        20.10.11
```
The main steps:

- init creates the leader;
- the other nodes join (as manager or worker).
Two managers, two workers: if one node dies, can the others keep working?

The Raft protocol requires a majority of the manager nodes -- strictly more than half -- to stay alive for the cluster to remain usable.
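The "more than half" rule boils down to one line of arithmetic: a swarm of n managers needs a quorum of floor(n/2) + 1 reachable managers, so it tolerates n minus that many failures. A tiny sketch (the helper names `quorum` and `tolerated` are made up here):

```shell
#!/bin/sh
# Raft quorum arithmetic for a cluster of $1 managers.
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerated() { echo $(( $1 - ($1 / 2 + 1) )); }

quorum 3      # prints 2 -- with 3 managers, losing 2 kills the cluster
tolerated 3   # prints 1
tolerated 4   # prints 1 -- 4 managers tolerate no more failures than 3 do
```

This is why odd manager counts (3, 5, 7) are the usual choice: adding a fourth manager raises the quorum without raising fault tolerance.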
```shell
# Hosts: 129 130 132 133
# Stop docker on host 133, then check the cluster from 129:
[root@LinuxMain ~]# docker node ls
Error response from daemon: rpc error: code = DeadlineExceeded desc = context deadline exceeded
# Start docker on 133 again and the cluster recovers.

# Worker node 132 leaves the cluster:
[root@local130 ~]# docker swarm leave
Node left the swarm.

# Cluster state:
[root@LinuxMain ~]# docker node ls
ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
fznsl9yvbdk4i56i0pywfdh4n *   LinuxMain   Ready     Active         Leader           20.10.11
97cf5i4uyxxivjeg0sndn5lia     local130    Ready     Active                          20.10.11
vup4zhj0dsbob9yhd997pxykq     local130    Down      Active                          20.10.11
08hrffgon0nagbvka44s3xbv4     local130    Ready     Active         Reachable        20.10.11

# Rejoin 133 as a manager, reaching three manager nodes:
[root@local130 ~]# docker swarm join --token SWMTKN-1-1l9bh6qb81arswpj7kz3zeptvpqtipod8afpj13tr2ypc22j28-ctddlqrx7djk9dja0gdvhwr8l 192.168.5.129:2377
This node joined a swarm as a manager.
[root@local130 ~]# docker node ls
ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
fznsl9yvbdk4i56i0pywfdh4n     LinuxMain   Ready     Active         Leader           20.10.11
08hrffgon0nagbvka44s3xbv4     local130    Ready     Active         Reachable        20.10.11
97cf5i4uyxxivjeg0sndn5lia     local130    Ready     Active                          20.10.11
p71v9qbl8ey3mwqcmvhp8n6cp *   local130    Ready     Active         Reachable        20.10.11
vup4zhj0dsbob9yhd997pxykq     local130    Down      Active                          20.10.11

# Workers do the work; managers are where you operate the cluster.
```
Three manager nodes (test: shut one of them down, 129):
```shell
# Stop docker on manager node 129:
[root@LinuxMain ~]# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket

# From manager node 132, check the cluster: 129 is Down/Unreachable,
# but 2 of the 3 managers remain, so the cluster keeps working.
[root@local130 ~]# docker node ls
ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
fznsl9yvbdk4i56i0pywfdh4n     LinuxMain   Down      Active         Unreachable      20.10.11
08hrffgon0nagbvka44s3xbv4 *   local130    Ready     Active         Reachable        20.10.11
97cf5i4uyxxivjeg0sndn5lia     local130    Ready     Active                          20.10.11
p71v9qbl8ey3mwqcmvhp8n6cp     local130    Ready     Active         Leader           20.10.11
vup4zhj0dsbob9yhd997pxykq     local130    Down      Active                          20.10.11

# Stop one more manager: with only 1 of 3 managers left, the quorum is
# lost and the cluster becomes unusable.
[root@local130 ~]# docker node ls
Error response from daemon: rpc error: code = DeadlineExceeded desc = context deadline exceeded
```

Elasticity, scaling up/down, and clustering:
Say goodbye to `docker run`!
`docker-compose up` starts a project, but only on a single machine.
Cluster: swarm uses `docker service`.
Looking ahead: k8s has services and pods.
container => service
redis x3 => 3 containers!
Cluster: high availability, web -> redis (3 different machines).
container => service => replicas
A redis service => 10 replicas (ten redis containers running at the same time).
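The container => service => replicas idea can be tried directly. A sketch, assuming a swarm is already initialized; the service name and replica count here are arbitrary:

```shell
# Run redis as a swarm service with 3 replicas; swarm schedules one
# task (container) per replica across the cluster's nodes.
docker service create --replicas 3 --name myredis redis:5.0
docker service ps myredis    # list the 3 tasks and the nodes running them
```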
Hands-on with the `docker service` command:
```shell
[root@LinuxMain ~]# docker service --help

Usage:  docker service COMMAND

Manage services

Commands:
  create      Create a new service                          # create a service
  inspect     Display detailed information on one or more services
  logs        Fetch the logs of a service or task
  ls          List services
  ps          List the tasks of one or more services
  rm          Remove one or more services
  rollback    Revert changes to a service's configuration
  scale       Scale one or multiple replicated services     # dynamic scale up/down
  update      Update a service                              # update a service
```
Releasing a project: gray release, i.e. canary release; an upgrade does not disrupt existing usage.
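Such a rolling update can be sketched with the update flags of `docker service update`; the image tag here is an assumption:

```shell
# Sketch: replace the image one task at a time, waiting 10s between
# batches, and roll back automatically if the update fails.
docker service update \
  --image nginx:1.21 \
  --update-parallelism 1 \
  --update-delay 10s \
  --update-failure-action rollback \
  yournginx
```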
[root@LinuxMain ~]# docker service create --help
Usage: docker service create [OPTIONS] IMAGE [COMMAND] [ARG...]
Create a new service
Options:
--cap-add list Add Linux capabilities
--cap-drop list Drop Linux capabilities
--config config Specify configurations to expose to the service
--constraint list Placement constraints
--container-label list Container labels
--credential-spec credential-spec Credential spec for managed service account (Windows only)
-d, --detach Exit immediately instead of waiting for the service to converge
--dns list Set custom DNS servers
--dns-option list Set DNS options
--dns-search list Set custom DNS search domains
--endpoint-mode string Endpoint mode (vip or dnsrr) (default "vip")
--entrypoint command Overwrite the default ENTRYPOINT of the image
-e, --env list Set environment variables
--env-file list Read in a file of environment variables
--generic-resource list User defined resources
--group list Set one or more supplementary user groups for the container
--health-cmd string Command to run to check health
--health-interval duration Time between running the check (ms|s|m|h)
--health-retries int Consecutive failures needed to report unhealthy
--health-start-period duration Start period for the container to initialize before counting retries towards
unstable (ms|s|m|h)
--health-timeout duration Maximum time to allow one check to run (ms|s|m|h)
--host list Set one or more custom host-to-IP mappings (host:ip)
--hostname string Container hostname
--init Use an init inside each service container to forward signals and reap processes
--isolation string Service container isolation mode
-l, --label list Service labels
--limit-cpu decimal Limit CPUs
--limit-memory bytes Limit Memory
--limit-pids int Limit maximum number of processes (default 0 = unlimited)
--log-driver string Logging driver for service
--log-opt list Logging driver options
--max-concurrent uint Number of job tasks to run concurrently (default equal to --replicas)
--mode string Service mode (replicated, global, replicated-job, or global-job) (default
"replicated")
--mount mount Attach a filesystem mount to the service
--name string Service name
--network network Network attachments
--no-healthcheck Disable any container-specified HEALTHCHECK
--no-resolve-image Do not query the registry to resolve image digest and supported platforms
--placement-pref pref Add a placement preference
-p, --publish port Publish a port as a node port
-q, --quiet Suppress progress output
--read-only Mount the container's root filesystem as read only
--replicas uint Number of tasks
--replicas-max-per-node uint Maximum number of tasks per node (default 0 = unlimited)
--reserve-cpu decimal Reserve CPUs
--reserve-memory bytes Reserve Memory
--restart-condition string Restart when condition is met ("none"|"on-failure"|"any") (default "any")
--restart-delay duration Delay between restart attempts (ns|us|ms|s|m|h) (default 5s)
--restart-max-attempts uint Maximum number of restarts before giving up
--restart-window duration Window used to evaluate the restart policy (ns|us|ms|s|m|h)
--rollback-delay duration Delay between task rollbacks (ns|us|ms|s|m|h) (default 0s)
--rollback-failure-action string Action on rollback failure ("pause"|"continue") (default "pause")
--rollback-max-failure-ratio float Failure rate to tolerate during a rollback (default 0)
--rollback-monitor duration Duration after each task rollback to monitor for failure (ns|us|ms|s|m|h)
(default 5s)
--rollback-order string Rollback order ("start-first"|"stop-first") (default "stop-first")
--rollback-parallelism uint Maximum number of tasks rolled back simultaneously (0 to roll back all at
once) (default 1)
--secret secret Specify secrets to expose to the service
--stop-grace-period duration Time to wait before force killing a container (ns|us|ms|s|m|h) (default 10s)
--stop-signal string Signal to stop the container
--sysctl list Sysctl options
-t, --tty Allocate a pseudo-TTY
--ulimit ulimit Ulimit options (default [])
--update-delay duration Delay between updates (ns|us|ms|s|m|h) (default 0s)
--update-failure-action string Action on update failure ("pause"|"continue"|"rollback") (default "pause")
--update-max-failure-ratio float Failure rate to tolerate during an update (default 0)
--update-monitor duration Duration after each task update to monitor for failure (ns|us|ms|s|m|h)
(default 5s)
--update-order string Update order ("start-first"|"stop-first") (default "stop-first")
--update-parallelism uint Maximum number of tasks updated simultaneously (0 to update all at once)
(default 1)
-u, --user string Username or UID (format: [:])
--with-registry-auth Send registry authentication details to swarm agents
-w, --workdir string Working directory inside the container
Comparison:
- `docker run`: starts a single container; no built-in scaling.
- `docker service`: runs a service; supports scaling and rolling updates.

Hands-on: starting nginx:
[root@LinuxMain ~]# docker service create -p 6666:80 --name yournginx nginx
mcg3kdcy1k9r5wusf1cfz25yw
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
[root@LinuxMain ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
mcg3kdcy1k9r yournginx replicated 1/1 nginx:latest *:6666->80/tcp
[root@LinuxMain ~]# docker service inspect yournginx
[
{
"ID": "mcg3kdcy1k9r5wusf1cfz25yw",
"Version": {
"Index": 112
},
"CreatedAt": "2021-12-08T07:32:42.475066967Z",
"UpdatedAt": "2021-12-08T07:32:42.477638619Z",
"Spec": {
"Name": "yournginx",
"Labels": {},
"TaskTemplate": {
"ContainerSpec": {
"Image": "nginx:latest@sha256:9522864dd661dcadfd9958f9e0de192a1fdda2c162a35668ab6ac42b465f0603",
"Init": false,
"StopGracePeriod": 10000000000,
"DNSConfig": {},
"Isolation": "default"
},
"Resources": {
"Limits": {},
"Reservations": {}
},
"RestartPolicy": {
"Condition": "any",
"Delay": 5000000000,
"MaxAttempts": 0
},
"Placement": {
"Platforms": [
{
"Architecture": "amd64",
"OS": "linux"
},
{
"OS": "linux"
},
{
"OS": "linux"
},
{
"Architecture": "arm64",
"OS": "linux"
},
{
"Architecture": "386",
"OS": "linux"
},
{
"Architecture": "mips64le",
"OS": "linux"
},
{
"Architecture": "ppc64le",
"OS": "linux"
},
{
"Architecture": "s390x",
"OS": "linux"
}
]
},
"ForceUpdate": 0,
"Runtime": "container"
},
"Mode": {
"Replicated": {
"Replicas": 1
}
},
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"RollbackConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"EndpointSpec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 6666,
"PublishMode": "ingress"
}
]
}
},
"Endpoint": {
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 6666,
"PublishMode": "ingress"
}
]
},
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 6666,
"PublishMode": "ingress"
}
],
"VirtualIPs": [
{
"NetworkID": "zsr8qo981agme7hwly23zmixn",
"Addr": "10.0.0.17/24"
}
]
}
}
]
We started only one replica: swarm schedules it onto one of the 3 manager nodes or the 1 worker node.
```shell
# The task landed on manager node 129:
[root@LinuxMain ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
cff54994b69a   nginx:latest   "/docker-entrypoint.…"   11 minutes ago   Up 11 minutes   80/tcp    yournginx.1.i11l3d0of60yvz9gjvm0ds793

# Dynamic scaling: raise the service to 3 replicas
[root@LinuxMain ~]# docker service update --replicas 3 yournginx
yournginx
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
[root@LinuxMain ~]# docker service ls
ID             NAME        MODE         REPLICAS   IMAGE          PORTS
fsbsafdu860k   yournginx   replicated   3/3        nginx:latest   *:6666->80/tcp

# Where did the tasks land?
# 129
[root@LinuxMain ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
dfff9fefaca3   nginx:latest   "/docker-entrypoint.…"   37 seconds ago   Up 36 seconds   80/tcp    yournginx.3.pjqr7kvyezpxkpa7l8tdrq887
# 130
[root@local130 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS     NAMES
4ff524091da2   nginx:latest   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp    yournginx.1.vjkajiumyr0d2qyzrchm1ulu3
# 132
[root@local130 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS     NAMES
d54a886dc225   nginx:latest   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp    yournginx.2.phc20it1a3o548u786ob4v6ae
# 133 (no task here)
[root@local130 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
```

Change the published port to 9999, redo the steps, and test access:
A service started in the cluster can be reached from any node, whether or not that node is running one of the service's containers;
nodes without a local container can still serve requests:
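This routing-mesh behavior can be checked from outside the cluster. A sketch using the node IPs from this setup and the port 6666 published earlier:

```shell
# Every node answers on the published port, even nodes that do not run
# a task; swarm's ingress network forwards the request to a live replica.
for ip in 192.168.5.129 192.168.5.130 192.168.5.132 192.168.5.133; do
  curl -s -o /dev/null -w "%{http_code}  $ip\n" "http://$ip:6666"
done
```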
```shell
# Dynamically scale up to 10 replicas:
[root@LinuxMain ~]# docker service update --replicas 10 yournginx
# Dynamically scale back down to 1:
[root@LinuxMain ~]# docker service update --replicas 1 yournginx
yournginx
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
[root@LinuxMain ~]# docker service ls
ID             NAME        MODE         REPLICAS   IMAGE          PORTS
zjihy233oqep   yournginx   replicated   1/1        nginx:latest   *:9999->80/tcp
```

Scaling up and down with `scale`:
```shell
[root@LinuxMain ~]# docker service scale yournginx=5
yournginx scaled to 5
overall progress: 5 out of 5 tasks
1/5: running   [==================================================>]
2/5: running   [==================================================>]
3/5: running   [==================================================>]
4/5: running   [==================================================>]
5/5: running   [==================================================>]
verify: Service converged
[root@LinuxMain ~]# docker service ls
ID             NAME        MODE         REPLICAS   IMAGE          PORTS
zjihy233oqep   yournginx   replicated   5/5        nginx:latest   *:9999->80/tcp
[root@LinuxMain ~]# docker service scale yournginx=1
yournginx scaled to 1
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
[root@LinuxMain ~]# docker service ls
ID             NAME        MODE         REPLICAS   IMAGE          PORTS
zjihy233oqep   yournginx   replicated   1/1        nginx:latest   *:9999->80/tcp
```

Concept summary:
swarm:
Cluster management and orchestration. Docker can initialize a swarm cluster that other nodes then join (as managers or workers).
node:
A single docker node. Multiple nodes form the cluster network (managers and workers).
service:
The work to be run, scheduled onto manager or worker nodes. The core unit; it handles user access, internal access, and so on.
Task:
The command running inside a container; the smallest, most granular unit of work.
How it works: when we run a command such as `create`, the dispatcher and the scheduler pick nodes for the tasks, and the chosen nodes then execute them.
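The scheduler's placement decisions can be observed per task. A sketch, assuming the `yournginx` service created earlier:

```shell
# Show each task, the node the scheduler assigned it to, and its state.
docker service ps yournginx \
  --format "{{.Name}} -> {{.Node}} ({{.CurrentState}})"
```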
Service launch nodes (global vs. replicated):

```shell
docker service create --mode global -p 9876:80 --name mynginx nginx
docker service create --mode replicated -p 9876:80 --name mynginx nginx
```

Swarm can flexibly adjust the number of container replicas with `--replicas` while a service is being created or is already running; the internal scheduler starts and stops containers on different nodes according to the cluster's current resource usage. This is the default `replicated` mode. In this mode the number of replicas per node varies; in general, nodes with more resources run more replicas, and vice versa. Besides replicated mode, a service also offers a `global` mode, which forces exactly one replica (and at most one) onto every node.

Data volume mounts:
--mount:
[root@LinuxMain ~]# docker service create --mode global -p 9877:80 --mount type=volume,src=my-vol,dst=/var/ --name mynginxglobal nginx
[root@LinuxMain ~]# docker inspect 6af9c9ff5101
[
{
"Id": "6af9c9ff510168437b54080e5d76500443e6524dd7fcbd42adf52f29be5a71db",
"Created": "2021-12-08T09:36:10.916156059Z",
"Path": "/docker-entrypoint.sh",
"Args": [
"nginx",
"-g",
"daemon off;"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 74526,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-12-08T09:36:12.361054408Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:f652ca386ed135a4cbe356333e08ef0816f81b2ac8d0619af01e2b256837ed3e",
......
# 挂载信息:
"Mounts": [
{
"Type": "volume",
"Source": "my-vol",
"Target": "/var/"
}
],
......
"Init": false
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/4bad11c7421a7a8efc04f9587f9d4888a4ddb1b3b846198153c10d873a09930f-init/diff:/var/lib/docker/overlay2/86f9e85e4ecb2cd6d099b090a4d8faa2c362c196c5cc3f5530508b7f0fb0152b/diff:/var/lib/docker/overlay2/141455663d36a81773084f483ea04f25e0d6565b67cfe1d65114c17f78fd3d3f/diff:/var/lib/docker/overlay2/77f0ca050eed8c73f168030f266188bea06f406056016c04e64c35cd2b29eeec/diff:/var/lib/docker/overlay2/8a628efd41a58572af93874f3d9905d34646382580cea9ddcedd2c6458b3d5c7/diff:/var/lib/docker/overlay2/1d6bb9fcafed5e2369165280f294cf34ec7833805850f84fcb353cbe225bd909/diff:/var/lib/docker/overlay2/6cd08b6ccf00da0eda34928f11486b21f5ef2271851329dd006b00e638e3d04d/diff",
"MergedDir": "/var/lib/docker/overlay2/4bad11c7421a7a8efc04f9587f9d4888a4ddb1b3b846198153c10d873a09930f/merged",
"UpperDir": "/var/lib/docker/overlay2/4bad11c7421a7a8efc04f9587f9d4888a4ddb1b3b846198153c10d873a09930f/diff",
"WorkDir": "/var/lib/docker/overlay2/4bad11c7421a7a8efc04f9587f9d4888a4ddb1b3b846198153c10d873a09930f/work"
},
"Name": "overlay2"
},
# 挂载信息:
"Mounts": [
{
"Type": "volume",
"Name": "my-vol",
"Source": "/var/lib/docker/volumes/my-vol/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "6af9c9ff5101",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"80/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NGINX_VERSION=1.21.4",
"NJS_VERSION=0.7.0",
"PKG_RELEASE=1~bullseye"
],
"Cmd": [
"nginx",
"-g",
"daemon off;"
],
"Image": "nginx:latest@sha256:9522864dd661dcadfd9958f9e0de192a1fdda2c162a35668ab6ac42b465f0603",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/docker-entrypoint.sh"
],
......
}
]
Swarm network modes:
Overlay: overlay networking builds a virtual network on top of the existing physical network; upper-layer applications only deal with the virtual network.
ingress: a special overlay network; it provides the routing mesh that forwards published ports to live replicas.
[root@LinuxMain ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
b724b7053819 bridge bridge local
b39e2b2a7b12 chb_default bridge local
48e06692a6d7 docker_gwbridge bridge local
966118ebdaaf host host local
zsr8qo981agm ingress overlay swarm
dd919f3545d2 mynet bridge local
2304de3dc4fb none null local
27f5a0386991 redis bridge local
[root@LinuxMain ~]# docker network inspect ingress
[
{
"Name": "ingress",
"Id": "zsr8qo981agme7hwly23zmixn",
.....
"Peers": [
{
"Name": "b35c90995694",
"IP": "192.168.5.129"
},
{
"Name": "6828a6cbea6b",
"IP": "192.168.5.130"
},
{
"Name": "4425c74d56d8",
"IP": "192.168.5.132"
},
{
"Name": "1b0955df803f",
"IP": "192.168.5.133"
}
]
}
]
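A user-defined overlay network can be sketched as follows; the network and service names here are hypothetical:

```shell
# Create an attachable overlay network and put two services on it.
docker network create -d overlay --attachable my-overlay
docker service create --network my-overlay --name web nginx
docker service create --network my-overlay --name api nginx
# Tasks of `web` can reach `api` by service name via swarm's built-in DNS.
```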
Docker Stack
```shell
# single-machine deployment:
docker-compose -f wordpress.yaml up -d
# cluster deployment (stack deploy takes -c and a stack name):
docker stack deploy -c wordpress.yaml wordpress
```
[root@LinuxMain ~]# docker stack --help
Usage: docker stack [OPTIONS] COMMAND
Options:
--orchestrator string Orchestrator to use (swarm|kubernetes|all)
Commands:
deploy Deploy a new stack or update an existing stack
ls List stacks
ps List the tasks in the stack
rm Remove one or more stacks
services List the services in the stack
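A minimal stack file for the deploy command above might look like this; a sketch in which the images, ports, and replica count are assumptions:

```yaml
version: "3.8"
services:
  wordpress:
    image: wordpress:5.8
    ports:
      - "8080:80"
    deploy:
      replicas: 3        # swarm-only section, ignored by plain docker-compose
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root
```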
Docker Secret
Security certificates and passwords;
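A typical flow, as a sketch: swarm mounts each secret at `/run/secrets/<name>` inside the task; the secret and service names here are hypothetical:

```shell
# Create a secret from stdin, then expose it to a mysql service.
echo "S3cr3t!" | docker secret create db_password -
docker service create --name db \
  --secret db_password \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_password \
  mysql:5.7
```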
```shell
[root@LinuxMain ~]# docker secret --help

Usage:  docker secret COMMAND

Manage Docker secrets

Commands:
  create      Create a secret from a file or STDIN as content
  inspect     Display detailed information on one or more secrets
  ls          List secrets
  rm          Remove one or more secrets
```

Docker Config
Configuration.
```shell
[root@LinuxMain ~]# docker config --help

Usage:  docker config COMMAND

Manage Docker configs

Commands:
  create      Create a config from a file or STDIN
  inspect     Display detailed information on one or more configs
  ls          List configs
  rm          Remove one or more configs
```
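Usage mirrors secrets, except configs are not encrypted and can be mounted at any path. A sketch; the config, file, and service names are hypothetical:

```shell
# Store an nginx config in the swarm and mount it into a service.
docker config create my_nginx_conf ./nginx.conf
docker service create --name web \
  --config source=my_nginx_conf,target=/etc/nginx/conf.d/default.conf \
  nginx
```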
TODO: k8s



