Part 1: Setting Up and Trying Out a Docker Swarm Cluster
Docker Swarm Setup
1. OS Configuration
| Step 1 | Disable SELinux and firewalld |
| Step 2 | Network configuration |
| Step 3 | [root@vm1 ~]# ip -br a | grep enp0s8 | awk '{print $3}' 192.168.50.100/24 |
| Step 4 | [root@vm2 ~]# ip -br a | grep enp0s8 | awk '{print $3}' 192.168.50.120/24 |
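Step 1 above is only named, not shown. A dry-run sketch of the usual commands follows (these are assumed, since the document does not list them; each is prefixed with `echo` so the sketch only prints what would run — drop the `echo` to execute as root on a real CentOS host):

```shell
# Dry-run sketch of "disable SELinux and firewalld" (assumed commands).
echo setenforce 0
echo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
echo systemctl disable --now firewalld
```

The `sed` line makes the SELinux change survive reboots, while `setenforce 0` applies it immediately.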
2. Install Docker
| Step 1 | [root@vm1 ~]# cat install-docker.sh |
| Step 2 | yum remove docker* -y |
| Step 3 | rm -rf /var/lib/docker |
| Step 4 | yum -y install wget |
| Step 5 | wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo |
| Step 6 | yum install docker-ce docker-ce-cli containerd.io -y |
| Step 7 | docker --version |
| Step 8 | systemctl enable docker --now |
| Step 9 | docker run hello-world |
| Step 10 | [root@vm1 ~]# bash install-docker.sh |
| Step 11 | [root@vm1 ~]# curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose |
| Step 12 | [root@vm1 ~]# chmod +x /usr/local/bin/docker-compose |
| Step 13 | [root@vm1 ~]# docker -v Docker version 20.10.12, build e91ed57 |
| Step 14 | [root@vm1 ~]# docker-compose -v docker-compose version 1.29.2, build 5becea4c |
| Step 15 | [root@vm2 ~]# docker -v Docker version 20.10.12, build e91ed57 |
| Step 16 | [root@vm2 ~]# docker-compose -v docker-compose version 1.29.2, build 5becea4c |
3. Configure the docker0 Network
| Step 1 | [root@vm1 ~]# docker network inspect bridge -f "{{.IPAM.Config}}" |
| Step 2 | [{192.168.80.0/24 192.168.80.1 map[]}] |
| Step 3 | [root@vm2 ~]# docker network inspect bridge -f "{{.IPAM.Config}}" |
| Step 4 | [{192.168.90.0/24 192.168.90.1 map[]}] |
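The non-default bridge subnets above (192.168.80.0/24 on vm1, 192.168.90.0/24 on vm2) are usually set through the `bip` key in `/etc/docker/daemon.json`; the document does not show that step, so the file below is an assumption. The sketch writes to `/tmp` so it is safe to run anywhere:

```shell
# Hypothetical daemon.json for vm1 matching the subnet shown above; on a real
# host this goes to /etc/docker/daemon.json, followed by a docker restart.
cat <<'EOF' > /tmp/daemon.json
{ "bip": "192.168.80.1/24" }
EOF
cat /tmp/daemon.json
```

vm2 would use `"bip": "192.168.90.1/24"`; apply with `systemctl restart docker`.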
Building the Swarm Cluster
1. Initialization
| Step 1 | [root@vm1 ~]# docker swarm init --advertise-addr 192.168.50.100 |
| Step 2 | Swarm initialized: current node (kdcrkd6sqteevq9jgy70fd0h0) is now a manager. |
| Step 3 | To add a worker to this swarm, run the following command: docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377 |
| Step 4 | To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions. |
| Step 5 | [root@vm1 ~]# docker swarm join-token worker |
| Step 6 | To add a worker to this swarm, run the following command: docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377 |
| Step 7 | [root@vm1 ~]# docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION kdcrkd6sqteevq9jgy70fd0h0* vm1 Ready Active Leader 20.10.12 |
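For automation, `docker swarm join-token -q worker` prints only the token itself. The sketch below instead parses a saved copy of the Step 6 output, so it runs without a live daemon:

```shell
# Extract the join token from captured `docker swarm join-token worker` output.
sample='docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377'
token=$(printf '%s\n' "$sample" | grep -o 'SWMTKN-[^ ]*')
echo "$token"
```

The captured `$token` can then be fed to `docker swarm join --token "$token" 192.168.50.100:2377` on each worker.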
2. Add a Worker to the Swarm Cluster
| Step 1 | [root@vm2 ~]# docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377 This node joined a swarm as a worker. |
View the cluster nodes
| Step 1 | [root@vm2 ~]# docker node ls Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager. |
| Step 2 | ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12 4hh92oj2meotbi0etnje15bzq vm2 Ready Active 20.10.12 |
3. Add Labels
(1) Update Labels
| Step 1 | [root@vm1 ~]# docker node update --label-add name=swarm-master-1 vm1 vm1 |
| Step 2 | [root@vm1 ~]# docker node update --label-add name=swarm-master-2 vm2 vm2 |
| Step 3 | [root@vm1 ~]# docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12 4hh92oj2meotbi0etnje15bzq vm2 Ready Active 20.10.12 |
(2) View Labels
| Step 1 | [root@vm1 ~]# docker node inspect vm1 -f "{{.Spec.Labels}}" map[name:swarm-master-1] |
| Step 2 | [root@vm1 ~]# docker node inspect vm2 -f "{{.Spec.Labels}}" map[HOSTNAME:master-2 name:master-2] |
(3) Add Labels
| Step 1 | [root@vm1 ~]# docker node update --help Usage: docker node update [OPTIONS] NODE |
| Step 2 | Update a node Options: --availability string Availability of the node ("active"|"pause"|"drain") --label-add list Add or update a node label (key=value) --label-rm list Remove a node label if exists --role string Role of the node ("worker"|"manager") |
| Step 3 | [root@vm1 ~]# docker node update --label-add name=master-2 vm2 vm2 |
| Step 4 | [root@vm1 ~]# echo $? 0 |
| Step 5 | [root@vm1 ~]# |
| Step 6 | [root@vm1 ~]# docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12 4hh92oj2meotbi0etnje15bzq vm2 Ready Active 20.10.12 |
| Step 7 | [root@vm1 ~]# docker node promote master-2 Error: No such node: master-2 |
| Step 8 | [root@vm1 ~]# docker node update --label-add HOSTNAME=master-2 vm2 vm2 |
| Step 9 | [root@vm1 ~]# docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12 4hh92oj2meotbi0etnje15bzq vm2 Ready Active 20.10.12 |
| Step 10 | [root@vm1 ~]# docker node promote master-2 Error: No such node: master-2 |
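The `echo $?` check in Step 4 reads the exit status of the immediately preceding command; 0 means success, non-zero means failure. A minimal standalone illustration:

```shell
true;  echo $?   # prints 0, just as the successful label update above did
false; echo $?   # prints 1: a failing command sets a non-zero status
```

Note also that `docker node promote` takes a node ID or hostname, which is why `promote master-2` fails above: `master-2` is only a label value, not the node's hostname (`vm2`).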
4. Promote the Worker to Master
| Step 1 | [root@vm1 ~]# docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12 4hh92oj2meotbi0etnje15bzq vm2 Ready Active Reachable 20.10.12 |
| Step 2 | [root@vm2 ~]# docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION kdcrkd6sqteevq9jgy70fd0h0 vm1 Ready Active Leader 20.10.12 4hh92oj2meotbi0etnje15bzq * vm2 Ready Active Reachable 20.10.12 |
5. View Node Information
| Step 1 | [root@vm1 ~]# docker node inspect vm2 -f "{{.Spec.Labels}}" map[HOSTNAME:master-2 name:master-2] |
| Step 2 | [root@vm1 ~]# docker node inspect vm2 |
6. Create a Network
| Step 1 | [root@vm1 ~]# docker network create -d overlay --subnet=192.168.82.0/24 --gateway=192.168.82.1 --attachable swarm-net xywzrf7ftwenaxbu0zmewh183 |
| Step 2 | [root@vm1 ~]# docker network inspect swarm-net -f "{{.IPAM}}" {default map[] [{192.168.82.0/24 192.168.82.1 map[]}]} |
7. Create a Service and Verify It
(1) Create
| Step 1 | [root@vm1 ~]# docker service create --replicas 3 -p 10080:80 --network swarm-net --name nginx-cluster nginx r4v6w094yxl370bynyzghh37a overall progress: 3 out of 3 tasks 1/3: running [==================================================>] 2/3: running [==================================================>] 3/3: running [==================================================>] verify: Service converged |
| Step 2 | [root@vm1 ~]# |
(2) View
| Step 1 | [root@vm1 ~]# docker service ls ID NAME MODE REPLICAS IMAGE PORTS r4v6w094yxl3 nginx-cluster replicated 3/3 nginx:latest *:10080->80/tcp |
| Step 2 | [root@vm1 ~]# ss -ntl | grep 10080 LISTEN 0 128 *:10080 *:* |
| Step 3 | [root@vm1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3ddd0e479de6 nginx:latest "/docker-entrypoint.…" 7 minutes ago Up 7 minutes 80/tcp nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l |
| Step 4 | [root@vm1 ~]# docker port nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l |
| Step 5 | [root@vm1 ~]# echo $? 0 |
| Step 6 | [root@vm1 ~]# |
(3) Access
| Step 1 | [root@vm1 ~]# curl 192.168.50.100:10080 ... |
8. Check the Load-Balancing Distribution on the Same Host
(1) Modify the Default Web Page
| Step 1 | [root@vm1 ~]# docker exec -it nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l bash root@3ddd0e479de6:/# echo '#1 in master 1' > /usr/share/nginx/html/index.html |
| Step 2 | [root@vm2 ~]# docker exec -it nginx-cluster.2.gp7szi6348r0gfakr6v1i42ga bash root@0d6709372322:/# echo '#2 in master 2' > /usr/share/nginx/html/index.html root@0d6709372322:/# |
| Step 3 | [root@vm2 ~]# docker exec -it nginx-cluster.3.yofvioldzci3k4geve7lykyrs bash root@6b1e246bdc34:/# echo '#3 in master 2' > /usr/share/nginx/html/index.html |
(2) Access Test
| Step 1 | [root@vm2 ~]# curl 192.168.50.120:10080 #3 in master 2 |
| Step 2 | [root@vm2 ~]# curl 192.168.50.120:10080 #1 in master 1 |
| Step 3 | [root@vm2 ~]# curl 192.168.50.120:10080 #2 in master 2 |
| Step 4 | [root@vm2 ~]# curl 192.168.50.120:10080 #3 in master 2 |
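The rotation seen above is the ingress routing mesh distributing successive connections round-robin across the three replicas (which replica answers first depends on connection state, hence the run starting at `#3`). A toy simulation of that selection logic:

```shell
# Toy model of round-robin selection over the three replica pages set above;
# real traffic may start at any replica.
pages="#1 in master 1
#2 in master 2
#3 in master 2"
for i in 1 2 3 4; do
  n=$(( (i - 1) % 3 + 1 ))        # cycle 1, 2, 3, 1, ...
  echo "$pages" | sed -n "${n}p"  # print the selected replica's page
done
```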
9. Verify HA
| Step 1 | [root@vm2 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 6b1e246bdc34 nginx:latest "/docker-entrypoint.…" 18 minutes ago Up 18 minutes 80/tcp nginx-cluster.3.yofvioldzci3k4geve7lykyrs 0d6709372322 nginx:latest "/docker-entrypoint.…" 18 minutes ago Up 18 minutes 80/tcp nginx-cluster.2.gp7szi6348r0gfakr6v1i42ga |
| Step 2 | [root@vm2 ~]# |
| Step 3 | [root@vm2 ~]# systemctl stop docker Warning: Stopping docker.service, but it can still be activated by: docker.socket |
| Step 4 | [root@vm2 ~]# systemctl status docker ● docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled) |
| Step 5 | [root@vm1 ~]# docker node ls Error response from daemon: rpc error: code = DeadlineExceeded desc = context deadline exceeded |
| Step 6 | [root@vm2 ~]# systemctl status docker ● docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled) Active: active (running) since Sun 2022-01-23 13:22:38 JST; 28s ago |
| Step 7 | [root@vm1 ~]# docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12 4hh92oj2meotbi0etnje15bzq vm2 Ready Active Reachable 20.10.12 |
| Step 8 | [root@vm2 ~]# shutdown -h now |
| Step 9 | [root@vm1 ~]# docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12 4hh92oj2meotbi0etnje15bzq vm2 Ready Active Unreachable 20.10.12 |
| Step 10 | [root@vm1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES acc40b31df3f nginx:latest "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 80/tcp nginx-cluster.3.ei48wkjvtmn53mfsjthlb52ef d4635d8f2322 nginx:latest "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 80/tcp nginx-cluster.2.yyc2fmh73p23adbu44auuzf7r 3ddd0e479de6 nginx:latest "/docker-entrypoint.…" 31 minutes ago Up 31 minutes 80/tcp nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l |
| Step 11 | [root@vm1 ~]# docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12 4hh92oj2meotbi0etnje15bzq vm2 Ready Active Reachable 20.10.12 lp4i21pj0sij9yz81f7u8dzy7 vm3 Ready Active 20.10.12 |
| Step 12 | [root@vm1 ~]# ip -br a | grep enp0s8 | awk '{print $3}' 192.168.50.100/24 |
| Step 13 | [root@vm2 ~]# ip -br a | grep enp0s8 | awk '{print $3}' 192.168.50.120/24 |
| Step 14 | [root@vm3 ~]# ip -br a | grep enp0s8 | awk '{print $3}' 192.168.50.130/24 |
| Step 15 | [root@vm1 ~]# docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION kdcrkd6sqteevq9jgy70fd0h0 * vm1 Ready Active Leader 20.10.12 4hh92oj2meotbi0etnje15bzq vm2 Ready Active Reachable 20.10.12 lp4i21pj0sij9yz81f7u8dzy7 vm3 Ready Active 20.10.12 |
| Step 16 | [root@vm1 ~]# docker service rm r4v6w094yxl3 r4v6w094yxl3 |
| Step 17 | [root@vm1 ~]# docker service create --replicas 6 -p 10080:80 --network swarm-net --name nginx-cluster nginx kzdm5zhgt1eo9goxy0rjwmklm overall progress: 6 out of 6 tasks 1/6: running [==================================================>] 2/6: running [==================================================>] 3/6: running [==================================================>] 4/6: running [==================================================>] 5/6: running [==================================================>] 6/6: running [==================================================>] verify: Service converged |
| Step 18 | [root@vm1 ~]# |
| Step 19 | [root@vm1 ~]# docker service ls ID NAME MODE REPLICAS IMAGE PORTS kzdm5zhgt1eo nginx-cluster replicated 6/6 nginx:latest *:10080->80/tcp |
| Step 20 | [root@vm3 ~]# docker service ls Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager. |
| Step 21 | [root@vm1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 31177737f781 nginx:latest "/docker-entrypoint.…" 28 seconds ago Up 25 seconds 80/tcp nginx-cluster.5.9a7wd7yqo4lweayw3352kssru 791b93b799d8 nginx:latest "/docker-entrypoint.…" 28 seconds ago Up 25 seconds 80/tcp nginx-cluster.2.bcrt9chkfrmbjyy30n4g3il71 |
| Step 22 | [root@vm2 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 4a59afd4d2d2 nginx:latest "/docker-entrypoint.…" 3 seconds ago Up 2 seconds 80/tcp nginx-cluster.1.c46oo5nzbchhbn6v07rft9d5b bac746d67e4f nginx:latest "/docker-entrypoint.…" 4 seconds ago Up 2 seconds 80/tcp nginx-cluster.4.71wn1fz6yojbzunktws2qqslg |
| Step 23 | [root@vm3 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES f0985bc21d07 nginx:latest "/docker-entrypoint.…" 7 seconds ago Up 5 seconds 80/tcp nginx-cluster.6.vvykeb5g04gihc5jg4la26903 d14c20577a85 nginx:latest "/docker-entrypoint.…" 7 seconds ago Up 5 seconds 80/tcp nginx-cluster.3.qwccv5u0jlwfp5txp5593d8cc |
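Steps 16–17 resize the service by removing and recreating it; `docker service scale` changes the replica count in place instead. A dry-run sketch (the `run` helper here only prints each command — replace the `echo` with direct execution on a manager node):

```shell
# In-place resize instead of rm + create; `run` echoes rather than executes.
run() { echo "+ $*"; }
run docker service scale nginx-cluster=6
run docker service ps nginx-cluster
```

Scaling in place keeps the service ID and published port, so existing clients are not disrupted the way a delete-and-recreate is.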
10. Delete a Container
| Step 1 | [root@vm1 ~]# docker rm $(docker stop nginx-cluster-2.2.qs4tlxp1ilbtyd95xhkusjg48) nginx-cluster-2.2.qs4tlxp1ilbtyd95xhkusjg48 |
| Step 2 | [root@vm2 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 719e0941efa4 nginx:latest "/docker-entrypoint.…" 20 seconds ago Up 15 seconds 80/tcp nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688 |
11. Delete the Service
| Step 1 | [root@vm1 ~]# docker service --help Usage: docker service COMMAND Manage services Commands: create Create a new service inspect Display detailed information on one or more services logs Fetch the logs of a service or task ls List services ps List the tasks of one or more services rm Remove one or more services rollback Revert changes to a service's configuration scale Scale one or multiple replicated services update Update a service |
| Step 2 | [root@vm1 ~]# docker service rm nginx-cluster nginx-cluster |
| Step 3 | [root@vm1 ~]# docker service ls ID NAME MODE REPLICAS IMAGE PORTS sfcq2vc5orxs nginx-cluster-2 replicated 2/2 nginx:latest *:10081->80/tcp |
| Step 4 | [root@vm1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES |
| Step 5 | [root@vm1 ~]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES |
| Step 6 | [root@vm1 ~]# |
| Step 7 | [root@vm2 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 719e0941efa4 nginx:latest "/docker-entrypoint.…" 2 minutes ago Up About a minute 80/tcp nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688 |
| Step 8 | [root@vm2 ~]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 719e0941efa4 nginx:latest "/docker-entrypoint.…" 2 minutes ago Up About a minute 80/tcp nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688 |
12. Manually Stop a Container
| Step 1 | [root@vm2 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 719e0941efa4 nginx:latest "/docker-entrypoint.…" 4 minutes ago Up 4 minutes 80/tcp nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688 |
| Step 2 | [root@vm2 ~]# docker stop nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688 nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688 |
| Step 3 | [root@vm2 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES |
| Step 4 | [root@vm2 ~]# |
| Step 5 | [root@vm2 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 598c665b7a8e nginx:latest "/docker-entrypoint.…" 19 seconds ago Up 13 seconds 80/tcp nginx-cluster-2.2.qmgbury0aixey2sozive6377l |
| Step 6 | [root@vm2 ~]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 598c665b7a8e nginx:latest "/docker-entrypoint.…" 24 seconds ago Up 18 seconds 80/tcp nginx-cluster-2.2.qmgbury0aixey2sozive6377l 719e0941efa4 nginx:latest "/docker-entrypoint.…" 5 minutes ago Exited (0) 24 seconds ago nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688 |
| Step 7 | [root@vm2 ~]# docker service ls ID NAME MODE REPLICAS IMAGE PORTS sfcq2vc5orxs nginx-cluster-2 replicated 2/2 nginx:latest *:10081->80/tcp |
| Step 8 | [root@vm2 ~]# docker start nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688 nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688 |
| Step 9 | [root@vm2 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 598c665b7a8e nginx:latest "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 80/tcp nginx-cluster-2.2.qmgbury0aixey2sozive6377l 719e0941efa4 nginx:latest "/docker-entrypoint.…" 7 minutes ago Up 1 second 80/tcp nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688 |
| Step 10 | [root@vm2 ~]# docker service ls ID NAME MODE REPLICAS IMAGE PORTS sfcq2vc5orxs nginx-cluster-2 replicated 2/2 nginx:latest *:10081->80/tcp |
| Step 11 | [root@vm2 ~]# docker service rm $(docker service ls -q) sfcq2vc5orxs |
| Step 12 | [root@vm2 ~]# docker service ls ID NAME MODE REPLICAS IMAGE PORTS |
| Step 13 | [root@vm2 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES |
| Step 14 | [root@vm2 ~]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES |
| Step 15 | [root@vm2 ~]# |
13. Leave the Swarm
| Step 1 | [root@vm3 ~]# docker swarm leave Error response from daemon: You are attempting to leave the swarm on a node that is participating as a manager. The only way to restore a swarm that has lost consensus is to reinitialize it with `--force-new-cluster`. Use `--force` to suppress this message. |
| Step 2 | [root@vm3 ~]# docker swarm leave --force |
| Step 3 | [root@vm3 ~]# docker node ls Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again. |
14. Delete the Swarm Cluster
| Step 1 | [root@vm1 ~]# docker swarm leave --force Node left the swarm. |
| Step 2 | [root@vm1 ~]# docker node ls Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again. |
Part 2: Setting Up and Trying Out a Kubernetes Cluster
1. Install Docker
| Step 1 | wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo |
| Step 2 | yum install -y docker-ce-18.06.0.ce-3.el7.x86_64 |
| Step 3 | systemctl start docker.service |
| Step 4 | systemctl enable docker.service |
2. Install Kubernetes
| Step 1 | cat <<EOF > /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/ enabled=1 gpgcheck=1 repo_gpgcheck=1 gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg EOF |
| Step 2 | yum install -y kubelet-1.12.3 yum install -y kubeadm-1.12.3 yum install -y kubectl-1.12.3 |
3. Obtain the Images
| Step 1 | docker save -o k8s-1.12.3.tar k8s.gcr.io/kube-proxy:v1.12.3 k8s.gcr.io/kube-apiserver:v1.12.3 k8s.gcr.io/kube-controller-manager:v1.12.3 k8s.gcr.io/kube-scheduler:v1.12.3 k8s.gcr.io/etcd:3.2.24 k8s.gcr.io/coredns:1.2.2 quay.io/coreos/flannel:v0.10.0-amd64 k8s.gcr.io/pause:3.1 |
| Step 2 | docker load -i k8s-1.12.3.tar |
4. Disable Swap on the Nodes
| Step 1 | swapoff -a |
| Step 2 | sysctl -p |
| Step 3 | vim /etc/fstab |
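Step 3 edits fstab by hand to keep swap off after reboot; the swap entry can also be commented out non-interactively. Sketch on a throwaway copy (the sample entries are hypothetical — on a real node the target is `/etc/fstab`):

```shell
# Comment out any swap entries in a copy of fstab.
printf '%s\n' \
  'UUID=1234-abcd /    xfs  defaults 0 0' \
  '/dev/mapper/cl-swap swap swap defaults 0 0' > /tmp/fstab.demo
sed -i '/[[:space:]]swap[[:space:]]/s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

Only the swap line gains a leading `#`; other mounts are untouched.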
5. Enable IP Forwarding and iptables Filtering
| Step 1 | vim /etc/sysctl.d/k8s.conf |
| Step 2 | net.bridge.bridge-nf-call-ip6tables = 1 |
| Step 3 | net.bridge.bridge-nf-call-iptables = 1 |
| Step 4 | net.ipv4.ip_forward = 1 |
| Step 5 | modprobe br_netfilter |
| Step 6 | sysctl -p /etc/sysctl.d/k8s.conf |
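Steps 1–4 above can be written in one shot with a heredoc instead of `vim`. The sketch writes to `/tmp` so it is safe to run anywhere; on a real node the target is `/etc/sysctl.d/k8s.conf`, followed by `modprobe br_netfilter` and `sysctl -p` as in Steps 5–6:

```shell
# Same three settings as the steps above, written non-interactively.
cat <<'EOF' > /tmp/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
cat /tmp/k8s.conf
```

Note that the two `bridge-nf-call` keys only exist after `br_netfilter` is loaded, which is why Step 5 precedes Step 6.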
6. Initialize the Master Node
| Step 1 | kubeadm init --kubernetes-version=v1.12.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.6.6.110 |
7. Join the Worker Nodes
| Step 1 | kubeadm join 10.6.6.192:6443 --token afbkdo.6335xh1w0lv7odbh --discovery-token-ca-cert-hash sha256:b9abe5a668609f0225c8bb3ecba3a70a0be370f90905fcce79a6d783bbd0aeef |
8. Configure Whether the Master Participates in Scheduling
| Step 1 | kubectl taint nodes master.k8s node-role.kubernetes.io/master- |
| Step 2 | kubectl taint nodes master.k8s node-role.kubernetes.io/master=:NoSchedule |
9. Enable Insecure Port Access
| Step 1 | - --secure-port=6443 |
| Step 2 | - --insecure-bind-address=0.0.0.0 |
| Step 3 | - --insecure-port=8080 |
10. Configure Certificate Renewal
| Step 1 | - --kubeconfig=/etc/kubernetes/controller-manager.conf |
| Step 2 | - --experimental-cluster-signing-duration=87600h0m0s |
| Step 3 | - --feature-gates=RotateKubeletServerCertificate=true |



