The only minor difference is in the software installation step!
Architecture Overview
- Environment preparation
- Install software
- Obtain images
- Configure and start kubelet
- Initialize the cluster
- Configure the network plugin
- Join the Node machines to the cluster
- Cluster check
Cluster Deployment
Docker version: Server Version: 20.10.8
kubeadm version: v1.22.1
kubeadm is a tool that provides `kubeadm init` and `kubeadm join` for quickly deploying a Kubernetes cluster.
Official documentation: Kubeadm | Kubernetes
Environment:
| 192.168.9.60 | master |
|---|---|
| 192.168.9.61 | node1 |
| 192.168.9.62 | node2 |
I. Environment Preparation
master/node1/node2
# Perform the following operations on all three machines
1. Set the hostname on each machine:
[root@master ~]# hostnamectl set-hostname master
[root@node1 ~]# hostnamectl set-hostname node1
[root@node2 ~]# hostnamectl set-hostname node2
2. Add local DNS name resolution:
# vim /etc/hosts
192.168.9.60 master
192.168.9.61 node1
192.168.9.62 node2
3. Disable SELinux:
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
4. Disable firewalld:
# systemctl disable firewalld
5. Disable the swap partition:
# sed -i 's/.*swap.*/#&/' /etc/fstab
6. Check that MAC addresses and product_uuid values do not conflict between machines:
# ip link
# cat /sys/class/dmi/id/product_uuid
7. Reboot the system:
# reboot
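The name-resolution entries from the hosts step above can be generated and appended in one shot; a minimal sketch (it only prints by default; the append to /etc/hosts is left commented out because it needs root):

```shell
# Emit the cluster host entries used throughout this guide.
hosts_block() {
  cat <<'EOF'
192.168.9.60 master
192.168.9.61 node1
192.168.9.62 node2
EOF
}
hosts_block                      # preview the three entries
# hosts_block >> /etc/hosts      # apply on each machine (as root)
```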
II. Install Software
master/node1/node2
# Perform the following operations on all three machines
1. Install the docker-ce dependencies:
# yum install wget container-selinux -y
2. Download the containerd package:
# wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
3. Install the containerd package with rpm:
# rpm -ivh containerd.io-1.2.6-3.3.el7.x86_64.rpm
Note: the step below is not needed on CentOS 7:
# update-alternatives --set iptables /usr/sbin/iptables-legacy
4. Install the docker-ce Yum repository and related tools:
# yum install -y yum-utils device-mapper-persistent-data lvm2 && yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
5. Install docker-ce:
# yum makecache && yum install -y docker-ce
6. Enable docker-ce at boot and start it:
# systemctl enable docker.service && systemctl start docker
7. Verify that docker-ce installed successfully:
# docker info
#===============================================================================
8. Configure the Kubernetes Yum repository:
# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
9. Install the Kubernetes components:
# yum -y makecache
# yum install -y kubelet kubeadm kubectl ipvsadm
Note: to install a specific kubeadm version instead:
# yum install kubelet-1.16.0-0.x86_64 kubeadm-1.16.0-0.x86_64 kubectl-1.16.0-0.x86_64
10. Configure kernel parameters: # a common interview topic
# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
11. Verify the parameters took effect (load br_netfilter first, since the bridge keys require it):
# modprobe br_netfilter  # load the br_netfilter kernel module
# sysctl --system
# sysctl -p /etc/sysctl.d/k8s.conf
12. Load the IPVS kernel modules. They must be reloaded after every reboot (add them to /etc/rc.local to load at boot):
# modprobe ip_vs
# modprobe ip_vs_rr
# modprobe ip_vs_wrr
# modprobe ip_vs_sh
# modprobe nf_conntrack_ipv4
13. Verify the modules loaded:
# lsmod | grep ip_vs
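The sysctl fragment from step 10 can be staged and sanity-checked before it is applied; a small sketch that writes the same three keys to a temporary file (on a real node the target is /etc/sysctl.d/k8s.conf):

```shell
# Write the k8s kernel parameters to a scratch file and sanity-check it.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
grep -c '^[a-z]' "$conf"          # three parameter lines expected
# modprobe br_netfilter && sysctl -p "$conf"   # apply (root only)
```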
III. Obtain Images
master/node1/node2
# Perform the following operations on all three machines
1. List the image versions that match the installed kubeadm:
# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/pause:3.5 # inter-container communication
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
2. Pull all the images with a quick loop:
# for image in `kubeadm config images list`; do docker pull $image; done
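The pull loop above only works when k8s.gcr.io is reachable. A common workaround when it is not (an assumption on my part, not one of the original steps) is to pull the same images from a mirror registry and retag them; the name rewrite can be sketched like this. `registry.aliyuncs.com/google_containers` is an assumed mirror, so verify it carries your versions before relying on it:

```shell
# Map a k8s.gcr.io image name to an assumed Aliyun mirror namespace.
# The mirror flattens coredns/coredns into a single path component.
mirror_name() {
  echo "$1" \
    | sed -e 's#k8s.gcr.io/coredns/#k8s.gcr.io/#' \
          -e 's#^k8s.gcr.io#registry.aliyuncs.com/google_containers#'
}
mirror_name k8s.gcr.io/kube-apiserver:v1.22.1
# → registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.1
# Real usage:
# for image in $(kubeadm config images list); do
#   m=$(mirror_name "$image") && docker pull "$m" && docker tag "$m" "$image"
# done
```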
IV. Configure and Start kubelet
master/node1/node2
# Perform the following operations on all three machines
1. Set a variable holding docker-ce's Cgroup Driver:
# DOCKER_CGROUPS=$(docker info | grep 'Cgroup Driver' | cut -d' ' -f4)
# echo $DOCKER_CGROUPS
cgroupfs
2. Configure kubelet:
# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs --pod-infra-container-image=k8s.gcr.io/pause:3.5"
# Note: pause:3.5 must match the image version pulled above
3. Start kubelet:
# systemctl daemon-reload
# systemctl enable kubelet && systemctl start kubelet
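The `--cgroup-driver` flag must match the value detected in step 1, otherwise kubelet crash-loops on start. A sketch that builds the sysconfig line from the detected driver (the driver value is hardcoded here so the sketch runs without Docker present):

```shell
# Build the kubelet sysconfig line from the detected cgroup driver.
# On a real node: DOCKER_CGROUPS=$(docker info | grep 'Cgroup Driver' | cut -d' ' -f4)
DOCKER_CGROUPS=cgroupfs
line="KUBELET_EXTRA_ARGS=\"--cgroup-driver=${DOCKER_CGROUPS} --pod-infra-container-image=k8s.gcr.io/pause:3.5\""
echo "$line"
# echo "$line" > /etc/sysconfig/kubelet   # apply (root only)
```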
V. Initialize the Cluster
master
Note: perform these steps on the master only.
1. Run the initialization:
[root@master ~]# kubeadm init --kubernetes-version=v1.22.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.9.60 --ignore-preflight-errors=Swap
................
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
#--------------------------------------------------------------------------------
kubeadm join 192.168.9.60:6443 --token 79sw3m.fedp436xmfqm3its \
    --discovery-token-ca-cert-hash sha256:15176715bf269c5d95df4eef62fcaf67b566a980230e1d44f8566e07c57c9f1a
#--------------------------------------------------------------------------------
# Record this command; it is what joins the Node machines to the master
Explanation
# --kubernetes-version: the version installed everywhere (check with `docker image ls`)
# --apiserver-advertise-address: the advertise address, i.e. the master's private IP
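The token recorded above expires after 24 hours by default; on the master, `kubeadm token create --print-join-command` prints a fresh join command. The sha256 value is not secret state, just a digest of the CA certificate's DER-encoded public key. The sketch below runs that documented pipeline against a throwaway self-signed certificate, generated on the spot, since the real /etc/kubernetes/pki/ca.crt lives only on the master:

```shell
# Stand-in for /etc/kubernetes/pki/ca.crt: a throwaway self-signed cert.
key=$(mktemp); crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" \
  -days 1 -subj "/CN=demo-ca" 2>/dev/null
# The pipeline kubeadm documents for --discovery-token-ca-cert-hash:
hash=$(openssl x509 -pubkey -in "$crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```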
2. If initialization fails, clean up the environment as follows, then run the init again:
# Clean up first
[root@master ~]# kubeadm reset -f
[root@master ~]# ipvsadm --clear
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart kubelet
# Then run kubeadm init again
3. Without the following steps, joining nodes to the master is certain to fail:
[root@master ~]# rm -rf $HOME/.kube
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# echo $HOME
/root
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
4. Check the current cluster members:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane,master 76m v1.22.1
VI. Configure the Flannel Network Plugin
master
Note: perform these steps on the master only.
1. Create a dedicated directory:
[root@master ~]# cd ~ && mkdir flannel && cd flannel
2. Download the YAML manifest:
[root@master flannel]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
3. Pull the images referenced in the manifest:
[root@master flannel]# for image in $(awk '/image:/ {print $NF}' kube-flannel.yml | sort -u); do docker pull $image; done
4. Edit the YAML manifest:
[root@master flannel]# vim kube-flannel.yml
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.14.0
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=ens33 # use your machine's own NIC name; check with `ip a`
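The `--iface` value differs per machine; rather than hardcoding ens33, the default-route interface can be detected. A sketch of the parsing, with a sample `ip route` line hardcoded so it runs anywhere:

```shell
# Pick the interface that carries the default route from `ip route` output.
default_iface() {
  awk '/^default/ {for (i = 1; i < NF; i++) if ($i == "dev") print $(i+1); exit}'
}
echo "default via 192.168.9.1 dev ens33 proto static" | default_iface
# → ens33
# Real usage: ip route | default_iface
```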
5. Apply the manifest:
[root@master flannel]# kubectl apply -f ~/flannel/kube-flannel.yml
6. Check the Pods in the kube-system namespace:
[root@master flannel]# kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcd69978-nkxhc 1/1 Running 0 8m6s
coredns-78fcd69978-s5f2t 1/1 Running 0 8m6s
etcd-master 1/1 Running 0 57m
kube-apiserver-master 1/1 Running 0 57m
kube-controller-manager-master 1/1 Running 0 57m
kube-flannel-ds-4hx6r 1/1 Running 0 13m
kube-flannel-ds-5hjkr 1/1 Running 0 13m
kube-flannel-ds-b8pfn 1/1 Running 0 13m
kube-proxy-q8mck 1/1 Running 0 53m
kube-proxy-tbcpl 1/1 Running 0 53m
kube-proxy-wjqpx 1/1 Running 0 57m
kube-scheduler-master 1/1 Running 0 57m
# kubectl get service
# kubectl get svc --namespace kube-system
#====================================================================================
#Troubleshooting: if a Pod stays stuck in "ContainerCreating", the Node may be short on memory; log in to the node and drop its caches
[root@node1 ~]# free -mh
total used free shared buff/cache available
Mem: 4.0G 392M 637M 9.0M 3.0G 3.2G
Swap: 0B 0B 0B
[root@node1 ~]# echo 3 > /proc/sys/vm/drop_caches
[root@node1 ~]# free -mh
total used free shared buff/cache available
Mem: 4.0G 373M 3.3G 9.0M 299M 3.3G
Swap: 0B 0B 0B
⛔ Note: if a Pod keeps showing "ContainerCreating", don't panic; it may still be pulling images. Give it a few minutes.
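Beyond checking disk and memory, the usual way to see why a Pod is stuck is `kubectl describe pod <name> -n kube-system` (the Events section at the bottom). A quick filter for stuck Pods, demonstrated here against a hardcoded sample of `kubectl get pods` output so it runs without a cluster:

```shell
# List pods whose STATUS column is not Running/Completed.
stuck_pods() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" {print $1}'
}
stuck_pods <<'EOF'
NAME                      READY   STATUS              RESTARTS   AGE
coredns-78fcd69978-nkxhc  1/1     Running             0          8m
kube-flannel-ds-4hx6r     0/1     ContainerCreating   0          1m
EOF
# → kube-flannel-ds-4hx6r
# Real usage: kubectl get pods -n kube-system | stuck_pods
```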
VII. Join All Node Machines to the Cluster
node1/node2
Note: these steps run on the Node machines.
1. Join the master with the command below (the last command printed by a successful kubeadm init):
[root@node1 ~]# kubeadm join 192.168.9.60:6443 --token 79sw3m.fedp436xmfqm3its --discovery-token-ca-cert-hash sha256:15176715bf269c5d95df4eef62fcaf67b566a980230e1d44f8566e07c57c9f1a
[root@node2 ~]# kubeadm join 192.168.9.60:6443 --token 79sw3m.fedp436xmfqm3its --discovery-token-ca-cert-hash sha256:15176715bf269c5d95df4eef62fcaf67b566a980230e1d44f8566e07c57c9f1a
VIII. Cluster Check
master
[root@master flannel]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   5h16m   v1.22.1
node1    Ready    <none>                 80s     v1.22.1
node2    Ready    <none>                 65s     v1.22.1



