| 系统 | 内核 | IP | 主机名 | 配置 |
| centos7.6 | 3.10.0-1160.11.1.el7.x86_64 | 172.16.0.2 | k8s-master | 2核4G |
| centos7.6 | 3.10.0-1160.11.1.el7.x86_64 | 172.16.0.15 | k8s-node1 | 2核4G |
| centos7.6 | 3.10.0-1160.11.1.el7.x86_64 | 172.16.0.10 | k8s-node2 | 2核4G |
All three servers must have at least 2 CPU cores and 2 GB of RAM. You can purchase Tencent Cloud or Alibaba Cloud servers, or simulate three hosts in VirtualBox. The IPs in the table above are Tencent Cloud private-network IPs.
Note: to change a hostname, use the hostnamectl command; see other tutorials for details.
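For reference, a minimal sketch of setting the hostnames with hostnamectl and mapping the three machines in /etc/hosts so they can resolve each other by name (IPs taken from the table above):

```shell
# Run the matching command on each machine (shown here for the master)
hostnamectl set-hostname k8s-master
# hostnamectl set-hostname k8s-node1   # on 172.16.0.15
# hostnamectl set-hostname k8s-node2   # on 172.16.0.10

# On every host, map the names so the nodes can reach each other by hostname
cat >> /etc/hosts <<EOF
172.16.0.2  k8s-master
172.16.0.15 k8s-node1
172.16.0.10 k8s-node2
EOF
```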
II. Preparation
The following steps must all be executed on every node (k8s-master, k8s-node1, k8s-node2).
1. Disable the firewall
If the firewall is enabled on the hosts, the ports required by the various Kubernetes components must be opened. For now, simply disable the firewall:
```shell
systemctl stop firewalld
systemctl disable firewalld
```
2. Disable SELinux
```shell
# Temporarily disable (takes effect immediately)
setenforce 0
# Permanently disable: edit /etc/selinux/config and set
vim /etc/selinux/config
SELINUX=disabled
```
3. Create the k8s.conf file
```shell
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```
4. Enable IP forwarding
```shell
echo 1 > /proc/sys/net/ipv4/ip_forward
```
5. Disable swap
```shell
# Temporarily disable
swapoff -a
# Permanently disable: edit /etc/fstab and comment out the swap line
vim /etc/fstab
```
6. Configure the yum repository
```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
7. Install Docker
```shell
yum -y install docker
```
8. Install kubeadm, kubelet, and kubectl
kubectl is installed on both the master and the node machines here; you can also install it on the master only.
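The repository above installs the latest available version. To keep all machines on the same, reproducible version, the packages can be pinned explicitly instead (a sketch; v1.22.2 matches the version shown by kubectl get nodes later in this guide):

```shell
# Pin kubelet/kubeadm/kubectl to one version on every node
yum install -y kubelet-1.22.2 kubeadm-1.22.2 kubectl-1.22.2
```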
```shell
yum install -y kubelet kubeadm kubectl
systemctl start kubelet && systemctl enable kubelet
```
III. Initialize the Master Node
```shell
kubeadm init --apiserver-advertise-address=172.16.0.2 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
```
Note: apiserver-advertise-address is the master node's IP.
Parameter explanation:
- --apiserver-advertise-address: the IP address kube-apiserver listens on, i.e. the master's own IP.
- --pod-network-cidr: the Pod network range, here 10.244.0.0/16.
- --service-cidr: the Service (SVC) network range.
- --image-repository: the Alibaba Cloud image-repository address.
On success, the command ends with output like this:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.0.2:6443 --token 0qgehd.903rry868fhbosq1nk \
    --discovery-token-ca-cert-hash sha256:0508b15f0748d7369981fe49d214274008e135a67f8d9fab1748199206a11afb
```
Be sure to save the kubeadm join command; it will be used later to join the node machines to the cluster.
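If the join command is lost, or the token expires (tokens are valid for 24 hours by default), a new one can be generated on the master:

```shell
# Prints a fresh "kubeadm join ..." command with a new token
kubeadm token create --print-join-command
```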
As the output indicates, configure kubectl on the master:
```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
IV. Install Flannel
This step needs to be run on both the master and the node machines.
```shell
cd ~
mkdir k8s
cd k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
The downloaded kube-flannel.yml configures the flannel overlay network.
Note: if "Network": "10.244.0.0/16" in the yml does not match the --pod-network-cidr passed to kubeadm init, change it so the two agree; otherwise cross-node Cluster IPs may be unreachable.
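For example, if a different --pod-network-cidr had been used, the Network value could be rewritten non-interactively with sed instead of editing the file by hand. The sketch below demonstrates the substitution on a sample line (the CIDR 10.10.0.0/16 is only illustrative); the same expression can then be applied to the real file with `sed -ri ... kube-flannel.yml`:

```shell
# Demonstrate rewriting the flannel Network value to match --pod-network-cidr
echo '  "Network": "10.244.0.0/16",' \
  | sed 's#"Network": "[0-9./]*"#"Network": "10.10.0.0/16"#'
# prints:   "Network": "10.10.0.0/16",
```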
The image referenced in kube-flannel.yml, quay.io/coreos/flannel:v0.14.0, is hard to download, so use the corresponding flannel version from Docker Hub instead:
```shell
# Available at: https://hub.docker.com/r/easzlab/flannel/tags
docker pull easzlab/flannel:v0.14.0-amd64
```
Note: choose the version that matches kube-flannel.yml; here it is v0.14.0.
Tag the Docker Hub flannel image with the quay.io image name:
```shell
docker tag docker.io/easzlab/flannel:v0.14.0-amd64 quay.io/coreos/flannel:v0.14.0
```
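Afterwards you can check that both names point at the same image (the two tags should show the same IMAGE ID):

```shell
# List the local flannel images; the easzlab and quay.io tags share one ID
docker images | grep flannel
```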
Apply flannel:
```shell
kubectl apply -f kube-flannel.yml
```
Running this command on a node machine fails with:
```
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
The fix:
Cause: the kubernetes-admin kubeconfig has not been copied over. Copy /etc/kubernetes/admin.conf from the master to this machine, then export it as an environment variable:
```shell
# 1. Copy the config file (replace <master-ip> with the master's address)
scp root@<master-ip>:/etc/kubernetes/admin.conf /etc/kubernetes/
# 2. Add the environment variable
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
# 3. Load the environment variable
source ~/.bash_profile
```
Check the pod status:
```
# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS      AGE   IP            NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-7f6cbbb7b8-649mw             1/1     Running   0             23h   10.244.0.3    k8s-master   <none>           <none>
kube-system   coredns-7f6cbbb7b8-qptgr             1/1     Running   0             23h   10.244.0.2    k8s-master   <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   0             23h   172.16.0.2    k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   0             23h   172.16.0.2    k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   0             23h   172.16.0.2    k8s-master   <none>           <none>
kube-system   kube-flannel-ds-5wdjv                1/1     Running   1 (23h ago)   23h   172.16.0.10   k8s-node2    <none>           <none>
kube-system   kube-flannel-ds-79hzn                1/1     Running   0             23h   172.16.0.2    k8s-master   <none>           <none>
kube-system   kube-flannel-ds-ffhbn                1/1     Running   1 (23h ago)   23h   172.16.0.15   k8s-node1    <none>           <none>
kube-system   kube-proxy-8p5gv                     1/1     Running   0             23h   172.16.0.10   k8s-node2    <none>           <none>
kube-system   kube-proxy-h7h7x                     1/1     Running   0             23h   172.16.0.15   k8s-node1    <none>           <none>
kube-system   kube-proxy-zk47d                     1/1     Running   0             23h   172.16.0.2    k8s-master   <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   0             23h   172.16.0.2    k8s-master   <none>           <none>
```
Note: make sure every Pod is in the Running state.
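If a Pod is stuck in a non-Running state, its events usually show why (image pull failures are the most common cause at this stage). A sketch, using one of the flannel pod names from the output above as an illustrative target:

```shell
# Inspect a problem pod's events
kubectl -n kube-system describe pod kube-flannel-ds-5wdjv
# Or tail its logs
kubectl -n kube-system logs kube-flannel-ds-5wdjv
```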
V. Join Nodes to the Cluster
1. Join the nodes
```shell
kubeadm join 172.16.0.2:6443 --token 0qgehd.903rry868fhbosq1nk \
    --discovery-token-ca-cert-hash sha256:0508b15f0748d7369981fe49d214274008e135a67f8d9fab1748199206a11afb
```
2. Check the nodes
```
# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   23h   v1.22.2
k8s-node1    Ready    <none>                 23h   v1.22.2
k8s-node2    Ready    <none>                 23h   v1.22.2
```
Make sure all nodes are in the Ready state.
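As a final smoke test, you can deploy nginx and check that the Pod is scheduled onto a worker node and reaches Running (a sketch; the deployment name nginx-test is arbitrary):

```shell
# Create a test deployment and expose it inside the cluster
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80
# The pod should land on k8s-node1 or k8s-node2 with a 10.244.x.x IP
kubectl get pods -o wide
```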