- Building a non-high-availability Kubernetes cluster
- For a one-click install shell script, see the link below
- Cluster information
- 1. Node information
- 2. Component versions
- Preparation before installing
- 1. Set up hosts resolution
- 2. Adjust system configuration
- 3. Install Docker
- Deploy Kubernetes
- 1. Install kubeadm, kubelet and kubectl
- 2. Generate the init configuration file
- 3. Pre-pull the images
- 4. Initialize the master node
- 5. Add the slave nodes to the cluster
- 6. Install the flannel plugin
- 7. Set whether the master node is schedulable (optional)
- 8. Verify the cluster
- 9. Deploy the dashboard
- 10. Clean up
https://blog.csdn.net/weixin_45724880/article/details/121247461
Cluster information
1. Node information
| Hostname | Node IP | Role | Components |
|---|---|---|---|
| k8s-master | 172.16.3.57 | master | etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel |
| k8s-slave1 | 172.16.3.58 | slave | kubectl, kubelet, kube-proxy, flannel |
| k8s-slave2 | 172.21.32.9 | slave | kubectl, kubelet, kube-proxy, flannel |
2. Component versions
| Component | Version |
|---|---|
| CentOS | 7.5 or later |
| Kernel | Linux 3.10.0-1062.9.1.el7.x86_64 |
| etcd | 3.3.15 |
| coredns, kubeadm, kubectl, kubelet, kube-proxy | v1.16.2 |
| flannel | v0.11.0 |
Preparation before installing
1. Set up hosts resolution
Modify the hostname
The hostname may contain only lowercase letters, digits, "." and "-", and must start and end with a lowercase letter or digit.
# On the master node
$ hostnamectl set-hostname k8s-master    # set the master node's hostname
# On the slave-1 node
$ hostnamectl set-hostname k8s-slave1    # set the slave1 node's hostname
# On the slave-2 node
$ hostnamectl set-hostname k8s-slave2    # set the slave2 node's hostname
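The hostname rule above can be checked before running hostnamectl. A minimal bash sketch; the `valid_hostname` helper is hypothetical, not part of any Kubernetes tooling:

```shell
# Hypothetical helper: succeeds when a name satisfies the rule stated above
# (lowercase letters, digits, "." and "-", starting and ending with a
# lowercase letter or digit)
valid_hostname() {
  [[ "$1" =~ ^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$ ]]
}

valid_hostname "k8s-master" && echo "k8s-master: ok"
valid_hostname "K8S-Master" || echo "K8S-Master: rejected"
valid_hostname "-bad-start" || echo "-bad-start: rejected"
```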
Add hosts entries
$ cat >>/etc/hosts <<EOF
172.21.3.57 k8s-slave1
172.21.3.60 k8s-slave2
EOF
2. Adjust system configuration
Nodes: all master and slave nodes (k8s-master, k8s-slave) must run these steps.
The rest of this chapter uses k8s-master as the example; the operations are identical on the other nodes (substitute each machine's real IP and hostname).
- Open security-group ports
If there are no security-group restrictions between the nodes (machines on the intranet can reach each other freely), skip this step; otherwise make sure at least the following ports are reachable:
k8s-master node: TCP 6443, 2379, 2380, 60080, 60081; all UDP ports open
k8s-slave nodes: all UDP ports open
- Set iptables
iptables -P FORWARD ACCEPT
- Disable swap
swapoff -a
# Prevent the swap partition from being mounted again at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
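The sed expression can be tried safely on a copy first. A sketch against a made-up sample file (`fstab.sample` and the device names are illustrative; the real target is /etc/fstab):

```shell
# Sample fstab (made-up device names) so the sed can be tested without root
cat > fstab.sample <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Same substitution as above: comment out any line mentioning " swap "
sed -i '/ swap / s/^\(.*\)$/#\1/g' fstab.sample
cat fstab.sample
```

Only the swap line gets a leading `#`; the root filesystem entry is untouched.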
- 关闭selinux和防火墙
sed -ri 's#(SELINUX=).*#1disabled#' /etc/selinux/config setenforce 0 systemctl disable firewalld && systemctl stop firewalld
- Tune kernel parameters
cat <<EOF >/etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
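A quick way to confirm the file contains every required key before running `sysctl -p`. This sketch writes to `./k8s.conf` so it runs without root; on a real node, point it at /etc/sysctl.d/k8s.conf:

```shell
# Same content as the heredoc above, written to a local file for the check
cat > k8s.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF

# Fail loudly if any of the four parameters is missing
for key in net.bridge.bridge-nf-call-ip6tables net.bridge.bridge-nf-call-iptables \
           net.ipv4.ip_forward vm.max_map_count; do
  grep -q "^${key}" k8s.conf || { echo "missing: $key"; exit 1; }
done
echo "all kernel parameters present"
```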
- Configure the yum repositories
$ curl -o /etc/yum.repos.d/CentOS-base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
$ curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
$ yum clean all && yum makecache
3. Install Docker
Nodes: all nodes
## List all available versions
$ yum list docker-ce --showduplicates | sort -r
## Install an older version
yum install docker-ce-cli-18.09.9-3.el7 docker-ce-18.09.9-3.el7
## Install the latest version in the repo
$ yum install docker-ce
## Configure a Docker registry mirror
$ mkdir -p /etc/docker
vi /etc/docker/daemon.json
{
"insecure-registries": [
"172.21.32.15:5000"
],
"registry-mirrors" : [
"https://8xpk5wnt.mirror.aliyuncs.com"
]
}
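As an alternative to editing with vi, the file can be written with a heredoc and syntax-checked before restarting Docker (it fails silently on malformed JSON). This sketch writes to `./daemon.json` so it runs without root; on a real node the path is /etc/docker/daemon.json, and the registry address and mirror URL are the ones from this guide:

```shell
# Write the daemon config in one step (values taken from this guide)
cat > daemon.json <<'EOF'
{
  "insecure-registries": ["172.21.32.15:5000"],
  "registry-mirrors": ["https://8xpk5wnt.mirror.aliyuncs.com"]
}
EOF

# Validate before restarting Docker; dockerd will not start on broken JSON
python3 -m json.tool daemon.json >/dev/null && echo "daemon.json: valid JSON"
```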
## Start docker
$ systemctl enable docker && systemctl start docker
Deploy Kubernetes
1. Install kubeadm, kubelet and kubectl
Nodes: all master and slave nodes (k8s-master, k8s-slave)
$ yum install -y kubelet-1.16.0 kubeadm-1.16.0 kubectl-1.16.0
## Check the kubeadm version
$ kubeadm version
## Enable kubelet at boot
$ systemctl enable kubelet
2. Generate the init configuration file
Nodes: only the master node (k8s-master)
# Method 1: flags + init (recommended)
# The values below must match your environment: use the master's intranet IP
# for --apiserver-advertise-address and the version you actually installed
$ kubeadm init --apiserver-advertise-address=192.168.30.128 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
# Method 2: config file (flannel can fail to come up if misconfigured)
$ kubeadm config print init-defaults > kubeadm.yaml
$ cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.21.32.15 # apiserver address; single-master setup, so use the master node's intranet IP
bindPort: 6443
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: k8s-master
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
---
apiServer:
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # change to the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: v1.16.2
networking:
dnsDomain: cluster.local
podSubnet: 10.244.0.0/16 # Pod network CIDR; the flannel plugin uses this range
serviceSubnet: 10.96.0.0/12
scheduler: {}
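The podSubnet and serviceSubnet above must be disjoint CIDRs, or pod and service routing will collide. A small shell sanity check for the two ranges in this config; the `ip2int` and `cidrs_overlap` helpers are illustrative, not part of kubeadm:

```shell
# Convert dotted-quad IPv4 to a 32-bit integer
ip2int() {
  local a b c d
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a<<24) | (b<<16) | (c<<8) | d ))
}

# Exit 0 if two CIDRs overlap; compares both networks at the shorter prefix
# (assumes prefixes of at least /1)
cidrs_overlap() {
  local n1=${1%/*} p1=${1#*/} n2=${2%/*} p2=${2#*/} m mask
  m=$(( p1 < p2 ? p1 : p2 ))
  mask=$(( (0xFFFFFFFF << (32 - m)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$n1") & mask )) -eq $(( $(ip2int "$n2") & mask )) ]
}

if cidrs_overlap 10.244.0.0/16 10.96.0.0/12; then
  echo "podSubnet and serviceSubnet OVERLAP - fix kubeadm.yaml"
else
  echo "podSubnet and serviceSubnet are disjoint"
fi
```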
3. Pre-pull the images
The manifest above is fairly dense; for a complete description of each field of these resource objects, see the corresponding godoc at https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2.
Nodes: only the master node (k8s-master)
# List the images that will be used; if everything is fine you get the following list
$ kubeadm config images list --config kubeadm.yaml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.16.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.16.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.16.0
registry.aliyuncs.com/google_containers/pause:3.1
registry.aliyuncs.com/google_containers/etcd:3.3.15-0
registry.aliyuncs.com/google_containers/coredns:1.6.2
# Pull the images to the local machine ahead of time
$ kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.16.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.16.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.16.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.3.15-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.6.2
Important update: if the repository above becomes unavailable, use the following approach instead:
- Restore imageRepository in kubeadm.yaml
...
imageRepository: k8s.gcr.io
...
## Check the image list in use
kubeadm config images list --config kubeadm.yaml
k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-controller-manager:v1.16.0
k8s.gcr.io/kube-scheduler:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
- Pull the mirrored images from Docker Hub instead; note that the names in the list above need the processor architecture appended, which for typical VMs is amd64
$ docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.16.0
$ docker pull mirrorgooglecontainers/etcd-amd64:3.3.15-0
...
$ docker tag mirrorgooglecontainers/etcd-amd64:3.3.15-0 k8s.gcr.io/etcd:3.3.15-0
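The per-image pull/retag pairs can be generated in one loop instead of typed by hand. This sketch only prints the commands (pipe the output to sh to execute them where docker is available); note that pause and coredns are left out of the list because they live under different Docker Hub namespaces than mirrorgooglecontainers, an assumption you should verify for your version:

```shell
# Images from the kubeadm list above (pause and coredns omitted, see lead-in)
images="kube-apiserver:v1.16.0 kube-controller-manager:v1.16.0 kube-scheduler:v1.16.0 kube-proxy:v1.16.0 etcd:3.3.15-0"

for img in $images; do
  name=${img%%:*}   # part before the colon, e.g. etcd
  tag=${img#*:}     # part after the colon,  e.g. 3.3.15-0
  echo "docker pull mirrorgooglecontainers/${name}-amd64:${tag}"
  echo "docker tag mirrorgooglecontainers/${name}-amd64:${tag} k8s.gcr.io/${name}:${tag}"
done
```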
4. Initialize the master node
Nodes: only the master node (k8s-master)
kubeadm init --config kubeadm.yaml
If initialization succeeds, it ends with a message like this:
...
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.21.32.15:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1c4305f032f4bf534f628c32f5039084f4b103c922ff71b12a5f0f98d1ca9a4f
Next, follow the instructions above to configure authentication for the kubectl client:
# Important
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
⚠️ Note: at this point kubectl get nodes should show the nodes as NotReady, because the network plugin has not been deployed yet.
If anything goes wrong during initialization, fix it according to the error message, run kubeadm reset, and then run the init again.
5. Add the slave nodes to the cluster
Nodes: all slave nodes (k8s-slave)
On each slave node, run the following command. It is printed in the success message of kubeadm init; replace it with the actual command from your own init output.
kubeadm join 172.21.32.15:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1c4305f032f4bf534f628c32f5039084f4b103c922ff71b12a5f0f98d1ca9a4f
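For reference, the --discovery-token-ca-cert-hash value is just the SHA-256 digest of the cluster CA's public key in DER encoding. The sketch below reproduces the computation against a throwaway self-signed certificate so it runs anywhere openssl is installed; on the real master the input would be /etc/kubernetes/pki/ca.crt:

```shell
tmp=$(mktemp -d)

# Throwaway self-signed cert standing in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null

# Extract the public key, DER-encode it, and hash it; the hex digest is the
# value kubeadm prints after "sha256:"
hash=$(openssl x509 -pubkey -noout -in "$tmp/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"

rm -rf "$tmp"
```

This is also how you can recompute the hash on the master if you lose the original init output.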
6. Install the flannel plugin
Nodes: only the master node (k8s-master)
- Download the flannel yaml file
# Method 1: one-step apply (recommended)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/documentation/kube-flannel.yml
## The kubeadm init above already used --pod-network-cidr=10.244.0.0/16, so this yaml file needs no changes.
# Method 2: download first, then edit the yml
wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/documentation/kube-flannel.yml
- Edit the config to specify the NIC name; around lines 170 and 190 of the file, add one line:
$ vi kube-flannel.yml
...
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=eth0 # if the machine has multiple NICs, specify the intranet NIC name here; by default flannel uses the first NIC it finds
resources:
requests:
cpu: "100m"
...
- Install the flannel network plugin
# Pull the image first; this is slow from inside China
$ docker pull quay.io/coreos/flannel:v0.11.0-amd64
# Install flannel
$ kubectl create -f kube-flannel.yml
7. Set whether the master node is schedulable (optional)
Nodes: k8s-master
By default, the master node cannot schedule business pods after deployment. To let the master take part in pod scheduling as well, run:
$ kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule-
8. Verify the cluster
Nodes: on the master node (k8s-master)
$ kubectl get nodes    # check whether all cluster nodes are Ready
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   22h   v1.13.3
k8s-slave    Ready    <none>   22h   v1.13.3
Create a test nginx service
$ kubectl run test-nginx --image=nginx:alpine
Check that the pod was created successfully, then curl the pod IP to verify it is reachable
$ kubectl get po -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
test-nginx-5bd8859b98-5nnnw   1/1     Running   0          9s    10.244.1.2   k8s-slave1   <none>           <none>
$ curl 10.244.1.2
...
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.