A highly available cluster on three servers: two master hosts and one worker.
| 主机IP | 主机名 | 角色 |
|---|---|---|
| 1.117.61.155 | k8s-node01 | controlplane,etcd |
| 81.68.101.212 | k8s-node02 | controlplane,etcd |
| 81.68.229.215 | k8s-node03 | worker |
(The Tencent Cloud servers were bought separately, which is why the IPs are not consecutive.)
1.2 Change the hostnames (on all nodes)
node01:
hostnamectl set-hostname k8s-node01
node02:
hostnamectl set-hostname k8s-node02
node03:
hostnamectl set-hostname k8s-node03
# If the command fails, edit /etc/hostname directly; the change takes effect after a reboot
hostname   # check whether the change succeeded
1.3 Install some common tools
yum -y install lrzsz vim gcc glibc openssl openssl-devel net-tools wget curl
1.4 Turn off the firewall (on all nodes)
systemctl stop firewalld
systemctl disable firewalld
1.5 Disable SELinux (on all nodes)
Since every node needs the same commands, you can use your terminal's "send to all sessions" feature.
setenforce 0   # temporary
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config   # permanent
1.6 Disable swap (on all nodes)
swapoff -a   # temporary; swap is disabled mainly for performance reasons
sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent: comment out the swap entries
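In the sed replacement, `&` stands for the entire matched text, so any line containing "swap" gets a `#` prepended. A quick demo on a sample fstab line (the device name is made up):

```shell
# `&` expands to the full match, here the whole line, so this comments
# out any line that mentions "swap" (hypothetical device name below)
echo '/dev/vda2 none swap sw 0 0' | sed -r 's/.*swap.*/#&/'
# → #/dev/vda2 none swap sw 0 0
```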
free   # check whether swap is now off
1.7 Synchronize the time (on all nodes)
yum install ntpdate -y
ntpdate time.windows.com
1.8 Pass bridged IPv4 traffic to the iptables chains (on all nodes)
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Load the configuration
sysctl --system
1.9 Map hostnames to IP addresses (on all nodes)
cat >> /etc/hosts << EOF
1.117.61.155 k8s-node01
81.68.101.212 k8s-node02
81.68.229.215 k8s-node03
EOF
2 Install docker (on all nodes)
# 1 Install dependency tools
yum install -y yum-utils
# 2 Configure the docker repository
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# The upstream repo is slow from China; the Aliyun mirror below is faster
sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# 3 Install docker
# sudo yum install docker-ce docker-ce-cli containerd.io   # latest version
# Pin a specific docker version instead, e.g. 19.03.13 -- don't use the
# latest here: in my testing it is not compatible with rke.
yum install -y docker-ce-19.03.13 docker-ce-cli-19.03.13
# 4 Configure a registry mirror and start docker
#   (the mirror below is Tencent Cloud's internal mirror; swap in your own if needed)
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://mirror.ccs.tencentyun.com"]
}
EOF
systemctl enable docker && systemctl restart docker
3 Deploy the cluster with rke
# 1 Add a user: rke cannot run as root, so create a regular user rke (on all three nodes)
useradd rke -G docker
echo "123456" | passwd --stdin rke
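A hard-coded `123456` is a weak password for internet-facing servers. A sketch of generating a random one instead (pipe it into `passwd --stdin rke` the same way as above):

```shell
# Generate a 16-character alphanumeric password from /dev/urandom
# (sketch; feed the result to `passwd --stdin rke` instead of 123456)
PASS=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16)
echo "${#PASS}"   # → 16
```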
# 2 Download rke_linux-amd64, make it executable, and move it into the PATH (on node1)
# Download page: https://github.com/rancher/rke/releases/tag/v1.0.11
chmod +x rke_linux-amd64
mv rke_linux-amd64 /usr/local/bin/rke
# 3 Set up passwordless SSH between the nodes (on node1)
su - rke
ssh-keygen   # press Enter three times
ssh-copy-id rke@1.117.61.155   # answer yes, then enter the password
ssh-copy-id rke@81.68.101.212
ssh-copy-id rke@81.68.229.215
# 4 Generate the cluster configuration yml file
cat > cluster.yml << EOF
nodes:
  - address: 1.117.61.155
    user: rke
    role:
      - controlplane
      - etcd
    hostname_override: k8s-node1
  - address: 81.68.101.212
    user: rke
    role:
      - controlplane
      - etcd
    hostname_override: k8s-node2
  - address: 81.68.229.215
    user: rke
    role:
      - worker
    hostname_override: k8s-node3
EOF
# 5 Bring the cluster up
rke up
This step can fail with a few different errors:
- FATA[0000] Failed to parse cluster file: error unmarshalling: error converting YAML to JSON: yaml: line 6: did not find expected key
This happens because in YAML every `:` and `-` must be followed by a space, and indentation has to line up. Go to the line number given in the message and fix it (in vi, `:set nu` shows line numbers).
- FATA[0806] [[network] Host [81.68.101.212] is not able to connect to the following ports: [81.68.101.212:2380, 81.68.101.212:2379]. Please check network policies and firewall rules]
This is the error I hit most often. As the message says, it is a network problem: those two ports (the etcd ports) are unreachable. Opening them in the server's firewall rules fixed it, but then other ports failed the same way, so in the end I simply allowed all TCP ports between the nodes.
- FATA[1952] Failed to get job complete status for job rke-network-plugin-deploy-job in namespace kube-system
Running `rke up` again fixes this one; I never worked out the root cause.
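For the YAML parse failure, tab characters are a common hidden cause: YAML only allows spaces for indentation. One way to hunt them down, demoed on a deliberately broken sample file (hypothetical file name):

```shell
# Write a sample file with a tab-indented line, then grep for literal
# tab characters with their line numbers
printf 'nodes:\n\t- address: 1.117.61.155\n' > /tmp/bad-cluster.yml
grep -n "$(printf '\t')" /tmp/bad-cluster.yml
# → 2:	- address: 1.117.61.155
```

Run the same grep against your real cluster.yml; no output means no tabs.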
This is the output of a successful deployment:
rke up
...
...
INFO[0049] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0049] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0049] [addons] Executing deploy job rke-ingress-controller
INFO[0059] [ingress] ingress controller nginx deployed successfully
INFO[0059] [addons] Setting up user addons
INFO[0059] [addons] no user addons defined
INFO[0059] Finished building Kubernetes cluster successfully
4 Install kubectl
# 1 After a successful deployment you can see the cluster's config files
[rke@node01 ~]$ ls
cluster.rkestate cluster.yml kube_config_cluster.yml
# 2 Install kubectl
su root
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl
# 3 Copy kube_config_cluster.yml to /root/.kube/config
mkdir -p /root/.kube/
cp /home/rke/kube_config_cluster.yml /root/.kube/config
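An alternative to copying the file: kubectl also honors the `KUBECONFIG` environment variable (the path below assumes the rke user's home directory as above):

```shell
# Point kubectl at the rke-generated kubeconfig without copying it
export KUBECONFIG=/home/rke/kube_config_cluster.yml
```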
# 4 Check the cluster state
[root@node01 rke]# kubectl get nodes -o wide
NAME          STATUS   ROLES               AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master1   Ready    controlplane,etcd   10h   v1.17.9   10.0.4.6      <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.13
k8s-master2   Ready    controlplane,etcd   10h   v1.17.9   10.0.4.15     <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.13
k8s-node1     Ready    worker              10h   v1.17.9   10.0.4.16     <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.13
# Check the pods
[root@node01 rke]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx default-http-backend-67cf578fc4-pvktd 1/1 Running 0 5h37m
ingress-nginx nginx-ingress-controller-s7fcj 1/1 Running 0 5h37m
kube-system canal-4pc9k 2/2 Running 0 10h
kube-system canal-nrpvx 2/2 Running 0 10h
kube-system canal-vmqq9 2/2 Running 0 10h
kube-system coredns-7c5566588d-j6xpw 1/1 Running 0 5h37m
kube-system coredns-autoscaler-65bfc8d47d-2sg8d 1/1 Running 0 5h37m
kube-system metrics-server-6b55c64f86-g56ww 1/1 Running 0 5h37m
kube-system rke-coredns-addon-deploy-job-fss2d 0/1 Completed 0 5h37m
kube-system rke-ingress-controller-deploy-job-vnjcl 0/1 Completed 0 5h37m
kube-system rke-metrics-addon-deploy-job-8kj9x 0/1 Completed 0 5h37m
kube-system rke-network-plugin-deploy-job-bjnvh 0/1 Completed 0 10h
5 Test it: start an nginx service
# 1 The YML file
cat > nginx-dep.yml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
EOF
# 2 Create it
kubectl apply -f nginx-dep.yml
# 3 Check the status (the pods are named nginx-deployment-xxx, so select them by label)
kubectl get pods -l app=nginx -o wide
# 4 Start a svc to expose the port
cat > nginx-svc.yml << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080
  type: NodePort
EOF
## Start the svc
kubectl apply -f nginx-svc.yml
Visit 81.68.229.215:30080 (as in section 3, port 30080 must also be open in the server's firewall rules).



