1. Prepare the environment
   Host inventory · Network allocation · Production sizing
   1.1 Base OS configuration for k8s
   1.2 Configure /etc/hosts resolution on all nodes
   1.3 Configure passwordless SSH from k8s-master01 to every node
2. Install the basic k8s components
   2.1 Install Containerd as the runtime on all k8s nodes
   2.2 Download and install k8s and etcd (master01 only)
3. Generate the certificates
   3.1 Generate the etcd certificates
   3.2 Generate the k8s certificates
4. Configure the k8s system components
   4.1 etcd configuration
   4.2 Create the services (all master nodes)
5. High-availability setup
6. Configure the k8s components (distinct from section 4)
   6.1 Create the apiserver service (all master nodes)
   6.2 Configure the kube-controller-manager service
   6.3 Configure the kube-scheduler service
7. TLS Bootstrapping
8. Node configuration
   8.1 Copy the certificates from master01 to the nodes
   8.2 kubelet configuration
   8.3 kube-proxy configuration
9. Install Calico
10. Install CoreDNS
11. Install Metrics Server
12. Cluster verification
13. Install the dashboard
14. Key production settings (apply as needed)
1. Prepare the environment

Host inventory

[Figure: host inventory]

Network allocation

physical hosts: 192.168.10.0/24
service:        10.96.0.0/12
pod:            172.16.0.0/12

Production sizing
Master nodes: at least 3, and always an odd number.
  0-100 worker nodes:   8C 16G+
  100-250 worker nodes: 8C 32G+
  250-500 worker nodes: 16C 32G+
etcd nodes: at least 3 for high availability, preferably on SSD disks.
  0-50 nodes:     2C 8G+,  50G SSD
  50-250 nodes:   4C 16G+, 150G SSD
  250-1000 nodes: 8C 32G+, 250G SSD
If you can, run the etcd cluster on machines separate from the k8s cluster.

1.1 Base OS configuration for k8s
See the separate write-up on base OS configuration for k8s. It may not include items 1.2 and 1.3; both are covered below.

1.2 Configure /etc/hosts resolution on all nodes
[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.2   k8s-master01
192.168.10.3   k8s-master02
192.168.10.4   k8s-master03
192.168.10.5   k8s-node01
192.168.10.6   k8s-node02
192.168.10.7   lb01
192.168.10.8   lb02
192.168.10.100 lb-vip

1.3 Configure passwordless SSH from k8s-master01 to every node
[root@k8s-master01 ~]# ssh-keygen -t rsa
[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
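A minimal check that key-based login really works to every node (a sketch, using the same five hostnames as above; each iteration should print the remote hostname without prompting for a password):

for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  # BatchMode makes ssh fail instead of prompting if key auth is broken
  ssh -o BatchMode=yes $i hostname || echo "passwordless login to $i failed"
done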
2. Install the basic k8s components

2.1 Install Containerd as the runtime on all k8s nodes

Step one of moving off Docker: install docker-20.10 anyway (Docker itself gets installed but is never used as the runtime; the install conveniently pulls in containerd):
yum install docker-ce-20.10.* docker-ce-cli-20.10.* containerd -y
Configure the kernel modules required by Containerd:
cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Load the modules:
modprobe -- overlay
modprobe -- br_netfilter

Configure the kernel parameters required by Containerd:
cat <<EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system

Create the Containerd configuration file:
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

Edit the Containerd configuration file:
vim /etc/containerd/config.toml
# Find containerd.runtimes.runc.options and add SystemdCgroup = true beneath it
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
# Under the cri plugin section, change sandbox_image to an address matching your version
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"

Start Containerd and enable it at boot:
systemctl daemon-reload
systemctl enable --now containerd

Point the crictl client at the containerd runtime:
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
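A quick sanity check that crictl can actually reach containerd (a sketch; it assumes the crictl binary itself is already installed, which the base-environment step may or may not have covered):

# prints the runtime name (containerd) and its version
crictl version
# the RuntimeReady / NetworkReady conditions appear in the status output
crictl info | grep -E 'RuntimeReady|NetworkReady'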
2.2 Download and install k8s and etcd (master01 only)

Download the k8s server tarball (pick whichever version you need):

wget https://dl.k8s.io/v1.23.3/kubernetes-server-linux-amd64.tar.gz

Download the etcd tarball:
wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz

Unpack the k8s binaries:
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

Unpack the etcd binaries:
tar -xf etcd-v3.5.1-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.1-linux-amd64/etcd{,ctl}

# Check the contents of /usr/local/bin
ls /usr/local/bin/
etcd  etcdctl  kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler

Check the versions:
[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.23.3
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.1
API version: 3.5

Ship the binaries to the other k8s nodes:
Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02'
for NODE in $Master; do
  echo $NODE
  scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/
  scp /usr/local/bin/etcd* $NODE:/usr/local/bin/
done
for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

Clone the certificate helper files (you can write your own; I'm using these ready-made ones):
git clone https://github.com/dotbalo/k8s-ha-install.git
cd k8s-ha-install/ && git checkout manual-installation-v1.23.x

Create this directory on all k8s nodes:
mkdir -p /opt/cni/bin

3. Generate the certificates

Download the certificate tooling on master01:
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson3.1.生成etcd证书特别说明除外,以下操作在所有master节点操作所有master节点创建证书存放目录
mkdir /etc/etcd/ssl -p

Generate the etcd certificates on master01:
cd /root/k8s-ha-install/pki/
# Generate the etcd CA and the etcd server certificate/key
# (if you might scale etcd later, list a few spare IPs in -hostname now)
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.10.2,192.168.10.3,192.168.10.4 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
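To double-check which hosts and IPs actually made it into the certificate, you can inspect its subject alternative names with plain openssl (a quick sketch):

openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'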
Copy the certificates to the other nodes:

Master='k8s-master02 k8s-master03'
for NODE in $Master; do
  ssh $NODE "mkdir -p /etc/etcd/ssl"
  for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
    scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
  done
done

3.2 Generate the k8s certificates

Unless stated otherwise, run the following on all master nodes. Create the certificate directory on all k8s nodes:
mkdir -p /etc/kubernetes/pki

Generate the k8s certificates on master01:
# Generate the root CA
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
# 10.96.0.1 is the first address of the service CIDR (it has to be computed);
# 192.168.10.100 is the high-availability VIP
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -hostname=10.96.0.1,192.168.10.100,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.10.2,192.168.10.3,192.168.10.4 \
  -profile=kubernetes \
  apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
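The first service address is simply the network address of the service CIDR plus one. If you change the CIDR and don't want to work it out by hand, a one-liner does it (a sketch; assumes python3 is available on the host):

# prints 10.96.0.1 for 10.96.0.0/12 -- substitute your own service CIDR
python3 -c "import ipaddress; print(next(ipaddress.ip_network('10.96.0.0/12').hosts()))"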
Generate the apiserver aggregation certificates:

cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
# This step prints a warning, which can be ignored
cfssl gencert \
  -ca=/etc/kubernetes/pki/front-proxy-ca.pem \
  -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

Generate the controller-manager certificate:
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
# Set a cluster entry
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.10.100:8443 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Set a context
kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Set a user entry
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Generate the scheduler certificate and kubeconfig
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.10.100:8443 \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

# Generate the admin certificate and kubeconfig
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.10.100:8443 \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin \
  --client-certificate=/etc/kubernetes/pki/admin.pem \
  --client-key=/etc/kubernetes/pki/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes \
  --cluster=kubernetes \
  --user=kubernetes-admin \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

Create the ServiceAccount key pair (secret):
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

Send the certificates to the other master nodes:
for NODE in k8s-master02 k8s-master03; do
  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
    scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}
  done
  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
    scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}
  done
done

Check the certificates:
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/
admin.csr      apiserver-key.pem  ca.pem                      front-proxy-ca.csr      front-proxy-client-key.pem  scheduler.csr
admin-key.pem  apiserver.pem      controller-manager.csr      front-proxy-ca-key.pem  front-proxy-client.pem      scheduler-key.pem
admin.pem      ca.csr             controller-manager-key.pem  front-proxy-ca.pem      sa.key                      scheduler.pem
apiserver.csr  ca-key.pem         controller-manager.pem      front-proxy-client.csr  sa.pub
# 23 files is the right count
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/ |wc -l
23

4. Configure the k8s system components

4.1 etcd configuration

master01:
cat /etc/etcd/etcd.config.yml
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.10.2:2380'
listen-client-urls: 'https://192.168.10.2:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.10.2:2380'
advertise-client-urls: 'https://192.168.10.2:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.10.2:2380,k8s-master02=https://192.168.10.3:2380,k8s-master03=https://192.168.10.4:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

master02:
cat /etc/etcd/etcd.config.yml
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.10.3:2380'
listen-client-urls: 'https://192.168.10.3:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.10.3:2380'
advertise-client-urls: 'https://192.168.10.3:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.10.2:2380,k8s-master02=https://192.168.10.3:2380,k8s-master03=https://192.168.10.4:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

master03:
cat /etc/etcd/etcd.config.yml
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.10.4:2380'
listen-client-urls: 'https://192.168.10.4:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.10.4:2380'
advertise-client-urls: 'https://192.168.10.4:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.10.2:2380,k8s-master02=https://192.168.10.3:2380,k8s-master03=https://192.168.10.4:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

4.2 Create the services (all master nodes)

Create etcd.service and start it:
vim /usr/lib/systemd/system/etcd.service
cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

Create the etcd certificate directory:
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

Check etcd status:
export ETCDCTL_API=3
etcdctl --endpoints="192.168.10.4:2379,192.168.10.3:2379,192.168.10.2:2379" \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.10.4:2379 | 56875ab4a12c94e8 |   3.5.1 |   25 kB |     false |      false |         2 |          8 |                  8 |        |
| 192.168.10.3:2379 | 33df6a8fe708d3fd |   3.5.1 |   25 kB |      true |      false |         2 |          8 |                  8 |        |
| 192.168.10.2:2379 | 58fbe5ec9743048f |   3.5.1 |   20 kB |     false |      false |         2 |          8 |                  8 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
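Beyond status, you can also ask each member for an explicit health verdict (a sketch reusing the same endpoints and certificates as above):

etcdctl --endpoints="192.168.10.4:2379,192.168.10.3:2379,192.168.10.2:2379" \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  endpoint health --write-out=table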
5. High-availability setup

Run the following on lb01 and lb02. Install keepalived and haproxy (build newer versions yourself if you prefer):

yum -y install keepalived haproxy

Edit the haproxy configuration (identical on both machines):
[root@lb01 ~]# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
[root@lb01 ~]# vim /etc/haproxy/haproxy.cfg
[root@lb01 ~]# cat /etc/haproxy/haproxy.cfg
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 192.168.10.2:6443 check
  server k8s-master02 192.168.10.3:6443 check
  server k8s-master03 192.168.10.4:6443 check

Configure keepalived on lb01 (MASTER node):
[root@lb01 ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@lb01 ~]# vim /etc/keepalived/keepalived.conf
[root@lb01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.10.7
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.10.100
    }
    track_script {
        chk_apiserver
    }
}

Configure keepalived on lb02 (BACKUP node):
[root@lb02 ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@lb02 ~]# vim /etc/keepalived/keepalived.conf
[root@lb02 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.10.8
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.10.100
    }
    track_script {
        chk_apiserver
    }
}

Configure the health-check script (both lb hosts):
vim /etc/keepalived/check_apiserver.sh
cat /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh

Start the services:
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

Test the high-availability setup:
# The VIP should answer pings
[root@k8s-node02 ~]# ping 192.168.10.100
# And accept telnet connections
[root@k8s-node02 ~]# telnet 192.168.10.100 8443
# Shut down the MASTER lb and check whether the VIP fails over to the BACKUP node
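To see which load balancer currently holds the VIP, check the interface directly (a sketch, assuming the ens33 interface from the keepalived configs):

# run on lb01 and lb02; only the current holder prints the VIP
ip addr show ens33 | grep -w 192.168.10.100
# stop keepalived on the holder and the VIP should appear on the other box
systemctl stop keepalived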
6. Configure the k8s components (distinct from section 4)

Create the following directories on all k8s nodes:

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1 Create the apiserver service (all master nodes)

master01:
vim /usr/lib/systemd/system/kube-apiserver.service
cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.10.2 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.10.2:2379,https://192.168.10.3:2379,https://192.168.10.4:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

master02:
vim /usr/lib/systemd/system/kube-apiserver.service
cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.10.3 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.10.2:2379,https://192.168.10.3:2379,https://192.168.10.4:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

master03:
vim /usr/lib/systemd/system/kube-apiserver.service
cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.10.4 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.10.2:2379,https://192.168.10.3:2379,https://192.168.10.4:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Start the apiserver (all master nodes):
systemctl daemon-reload && systemctl enable --now kube-apiserver
# Make sure it actually came up
systemctl status kube-apiserver
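You can also probe the health endpoint on each master and through the VIP (a sketch; -k skips certificate verification, and /healthz is readable anonymously under the default RBAC bootstrap policy):

# on each master
curl -k https://127.0.0.1:6443/healthz
# through haproxy/keepalived
curl -k https://192.168.10.100:8443/healthz
# both should print: ok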
6.2 Configure the kube-controller-manager service

All master nodes, identical configuration. 172.16.0.0/12 is the pod CIDR; set your own as needed.

vim /usr/lib/systemd/system/kube-controller-manager.service
cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

Start kube-controller-manager and check its status:
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager

6.3 Configure the kube-scheduler service

All master nodes, identical configuration.
vim /usr/lib/systemd/system/kube-scheduler.service
cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

Start the service and check its status:
systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler

7. TLS Bootstrapping

Configure on master01:
cd /root/k8s-ha-install/bootstrap
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.10.100:8443 \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user \
  --token=c8ad9c.2e4d610cf3e7426e \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes \
  --cluster=kubernetes \
  --user=tls-bootstrap-token-user \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
# The token lives in bootstrap.secret.yaml; if you change it, change it there too
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
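The token above has the form token-id.token-secret and must match what bootstrap.secret.yaml defines, or kubelets will never manage to bootstrap. A quick consistency check (a sketch; field names assumed from the dotbalo manifests):

grep -E 'token-id|token-secret' bootstrap.secret.yaml
# expect: token-id c8ad9c and token-secret 2e4d610cf3e7426e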
Check the cluster status; if it looks good, carry on:

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}

kubectl create -f bootstrap.secret.yaml

8. Node configuration

8.1 Copy the certificates from master01 to the nodes

cd /etc/kubernetes/
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  ssh $NODE mkdir -p /etc/kubernetes/pki
  for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
    scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
  done
done

8.2 kubelet configuration

Create the working directories on all k8s nodes:
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

Configure the kubelet service on all k8s nodes:
vim /usr/lib/systemd/system/kubelet.service
cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

Add the kubelet service drop-in on all k8s nodes:
vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
cat /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

Create the kubelet configuration file on all k8s nodes:
vim /etc/kubernetes/kubelet-conf.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s

Start kubelet:
systemctl daemon-reload
systemctl enable --now kubelet

Check the cluster:
kubectl get node
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   <none>   18m   v1.23.3
k8s-master02   NotReady   <none>   18m   v1.23.3
k8s-master03   NotReady   <none>   18m   v1.23.3
k8s-node01     NotReady   <none>   18m   v1.23.3
k8s-node02     NotReady   <none>   18m   v1.23.3

8.3 kube-proxy configuration

Perform this configuration only on master01:
cd /root/k8s-ha-install
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy \
  --clusterrole system:node-proxier \
  --serviceaccount kube-system:kube-proxy
SECRET=$(kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET --output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.10.100:8443 \
  --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
kubectl config set-credentials kubernetes \
  --token=${JWT_TOKEN} \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=kubernetes \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

Send the kubeconfig to the other nodes:
for NODE in k8s-master02 k8s-master03; do
  scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done
for NODE in k8s-node01 k8s-node02; do
  scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done

Add the kube-proxy service file and configuration on all k8s nodes:
vim /usr/lib/systemd/system/kube-proxy.service
cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

vim /etc/kubernetes/kube-proxy.yaml
cat /etc/kubernetes/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms

Start kube-proxy:
systemctl daemon-reload
systemctl enable --now kube-proxy
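Since the config above runs kube-proxy in ipvs mode, you can confirm that service rules are being programmed (a sketch; assumes the ipvsadm tool is installed):

# the kubernetes service VIP 10.96.0.1:443 should appear with the three apiserver endpoints behind it
ipvsadm -Ln | grep -A3 '10.96.0.1:443'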
9. Install Calico

Perform the following only on master01. Change the Calico pod CIDR:

cd /root/k8s-ha-install/calico/
sed -i "s#POD_CIDR#172.16.0.0/12#g" calico.yaml
grep "IPV4POOL_CIDR" calico.yaml -A 1
            - name: CALICO_IPV4POOL_CIDR
              value: "172.16.0.0/12"
# Apply it
kubectl apply -f calico.yaml

Check the pod status:
kubectl get po -n kube-system
NAME                                       READY   STATUS    RESTARTS        AGE
calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0               4m24s
calico-node-fvbdq                          1/1     Running   1 (2m51s ago)   4m23s
calico-node-g8nqd                          1/1     Running   0               4m23s
calico-node-mdps8                          1/1     Running   0               4m24s
calico-node-nf4nt                          1/1     Running   0               4m24s
calico-node-sq2ml                          1/1     Running   0               4m24s
calico-typha-8445487f56-mg6p8              1/1     Running   0               4m24s
calico-typha-8445487f56-pxbpj              1/1     Running   0               4m24s
calico-typha-8445487f56-tnssl              1/1     Running   0               4m24s
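Once Calico is up, the nodes should flip from NotReady to Ready. Rather than polling by hand, you can block until they all report Ready (a small sketch):

kubectl wait --for=condition=Ready node --all --timeout=300s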
10. Install CoreDNS

Perform the following only on master01. Edit the file:

cd /root/k8s-ha-install/
vim CoreDNS/coredns.yaml
# the clusterIP must sit inside the service CIDR
clusterIP: 10.96.0.10

Install it:
kubectl create -f CoreDNS/coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

11. Install Metrics Server

Perform the following only on master01. In current Kubernetes releases, system resource metrics are collected by metrics-server, which reports node and Pod CPU and memory usage.

Install metrics-server:
cd /root/k8s-ha-install/metrics-server
kubectl create -f .
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Wait a moment, then check:
kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   244m         12%    1152Mi          30%
k8s-master02   357m         17%    1228Mi          32%
k8s-master03   230m         11%    1232Mi          32%
k8s-node01     98m          4%     642Mi           16%
k8s-node02     90m          4%     644Mi           16%

12. Cluster verification

Create a pod:
cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Use the pod to resolve the kubernetes service in the default namespace:
[root@k8s-master01 dashboard]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   178m
[root@k8s-master01 dashboard]# kubectl exec busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

Test that resolution works across namespaces:
[root@k8s-master01 dashboard]# kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Every node must be able to reach the kubernetes service on 443 and the kube-dns service on 53:
telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.
Connection closed by foreign host.

curl 10.96.0.10:53
curl: (52) Empty reply from server
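To run the same reachability check on every node in one pass (a sketch; it relies on the passwordless SSH from 1.3 and uses bash's /dev/tcp instead of telnet):

for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  echo "=== $i ==="
  # each node must reach the kubernetes svc on 443 and kube-dns on 53
  ssh $i "timeout 2 bash -c 'echo > /dev/tcp/10.96.0.1/443' && echo '10.96.0.1:443 ok' || echo '10.96.0.1:443 FAILED'"
  ssh $i "timeout 2 bash -c 'echo > /dev/tcp/10.96.0.10/53' && echo '10.96.0.10:53 ok' || echo '10.96.0.10:53 FAILED'"
done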
Pods must be able to reach each other:

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   192.168.10.2     k8s-master01   <none>           <none>
calico-node-g8nqd                          1/1     Running   0             77m   192.168.10.5     k8s-node01     <none>           <none>
calico-node-mdps8                          1/1     Running   0             77m   192.168.10.6     k8s-node02     <none>           <none>
calico-node-nf4nt                          1/1     Running   0             77m   192.168.10.4     k8s-master03   <none>           <none>
calico-node-sq2ml                          1/1     Running   0             77m   192.168.10.3     k8s-master02   <none>           <none>
calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   192.168.10.6     k8s-node02     <none>           <none>
calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   192.168.10.2     k8s-master01   <none>           <none>
calico-typha-8445487f56-tnssl              1/1     Running   0             77m   192.168.10.5     k8s-node01     <none>           <none>
coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>

# Enter busybox and ping pods on other nodes
kubectl exec -ti busybox -- sh
/ # ping 192.168.10.5
PING 192.168.10.5 (192.168.10.5): 56 data bytes
64 bytes from 192.168.10.5: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.10.5: seq=1 ttl=63 time=0.668 ms
64 bytes from 192.168.10.5: seq=2 ttl=63 time=0.637 ms
64 bytes from 192.168.10.5: seq=3 ttl=63 time=0.624 ms
64 bytes from 192.168.10.5: seq=4 ttl=63 time=0.907 ms
# Connectivity here shows the pod can talk across namespaces and across hosts

Create three replicas and confirm they land on different nodes (delete them when done):
[root@k8s-master01 ~]# kubectl create deploy nginx --image=nginx --replicas=3
deployment.apps/nginx created
[root@k8s-master01 ~]# kubectl get po -owide
NAME                     READY   STATUS              RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
busybox                  1/1     Running             0          25m   172.27.14.193   k8s-node02     <none>           <none>
nginx-85b98978db-2cxkl   0/1     ContainerCreating   0          18s   <none>          k8s-node02     <none>           <none>
nginx-85b98978db-dvmnx   0/1     ContainerCreating   0          18s   <none>          k8s-master02   <none>           <none>
nginx-85b98978db-lmwl7   0/1     ContainerCreating   0          18s   <none>          k8s-master03   <none>           <none>
# Remove nginx
[root@k8s-master01 ~]# kubectl delete deploy nginx

If all the checks above pass, the cluster is basically ready for normal use. (That was no small feat!)

13. Install the dashboard
Install the dashboard:
cd /root/k8s-ha-install/dashboard/
kubectl create -f .
# Output
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Create an administrator user:
vim admin.yaml
cat admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Apply the yaml:
kubectl apply -f admin.yaml -n kube-system

Change the dashboard service type to NodePort (skip if it already is):
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  type: NodePort

Check the port number:
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.109.245.229   <none>        443:31608/TCP   7m32s

Retrieve the login token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-b29sl
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 66516eff-7898-4fdc-aeb5-844cf09897e7

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1363 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InRRR3ppeFhhOWs5ZHU2VEhDMGxQT2pjMGhDSjVMSGVsRm5PWVpRdlloRlkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWIyOXNsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2NjUxNmVmZi03ODk4LTRmZGMtYWViNS04NDRjZjA5ODk3ZTciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.yZEczshc-dnf7DMdQHm2tBngjJD2KDOLfUEi_8i-4YIgmew6it3JPlhEMGTbFdcMSqm85JxJ2rwXMAplCgtKcJdiARKMgb1umpoIcRSoVYeZC4jzR5Z1f3cVHSNdbEoOfSMEJpdGVjI7l2naDMQxvehngEmQD8EOJe_9Ua9yeL5uoN5QpnzjQY1n7n7kHIOcJq9yNYCxtUFs_rBavAZvCDR-rvGnkxu-7E3tncf0LPxm9PEOKyHgXyf2socFLgFNp3aOX23H9vwhSwTUYot0iJRrP-ZbjfqjmPDPoKPi1s57q6iEC871xUNvbm1QdlQDLYT3XkjNSm7--NHEUr-lhA

Log in to the dashboard on the NodePort shown above; note that the URL must start with https://.
14. Key production settings (apply as needed)
Give certificates signed by kube-controller-manager a 100-year validity (whether or not it fully takes effect, set it first):
vim /usr/lib/systemd/system/kube-controller-manager.service
# Add this somewhere under [Service]
--cluster-signing-duration=876000h0m0s
# Restart
systemctl daemon-reload
systemctl restart kube-controller-manager
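Once a node has bootstrapped, you can confirm the signing duration actually applied by reading the validity window of its kubelet client certificate (a sketch; the path assumes kubelet's default cert directory):

# notAfter should be roughly 100 years out if the flag took effect
openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -dates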
Hardening against vulnerability scanners:

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
Environment="KUBELET_EXTRA_ARGS=--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

Reserve resources for system daemons (size to your needs):
vim /etc/kubernetes/kubelet-conf.yml
rotateServerCertificates: true
allowedUnsafeSysctls:
- "net.core*"
- "net.ipv4.*"
kubeReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
systemReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi

Keep data disks separate from the system disk, and put etcd on SSDs.
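After restarting kubelet you can confirm the reservations by comparing a node's Capacity with its Allocatable, which should now differ by the reserved amounts (a quick check, using k8s-node01 as an example):

kubectl describe node k8s-node01 | grep -A 6 -E '^(Capacity|Allocatable):'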



