Check the current kubeadm version:
[root@k8s-node2 ~]# rpm -qi kubeadm
Name        : kubeadm
Version     : 1.20.12
Release     : 0
Architecture: x86_64
Install Date: Fri 12 Nov 2021 05:32:45 PM CST
Group       : Unspecified
Size        : 39219972
License     : ASL 2.0
Signature   : RSA/SHA512, Thu 28 Oct 2021 11:10:53 AM CST, Key ID f09c394c3e1ba8d5
Source RPM  : kubelet-1.20.12-0.src.rpm
Build Date  : Thu 28 Oct 2021 11:05:04 AM CST
Build Host  : 5b2f91fa31c0
Relocations : (not relocatable)
URL         : https://kubernetes.io
Summary     : Command-line utility for administering a Kubernetes cluster.
Description :
Command-line utility for administering a Kubernetes cluster.
According to the official site, the latest release at the moment is 1.22.3.
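The versions actually shipped by the yum repo can also be listed from the command line; a quick sketch (the --disableexcludes=kubernetes option is only needed if the kubernetes repo is covered by an exclude= setting in yum.conf):

# List every kubeadm build the repo offers, newest entries last
yum list kubeadm --showduplicates --disableexcludes=kubernetes | tail -n 20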
Install the latest version:
[root@k8s-master yum.repos.d]# yum install kubeadm-1.22.3 -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * epel: ftp.iij.ad.jp
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.20.12-0 will be updated
---> Package kubeadm.x86_64 0:1.22.3-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package           Arch            Version            Repository          Size
================================================================================
Updating:
 kubeadm           x86_64          1.22.3-0           kubernetes         9.3 M

Transaction Summary
================================================================================
Upgrade  1 Package

Total download size: 9.3 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
53279f6bb52408495ff752110742095df27fc8f65bffe50df402ac86f51c2a67-kubeadm-1.22.3-0.x86_64.rpm  | 9.3 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubeadm-1.22.3-0.x86_64                                      1/2
  Cleanup    : kubeadm-1.20.12-0.x86_64                                     2/2
  Verifying  : kubeadm-1.22.3-0.x86_64                                      1/2
  Verifying  : kubeadm-1.20.12-0.x86_64                                     2/2

Updated:
  kubeadm.x86_64 0:1.22.3-0

Complete!
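Before planning the upgrade it is worth confirming which kubeadm binary is now on the PATH; it should report v1.22.3 at this point:

# Print only the version string of the installed kubeadm CLI
kubeadm version -o short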
Check the kubeadm upgrade plan:
[root@k8s-master yum.repos.d]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade/config] FATAL: this version of kubeadm only supports deploying clusters with the control plane version >= 1.21.0. Current version: v1.20.12
To see the stack trace of this error execute with --v=5 or higher
The check shows that for now the cluster can only be taken to 1.21.0, presumably because kubeadm cannot skip minor versions during an upgrade, so we have to upgrade to 1.21.0 first.
First remove the kubeadm 1.22.3 package that was just installed, then install 1.21.0.
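The removal step itself is not captured in the transcript below (which is why yum later warns "RPMDB altered outside of yum"); presumably something along these lines was used, sketched here only as an illustration:

# Remove just the kubeadm package, leaving kubelet and kubectl untouched
rpm -e kubeadm-1.22.3-0.x86_64
# Alternatively, yum can do the removal and install in one step:
# yum downgrade kubeadm-1.21.0-0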
[root@k8s-master yum.repos.d]# rpm -qa | grep kubeadm
kubeadm-1.22.3-0.x86_64
[root@k8s-master yum.repos.d]# yum install kubeadm-1.21.0
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * epel: ftp.iij.ad.jp
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.21.0-0 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package           Arch            Version            Repository          Size
================================================================================
Installing:
 kubeadm           x86_64          1.21.0-0           kubernetes         9.1 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 9.1 M
Installed size: 43 M
Is this ok [y/d/N]: y
Downloading packages:
dc4816b13248589b85ee9f950593256d08a3e6d4e419239faf7a83fe686f641c-kubeadm-1.21.0-0.x86_64.rpm  | 9.1 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : kubeadm-1.21.0-0.x86_64                                      1/1
  Verifying  : kubeadm-1.21.0-0.x86_64                                      1/1

Installed:
  kubeadm.x86_64 0:1.21.0-0

Complete!
Because k8s.gcr.io is not reachable from mainland China, running kubeadm upgrade apply v1.21.0 directly fails with the following errors:
[root@k8s-master yum.repos.d]# kubeadm upgrade apply v1.21.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.0"
[upgrade/versions] Cluster version: v1.20.12
[upgrade/versions] kubeadm version: v1.21.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.21.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.21.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.21.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.21.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.4.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns/coredns:v1.8.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
So instead we pull the v1.21.0 images from the Alibaba Cloud mirror registry.cn-hangzhou.aliyuncs.com/google_containers and then retag them as the k8s.gcr.io names kubeadm expects.
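Before pulling anything, the exact set of images and tags this kubeadm release expects can be printed first, for example:

# Print the full list of control-plane images required for v1.21.0
kubeadm config images list --kubernetes-version v1.21.0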
[root@k8s-master yum.repos.d]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.0
v1.21.0: Pulling from google_containers/kube-proxy
1813d21adc01: Pull complete
98435e81eab0: Pull complete
80b8c7bdf1f1: Pull complete
0a60cc3d3112: Pull complete
061309079d30: Pull complete
c9b12ef85413: Pull complete
a7746f5cce34: Pull complete
Digest: sha256:326199e7a5232bf7531a3058e9811c925b07085f33fa882558cc4e89379b9109
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.0
[root@k8s-master yum.repos.d]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
v1.21.0: Pulling from google_containers/kube-apiserver
d94d38b8f0e6: Pull complete
f1880506fee2: Pull complete
1af65bcd2386: Pull complete
Digest: sha256:828fefd9598ed865d45364d1be859c87aabfa445b03b350e3440d143bd21bca9
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
[root@k8s-master yum.repos.d]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
v1.21.0: Pulling from google_containers/kube-controller-manager
d94d38b8f0e6: Already exists
f1880506fee2: Already exists
bff3737c2cd2: Pull complete
Digest: sha256:92414283b8a8ba52ad04691a7124aea042e3f2ec3f6384efc5b08da3e100442d
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
[root@k8s-master yum.repos.d]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
v1.21.0: Pulling from google_containers/kube-scheduler
d94d38b8f0e6: Already exists
f1880506fee2: Already exists
31e7e233174b: Pull complete
Digest: sha256:1bcafcb4a0c3105fe08018f34c0e43a10a5d696fc8598b1c705116bcc773726f
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
Then retag the images:
[root@k8s-master yum.repos.d]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.0 k8s.gcr.io/kube-proxy:v1.21.0
[root@k8s-master yum.repos.d]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.0 k8s.gcr.io/kube-apiserver:v1.21.0
[root@k8s-master yum.repos.d]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0 k8s.gcr.io/kube-controller-manager:v1.21.0
[root@k8s-master yum.repos.d]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.0 k8s.gcr.io/kube-scheduler:v1.21.0
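The pull-and-retag steps can also be scripted instead of typed one by one; a minimal sketch assuming the same Alibaba Cloud mirror (adjust the component list as needed):

MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
    # Pull from the mirror, then add the k8s.gcr.io tag that kubeadm expects
    docker pull ${MIRROR}/${img}:v1.21.0
    docker tag  ${MIRROR}/${img}:v1.21.0 k8s.gcr.io/${img}:v1.21.0
done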
With the images in place, start the kubeadm upgrade:
[root@k8s-master yum.repos.d]# kubeadm upgrade apply v1.21.0
However, upgrading to 1.21.0 also requires new pause and coredns images, so it still fails:
[root@k8s-master yum.repos.d]# kubeadm upgrade apply v1.21.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.0"
[upgrade/versions] Cluster version: v1.20.12
[upgrade/versions] kubeadm version: v1.21.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.4.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns/coredns:v1.8.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Handle these the same way. Note that on the Alibaba Cloud mirror the CoreDNS image lives at google_containers/coredns rather than google_containers/coredns/coredns, which is why the first pull below fails:
[root@k8s-master yum.repos.d]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1
3.4.1: Pulling from google_containers/pause
fac425775c9d: Pull complete
Digest: sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1
[root@k8s-master yum.repos.d]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login'
[root@k8s-master yum.repos.d]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0
v1.8.0: Pulling from google_containers/coredns
c6568d217a00: Already exists
5984b6d55edf: Pull complete
Digest: sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0
[root@k8s-master yum.repos.d]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1 k8s.gcr.io/pause:3.4.1
[root@k8s-master yum.repos.d]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
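Before retrying the upgrade it is easy to confirm that every tag kubeadm asked for is now present locally, for example:

# All six k8s.gcr.io images required for v1.21.0 should show up here
docker images | grep '^k8s.gcr.io'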
Run the upgrade again:
[root@k8s-master yum.repos.d]# kubeadm upgrade apply v1.21.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.0"
[upgrade/versions] Cluster version: v1.20.12
[upgrade/versions] kubeadm version: v1.21.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.21.0"...
Static pod: kube-apiserver-k8s-master hash: 2d09cc17d46be1939dd72fad6be33899
Static pod: kube-controller-manager-k8s-master hash: 03e6148729695d31a140dfffd6d0498a
Static pod: kube-scheduler-k8s-master hash: f24038546c62f84292fe18147f7d2ef1
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-master hash: 01ff56a9df3c0658b3d229ca8cf3b6ff
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-11-12-18-14-52/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-k8s-master hash: 01ff56a9df3c0658b3d229ca8cf3b6ff
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests117858351"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-11-12-18-14-52/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: 2d09cc17d46be1939dd72fad6be33899
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-11-12-18-14-52/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: 03e6148729695d31a140dfffd6d0498a
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-11-12-18-14-52/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: f24038546c62f84292fe18147f7d2ef1
Static pod: kube-scheduler-k8s-master hash: 98c4dbc724c870519b6f3d945a54b5d4
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upgrade/postupgrade] Applying label node.kubernetes.io/exclude-from-external-load-balancers='' to control plane Nodes
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.21.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
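At this point the static-pod control plane should already be running the new images; one quick way to double-check which image each kube-system pod is using:

# List each kube-system pod together with its first container image
kubectl -n kube-system get pods -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image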
On each worker node, run the node upgrade:
kubeadm upgrade node
Then install the 1.21.0 packages on all nodes and restart kubelet:
yum -y install kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
systemctl daemon-reload
systemctl restart kubelet
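For production clusters the upstream kubeadm documentation recommends draining each worker before upgrading it; a sketch of the per-node sequence (the node name is just an example):

# On the master: cordon and drain the node about to be upgraded
kubectl drain k8s-node1 --ignore-daemonsets

# On that node: upgrade kubeadm, refresh the kubelet config, then the packages
yum -y install kubeadm-1.21.0
kubeadm upgrade node
yum -y install kubelet-1.21.0 kubectl-1.21.0
systemctl daemon-reload && systemctl restart kubelet

# Back on the master: allow workloads to schedule onto the node again
kubectl uncordon k8s-node1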
Check the node status; all nodes now report v1.21.0, so the upgrade to 1.21.0 is complete:
[root@k8s-master yum.repos.d]# kubectl get nodes
NAME         STATUS     ROLES                  AGE    VERSION
k8s-master   NotReady   control-plane,master   311d   v1.21.0
k8s-node1    Ready      <none>                 311d   v1.21.0
k8s-node2    Ready      <none>                 311d   v1.21.0
Then repeat the steps above to move the cluster up to the next version.
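As a rough outline, the same procedure for the next hop (1.21.0 to 1.22.3) would look like the sketch below; the exact image tags for that release (pause, coredns, and so on) should be taken from kubeadm config images list rather than assumed:

# On the master
yum -y install kubeadm-1.22.3
kubeadm upgrade plan
kubeadm config images list --kubernetes-version v1.22.3   # pull and retag these via the mirror, as above
kubeadm upgrade apply v1.22.3

# On each worker node, then restart kubelet everywhere
kubeadm upgrade node
yum -y install kubelet-1.22.3 kubectl-1.22.3
systemctl daemon-reload && systemctl restart kubelet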



