kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and each release adjusts some of kubeadm's cluster-configuration practices, so experimenting with kubeadm is a good way to learn the Kubernetes project's latest best practices for cluster configuration.
Operating system
Ubuntu 16.04+, Debian 9, CentOS 7, RHEL 7, Fedora 25/26 (best-effort), and others
2 GB of RAM or more, 2 CPU cores or more
Full network connectivity between all nodes in the cluster
A unique hostname, MAC address, and product_uuid on every node
Check MAC addresses with ip link or ifconfig -a
Check the product_uuid with cat /sys/class/dmi/id/product_uuid
Swap disabled; the kubelet will not work properly otherwise (a quick check script for these requirements follows below)
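Where these requirements need verifying, a rough check script like the following can be run on each node (a sketch; the thresholds mirror the list above):

#!/bin/bash
# rough per-node preflight check for the requirements listed above
echo "CPUs:         $(nproc)"                               # want 2 or more
echo "Memory (MB):  $(free -m | awk '/^Mem:/{print $2}')"   # want 2048 or more
echo "Active swap:  $(swapon --show | wc -l) line(s)"       # want 0 (swap disabled)
echo "product_uuid: $(cat /sys/class/dmi/id/product_uuid)"  # must differ between nodes
ip link | awk '/link\/ether/{print "MAC: " $2}'             # must differ between nodes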
1. Preparation
1.1 System configuration
The hostname-to-IP mapping:
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
::1           localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.201 k8s-master
192.168.1.202 k8s-node1
192.168.1.203 k8s-node2
If a firewall is enabled on the hosts, the ports required by the various Kubernetes components must be opened; see the "Check required ports" section of the Installing kubeadm documentation. For simplicity, disable the firewall on every node:
systemctl stop firewalld
systemctl disable firewalld
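Alternatively, if firewalld must stay enabled, the required ports can be opened instead of disabling the firewall entirely. A sketch based on the port list for Kubernetes 1.16 (double-check against "Check required ports" for your version):

# on the master (control plane)
firewall-cmd --permanent --add-port=6443/tcp          # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd server client API
firewall-cmd --permanent --add-port=10250-10252/tcp   # kubelet, kube-scheduler, kube-controller-manager
# on the worker nodes
firewall-cmd --permanent --add-port=10250/tcp         # kubelet API
firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort Services
firewall-cmd --reload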
Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
or:
vi /etc/selinux/config
SELINUX=disabled
Turn off swap:
swapoff -a       # temporary
vim /etc/fstab   # permanent: comment out the swap line
Synchronize the system time:
yum install ntpdate -y
ntpdate ntp.api.bz
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Apply the changes:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
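Note that modprobe only loads br_netfilter for the current boot; to have it loaded again automatically after a reboot, one option is a systemd modules-load.d entry:

# load br_netfilter at every boot (read by systemd-modules-load)
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf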
1.2 Prerequisites for enabling IPVS in kube-proxy
IPVS is already part of the mainline kernel, but enabling IPVS mode in kube-proxy still requires the following kernel modules to be loaded. Run the script below on all Kubernetes nodes (here node1 and node2):
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules

The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the required kernel modules are loaded.
Next, make sure the ipset package is installed on every node:
yum install ipset

To make it easier to inspect the IPVS proxy rules, it is also worth installing the management tool ipvsadm:
yum install ipvsadm

1.3 Install Docker
Kubernetes has used the CRI (Container Runtime Interface) since version 1.6. The default container runtime is still Docker, through the dockershim CRI implementation built into the kubelet.
Add the Docker yum repository:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

List the available Docker versions:
yum list docker-ce.x86_64 --showduplicates | sort -r

[root@go-docker ~]# yum list docker-ce.x86_64 --showduplicates | sort -r
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror, langpacks
 * extras: mirrors.aliyun.com
docker-ce.x86_64            3:19.03.5-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.4-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.3-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.2-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.1-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.0-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.9-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.8-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.7-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.6-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.5-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.4-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.3-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.2-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.1-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.3.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.2.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.3.ce-1.el7                    docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable
 * base: mirrors.aliyun.com
Available Packages

Kubernetes 1.16 currently supports Docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09. Here Docker 18.09.7 is installed on every node.
yum makecache fast
yum install -y --setopt=obsoletes=0 docker-ce-18.09.7-3.el7
systemctl start docker
systemctl enable docker

Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:
iptables -nvL

[root@k8s-master ~]# iptables -nvL
Chain INPUT (policy ACCEPT 20 packets, 2866 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 19 packets, 2789 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

1.4 Change the Docker cgroup driver to systemd
According to the CRI installation documentation, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver makes the node more stable under resource pressure. Therefore, change the Docker cgroup driver to systemd on every node.
Create or edit /etc/docker/daemon.json:

vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker:
systemctl restart docker
docker info | grep Cgroup
Cgroup Driver: systemd

2. Deploying Kubernetes with kubeadm
kubeadm: the command that bootstraps the cluster
kubelet: the agent that runs workloads on every node in the cluster
kubectl: the command-line management tool

Because of network restrictions in mainland China, some image registries cannot be reached, so the required images are downloaded manually from mirrors and retagged:
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.4
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.4 k8s.gcr.io/kube-apiserver:v1.16.4
docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.4
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.16.4
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.16.4 k8s.gcr.io/kube-controller-manager:v1.16.4
docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.16.4
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.16.4
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.16.4 k8s.gcr.io/kube-scheduler:v1.16.4
docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.16.4
# this image is needed on both master and node
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.16.4
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.16.4 k8s.gcr.io/kube-proxy:v1.16.4
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.16.4
# this image is needed on both master and node
docker pull registry.aliyuncs.com/google_containers/pause:3.1
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi registry.aliyuncs.com/google_containers/pause:3.1
docker pull registry.aliyuncs.com/google_containers/coredns:1.6.2
docker tag registry.aliyuncs.com/google_containers/coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
docker rmi registry.aliyuncs.com/google_containers/coredns:1.6.2
docker pull registry.aliyuncs.com/google_containers/etcd:3.3.15-0
docker tag registry.aliyuncs.com/google_containers/etcd:3.3.15-0 k8s.gcr.io/etcd:3.3.15-0
docker rmi registry.aliyuncs.com/google_containers/etcd:3.3.15-0
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1 gcr.io/kubernetes-helm/tiller:v2.14.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5 k8s.gcr.io/metrics-server-amd64:v0.3.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5
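The pull/tag/rmi pattern above is repetitive enough to script; a minimal sketch for the k8s.gcr.io images, assuming the same Aliyun mirror and versions as above:

#!/bin/bash
# pull from the Aliyun mirror, retag as k8s.gcr.io, then remove the mirror tag
images=(kube-apiserver:v1.16.4 kube-controller-manager:v1.16.4 kube-scheduler:v1.16.4
        kube-proxy:v1.16.4 pause:3.1 coredns:1.6.2 etcd:3.3.15-0)
for img in "${images[@]}"; do
    docker pull "registry.aliyuncs.com/google_containers/${img}"
    docker tag  "registry.aliyuncs.com/google_containers/${img}" "k8s.gcr.io/${img}"
    docker rmi  "registry.aliyuncs.com/google_containers/${img}"
done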
Official Kubernetes YUM repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

Or add the Aliyun Kubernetes YUM repository instead:
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Run the installation:
yum makecache fast
yum install -y kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4

The installation output shows that three dependencies are pulled in as well: cri-tools, kubernetes-cni, and socat:
Since Kubernetes 1.14 the kubernetes-cni dependency has been upgraded to version 0.7.5
socat is a dependency of the kubelet
cri-tools is the command-line tool for the CRI (Container Runtime Interface)
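A quick way to confirm what was installed:

kubeadm version -o short                 # expect v1.16.4
kubelet --version                        # expect Kubernetes v1.16.4
rpm -q kubernetes-cni cri-tools socat    # the three dependencies mentioned above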
Running kubelet --help shows that most of the kubelet's command-line flags are now DEPRECATED, for example:
--address 0.0.0.0 The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Instead, the official recommendation is to use --config to point the kubelet at a configuration file and set these options there; see Set Kubelet parameters via a config file for details. Kubernetes moved in this direction to support Dynamic Kubelet Configuration; see Reconfigure a Node's Kubelet in a Live Cluster.
The kubelet configuration file must be in JSON or YAML format.
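For reference, a minimal sketch of what such a configuration file looks like (kubeadm generates its own at /var/lib/kubelet/config.yaml during the init step below; field names follow the kubelet.config.k8s.io/v1beta1 API, and the values here are illustrative):

# sketch of a kubelet config file (illustrative values)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0              # replaces the deprecated --address flag
cgroupDriver: systemd         # must match the Docker cgroup driver set in section 1.4
clusterDomain: cluster.local
clusterDNS:
- 10.1.0.10                   # the 10th address of the service CIDR used below
failSwapOn: false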
Kubernetes has required system swap to be off since 1.8; with the default configuration the kubelet will not start otherwise. Turn swap off with swapoff -a, edit /etc/fstab on all nodes to comment out the automatic swap mount, and confirm with free -m that swap is off. Also adjust the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:
vm.swappiness=0

Then run:
sysctl -p /etc/sysctl.d/k8s.conf

to make the change take effect.
Enable the kubelet service on every node so that it starts on boot:
systemctl enable kubelet.service
systemctl daemon-reload && systemctl restart kubelet

2.1 Initialize the cluster with kubeadm init
Before initializing the master, make sure /etc/sysconfig/kubelet contains:
KUBELET_EXTRA_ARGS=--fail-swap-on=false

Run the following on the master node:
kubeadm init --apiserver-advertise-address=192.168.1.201 \
  --kubernetes-version v1.16.4 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all

or
kubeadm init --apiserver-advertise-address=192.168.1.201 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.16.4 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
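The same settings can also be written as a kubeadm configuration file and passed via kubeadm init --config; a sketch using the kubeadm.k8s.io/v1beta2 API accepted by kubeadm 1.16:

# kubeadm-config.yaml (sketch equivalent to the flags above)
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.201
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.4
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.1.0.0/16
  podSubnet: 10.244.0.0/16

It would then be applied with kubeadm init --config kubeadm-config.yaml.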
The main output is as follows:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.201:6443 --token v5pya1.dly1k110o9oxolo7 --discovery-token-ca-cert-hash sha256:8d411433bb08ba29267226a3f80f66f74fa86562f8daf38a3af57e6330b87fc1

Run the following commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
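If you are working as root, the kubeadm output also offers the alternative of pointing KUBECONFIG at the admin kubeconfig directly:

export KUBECONFIG=/etc/kubernetes/admin.conf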
Run the join command printed at the end on each of the other nodes to add them to the cluster:

kubeadm join 192.168.1.201:6443 --token v5pya1.dly1k110o9oxolo7 --discovery-token-ca-cert-hash sha256:8d411433bb08ba29267226a3f80f66f74fa86562f8daf38a3af57e6330b87fc1
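The bootstrap token printed here is valid for 24 hours by default; if it has expired by the time another node needs to join, generate a fresh join command on the master:

kubeadm token create --print-join-command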
Check the cluster status and confirm that every component is Healthy:

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

2.2 Install a Pod Network
Run the following commands:
mkdir -p ~/k8s/
cd ~/k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

The output is:
[root@k8s-master k8s]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Note that the flannel image referenced in kube-flannel.yml is v0.11.0: quay.io/coreos/flannel:v0.11.0-amd64.
If the image pull fails, pull it manually on every node:
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
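Once the flannel pods are running on every node, the nodes should report Ready; a quick check (the app=flannel label matches the kube-flannel.yml manifest):

kubectl get pods -n kube-system -l app=flannel -o wide
kubectl get nodes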
Install kubernetes-dashboard

First download the YAML file, then modify the image address and change the Service type to NodePort:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
Then pull the kubernetes-dashboard images:

docker pull kubernetesui/dashboard:v2.0.0-beta8
docker pull kubernetesui/metrics-scraper:v1.0.2

Modify the downloaded recommended.yaml:
Change the Deployment's image pull policy: imagePullPolicy: IfNotPresent
Change the Service (note that YAML does not allow tab characters for indentation):

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

Apply recommended.yaml:
kubectl apply -f recommended.yaml

The result:
[root@k8s-master k8s]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Create a ServiceAccount and bind it to the built-in cluster-admin cluster role:
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin

Get the login token for kubernetes-dashboard:
kubectl get secret -n kubernetes-dashboard
kubectl describe secret dashboard-admin-token-bwdjj -n kubernetes-dashboard
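The secret name (dashboard-admin-token-bwdjj above) is generated randomly per cluster; a sketch that looks it up and prints just the token instead of hard-coding the name:

kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin-token/{print $1}') \
  | awk '/^token:/{print $2}'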
Note: to check the kubernetes-dashboard Service:

kubectl --namespace=kubernetes-dashboard get service kubernetes-dashboard

Fixing Chrome being unable to open the Kubernetes Dashboard
mkdir key && cd key

Generate a certificate:
openssl genrsa -out dashboard.key 2048
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.246.200'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt

Delete the existing certificate secret:
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
Create a new certificate secret:
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
View the pods:
kubectl get pod -n kubernetes-dashboard
Restart the dashboard pod:
kubectl delete pod <pod-name> -n kubernetes-dashboard
(where <pod-name> is the name of the kubernetes-dashboard pod)

Deleting kubernetes-dashboard
kubectl get secret,sa,role,rolebinding,services,deployments --namespace=kube-system | grep dashboard
kubectl delete deployment kubernetes-dashboard --namespace=kube-system
kubectl delete service kubernetes-dashboard --namespace=kube-system
kubectl delete role kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete rolebinding kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete sa kubernetes-dashboard --namespace=kube-system
kubectl delete secret kubernetes-dashboard-certs --namespace=kube-system
kubectl delete secret kubernetes-dashboard-key-holder --namespace=kube-system

To log in to the Kubernetes Dashboard, use the token of the account created above:
kubectl get secret -n kube-system
kubectl describe secret dashboard-admin-token-bwdjj -n kube-system

Installing metrics-server in the Kubernetes cluster
Download the latest metrics-server release from GitHub:
wget https://github.com/kubernetes-incubator/metrics-server/archive/v0.3.5.tar.gz
Extract it:
tar zxvf v0.3.5.tar.gz
Modify deploy/1.8+/metrics-server-deployment.yaml, appending the following at deployment.spec.template.spec.containers[0].command:

command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname

Also change the image settings in deploy/1.8+/metrics-server-deployment.yaml to v0.3.5.
Before:
k8s.gcr.io/metrics-server-amd64:v0.3.5
imagePullPolicy: Always
After:
k8s.gcr.io/metrics-server-amd64:v0.3.5
imagePullPolicy: IfNotPresent
Install metrics-server:
kubectl apply -f metrics-server-0.3.5/deploy/1.8+/
Verify:
kubectl top po -n kube-system

[root@k8s-master k8s]# kubectl top pod -n kube-system
NAME                                 CPU(cores)   MEMORY(bytes)
coredns-5644d7b6d9-459zs             2m           10Mi
coredns-5644d7b6d9-v4xlt             2m           10Mi
etcd-k8s-master                      6m           48Mi
kube-apiserver-k8s-master            41m          429Mi
kube-controller-manager-k8s-master   6m           68Mi
kube-flannel-ds-amd64-fkpph          1m           12Mi
kube-flannel-ds-amd64-m45n8          1m           11Mi
kube-flannel-ds-amd64-qjqzd          1m           9Mi
kube-proxy-7xhhs                     1m           20Mi
kube-proxy-dvqbj                     1m           13Mi
kube-proxy-xs9lm                     2m           13Mi
kube-scheduler-k8s-master            2m           24Mi
metrics-server-6946b8b5b5-52fb8      6m           11Mi
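Node-level metrics should now be available as well:

kubectl top node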



