1. Environment information
| Node | IP address | Resources | OS |
| --- | --- | --- | --- |
| k8s-master | 10.0.0.100/24 | 2 CPU / 4 GB RAM / 50 GB disk | CentOS Linux release 7.8.2003 (Core) |
| k8s-node1 | 10.0.0.101/24 | 2 CPU / 4 GB RAM / 50 GB disk | CentOS Linux release 7.8.2003 (Core) |
| k8s-node2 | 10.0.0.102/24 | 2 CPU / 4 GB RAM / 50 GB disk | CentOS Linux release 7.8.2003 (Core) |
2. Stop the firewall && disable SELinux
```shell
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
```
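Before moving on, it is worth confirming both changes actually took effect; a quick check might look like this (output depends on the host):

```shell
# SELinux should report Permissive now, and Disabled after the next reboot
getenforce
# firewalld should report inactive
systemctl is-active firewalld || true
```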
3. Disable the swap partition
```shell
swapoff -a
# Comment out the swap entry in /etc/fstab so swap stays off after a reboot
vim /etc/fstab
```
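Editing /etc/fstab by hand works, but the change can also be scripted. A sketch of the substitution, demonstrated on a sample file (on a real node, point the sed at /etc/fstab instead; the exact swap line varies by install):

```shell
# Demo on a sample fstab; on a real node use /etc/fstab
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /   xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Comment out any uncommented line that mounts swap
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /tmp/fstab.demo
grep swap /tmp/fstab.demo   # the swap line is now commented out
```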
4. Set the hostname && map hostnames to IPs
```shell
# Run on each node, writing that node's own name
# (k8s-master / k8s-node1 / k8s-node2)
cat > /etc/hostname <<EOF
k8s-master
EOF

# Append the name-to-IP mappings on every node
cat >> /etc/hosts <<EOF
10.0.0.100 k8s-master
10.0.0.101 k8s-node1
10.0.0.102 k8s-node2
EOF
```
5. Pass bridged IPv4 traffic to iptables chains
```shell
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# The br_netfilter module must be loaded for these keys to exist
modprobe br_netfilter
sysctl --system
```
6. Install the Docker runtime
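Assuming the settings applied cleanly, they can be verified before continuing; if the net.bridge keys are missing, br_netfilter is not loaded:

```shell
# Module should be listed
lsmod | grep br_netfilter
# Both should print 1
sysctl -n net.bridge.bridge-nf-call-iptables
sysctl -n net.bridge.bridge-nf-call-ip6tables
```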
```shell
# Switch to domestic (Aliyun) mirrors
mkdir -p /etc/yum.repos.d/repo.bak
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/repo.bak
curl -o /etc/yum.repos.d/Centos7-base.repo http://pub.mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/Centos7-epel.repo http://pub.mirrors.aliyun.com/repo/epel-7.repo

# Add the Docker repo (index: https://mirrors.aliyun.com/docker-ce/)
yum -y install yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce

# Start Docker now and on boot
systemctl enable docker && systemctl start docker

# Configure registry mirrors and the systemd cgroup driver
vim /etc/docker/daemon.json
```

```json
{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "https://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

```shell
systemctl daemon-reload
systemctl restart docker
```
7. Add the Kubernetes yum repository
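A stray comma in daemon.json prevents the Docker daemon from starting at all, so it is cheap to syntax-check the file before the restart; a sketch (using the json.tool module shipped with the system Python):

```shell
# Non-zero exit means the JSON is malformed
python -m json.tool /etc/docker/daemon.json > /dev/null && echo "daemon.json OK"
# After the restart, confirm kubeadm will see the systemd cgroup driver
docker info 2>/dev/null | grep -i 'cgroup driver'
```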
```shell
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
```
8. Lock the package versions
```shell
yum install -y yum-plugin-versionlock
yum versionlock add docker-ce docker-ce-cli kubectl kubeadm kubelet
```

```
Loaded plugins: fastestmirror, versionlock
Adding versionlock on: 3:docker-ce-20.10.10-3.el7
Adding versionlock on: 1:docker-ce-cli-20.10.10-3.el7
Adding versionlock on: 0:kubectl-1.22.3-0
Adding versionlock on: 0:kubeadm-1.22.3-0
Adding versionlock on: 0:kubelet-1.22.3-0
versionlock added: 5
```
9. Install Kubernetes on the master node
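The locks can be reviewed (and released later) with the same plugin; a quick check:

```shell
# List every held version
yum versionlock list
# To release a single lock later, e.g. before a planned upgrade:
# yum versionlock delete kubelet
```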
```shell
kubeadm init \
  --apiserver-advertise-address=10.0.0.100 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.22.3 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
```
10. Initialization output
```
[init] Using Kubernetes version: v1.22.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.0.0.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.0.0.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.504452 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 3xrcbh.ne52s40ihurgiqv8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.100:6443 --token 3xrcbh.ne52s40ihurgiqv8 --discovery-token-ca-cert-hash sha256:6a551e4201ca6ba175f66c64916d06a5b37a2784ca2a8faa6541985087052ff3
```
11. Install the CNI plugin
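Note that the bootstrap token in the printed join command expires after 24 hours by default. If a worker is added later, a fresh join command can be generated on the master at any time:

```shell
# Creates a new token and prints the full kubeadm join command for it
kubeadm token create --print-join-command
```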
```shell
wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate
# Change CALICO_IPV4POOL_CIDR to match the --pod-network-cidr=10.244.0.0/16
# passed to kubeadm init
vim calico.yaml
```

In calico.yaml, uncomment the CIDR setting and update its value:

```yaml
# Before:
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
# After:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
```

```shell
kubectl apply -f calico.yaml
kubectl get pod -A
```

```
NAMESPACE         NAME                                       READY   STATUS    RESTARTS      AGE
kube-system       calico-kube-controllers-75f8f6cc59-mbxps   1/1     Running   0             2m20s
kube-system       calico-node-5hx4d                          1/1     Running   0             2m21s
kube-system       calico-node-ddtjt                          1/1     Running   0             2m21s
kube-system       calico-node-n65kw                          1/1     Running   0             2m21s
kube-system       coredns-7f6cbbb7b8-mvd8x                   1/1     Running   0             21m
kube-system       coredns-7f6cbbb7b8-x7b2s                   1/1     Running   0             21m
kube-system       etcd-k8s-master                            1/1     Running   1 (18m ago)   21m
kube-system       kube-apiserver-k8s-master                  1/1     Running   1 (18m ago)   21m
kube-system       kube-controller-manager-k8s-master         1/1     Running   1 (18m ago)   21m
kube-system       kube-proxy-7rxb6                           1/1     Running   1 (18m ago)   21m
kube-system       kube-proxy-9dch7                           1/1     Running   1 (18m ago)   21m
kube-system       kube-proxy-vpjbr                           1/1     Running   1 (18m ago)   20m
kube-system       kube-scheduler-k8s-master                  1/1     Running   1 (18m ago)   21m
tigera-operator   tigera-operator-59f4845b57-skl47           1/1     Running   0             16m
```
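Once the calico-node and coredns pods are Running, the nodes should flip from NotReady to Ready; a final sanity check:

```shell
# All three nodes should report STATUS Ready
kubectl get nodes -o wide
```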



