| Hostname | Public IP | Private IP |
|---|---|---|
| k8s-master01 | 39.104.173.77 | 172.24.114.3 |
| k8s-node01 | 39.104.179.210 | 172.24.114.4 |
| k8s-node02 | 39.104.173.12 | 172.24.114.1 |
| k8s-node03 | 39.104.177.2 | 172.24.114.2 |
- Change the hostnames
```bash
# On the VM 172.24.114.3, set the k8s-master01 node
hostnamectl set-hostname k8s-master01
bash   # take effect immediately
# On the VM 172.24.114.4, set the k8s-node01 node
hostnamectl set-hostname k8s-node01
bash   # take effect immediately
# On the VM 172.24.114.1, set the k8s-node02 node
hostnamectl set-hostname k8s-node02
bash   # take effect immediately
# On the VM 172.24.114.2, set the k8s-node03 node
hostnamectl set-hostname k8s-node03
bash   # take effect immediately
```
- Turn off the firewall
```bash
systemctl stop firewalld
systemctl disable firewalld
```
- Allow iptables to see bridged traffic
```bash
# Load br_netfilter and enable the bridge sysctls, as recommended by the official Kubernetes install docs
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```
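To confirm the module and sysctls actually took effect before moving on, a quick verification sketch (not part of the original steps):

```bash
lsmod | grep br_netfilter                      # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables \
       net.bridge.bridge-nf-call-ip6tables     # both should print "= 1"
```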
- Set SELinux to permissive mode (effectively disabling it)
```bash
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```
- Disable swap (Kubernetes disallows swap for performance reasons)
```bash
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out the swap entry in fstab
```
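A quick way to confirm swap is really off on every node (a small verification sketch, not in the original steps):

```bash
swapon --show              # prints nothing when no swap devices are active
free -h | grep -i swap     # the Swap line should read 0B for total/used/free
```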
- Configure /etc/hosts
```bash
# Map the master and node IPs to names; adjust to your own environment
cat >> /etc/hosts << EOF
172.28.12.148 master1
172.28.12.149 node1
EOF
```

6.1 Install Docker
```bash
# Remove any older Docker versions
sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine

# Install Docker
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum makecache fast
sudo yum install docker-ce docker-ce-cli containerd.io -y
sudo systemctl start docker
sudo systemctl enable docker
sudo systemctl status docker
```

6.2 Change the Docker cgroup driver
When running `kubeadm init`, you may hit this warning:

```
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
```

```bash
# Add a configuration file
cat >> /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Restart Docker
systemctl restart docker
systemctl status docker
```
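After the restart, you can confirm the driver change took effect (a quick check, not in the original steps):

```bash
docker info 2>/dev/null | grep -i "cgroup driver"   # should report: Cgroup Driver: systemd
```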
- Configure the Alibaba Cloud Kubernetes yum repository
Possible error: `[Errno -1] repomd.xml signature could not be verified for kubernetes Trying other mirror.`
Reference: https://github.com/kubernetes/kubernetes/issues/60134
Fix: set `repo_gpgcheck=0`.

```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
#repo_gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
- Install kubelet, kubeadm, and kubectl
```bash
sudo yum update -y    # needed after setting repo_gpgcheck=0
# sudo yum install -y kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4   # you can pin a release published on GitHub
# sudo yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0   # you can pin a release published on GitHub
sudo yum install -y kubelet kubeadm kubectl    # preferably leave the version unpinned to get the latest
sudo systemctl enable --now kubelet
sudo systemctl start kubelet
# sudo systemctl status kubelet
```

At this point kubelet is not yet fully healthy; it becomes ready on the master after `kubeadm init`, and on the worker nodes after `kubeadm join`.
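If you do pin specific versions, it can help to lock them so a routine `yum update` does not upgrade the cluster components unexpectedly; an optional sketch using the versionlock plugin:

```bash
yum install -y yum-plugin-versionlock
yum versionlock add kubelet kubeadm kubectl
yum versionlock list    # show the locked packages
```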
- Verify the installation
```bash
yum list installed | grep kubelet
yum list installed | grep kubeadm
yum list installed | grep kubectl
kubelet --version    # check the version; here it prints: Kubernetes v1.23.3
```

2. Creating the cluster with kubeadm
Initialize the cluster with kubeadm (run on the master):
Remember to set `--apiserver-advertise-address` to the master's own IP address (here 172.24.114.3).
```bash
# --apiserver-advertise-address is the master node IP; change it to match your own master
kubeadm init --apiserver-advertise-address 172.24.114.3 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr 10.244.0.0/16 --service-cidr 10.96.0.0/12
# --kubernetes-version v1.23.3   # optional flag; omit it to use the latest version
```
- If this step fails, the master IP is probably wrong or `--apiserver-advertise-address 172.24.114.3` is missing. Reset with:

```bash
kubeadm reset
```
- Then type: y
- Once the `kubeadm init` reset is complete, re-run the `kubeadm init` command above.
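Before running `kubeadm init` for the first time, or again after a reset, you can optionally pre-pull the control-plane images from the mirror so the init step itself is faster; a small optional sketch:

```bash
# Pull the control-plane images ahead of time from the Aliyun mirror used above
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
```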
```bash
# Configure the kubectl client to talk to the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Note: run the next step on the worker nodes.
```bash
# Add the worker node to the cluster ---> run on worker node node1
# This command is generated automatically by kubeadm init; keep it for later use
kubeadm join 172.24.114.3:6443 --token bofh8w.5r6qwmvargj3d0do --discovery-token-ca-cert-hash sha256:0724d03bf5ca008808b4dc9c68643c90e54d36733a487dc7d730dca35a952b89
```

```
[root@iZ0jlhvtxignmaozy30vffZ ~]# kubectl get nodes
NAME                      STATUS     ROLES    AGE     VERSION
iz0jlhvtxignmaozy30vffz   NotReady   master   8m19s   v1.19.4
iz0jlhvtxignmaozy30vfgz   NotReady   <none>   21s     v1.19.4
```
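If you lose the join command printed by `kubeadm init`, the control plane can regenerate a complete one; a small sketch (run on the master):

```bash
# Prints a ready-made "kubeadm join ..." line with a fresh token and the CA cert hash
kubeadm token create --print-join-command
```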
- Add the Flannel pod network --> run on the master node
```bash
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
- If the URL above is slow or unreachable, copy the following content into kube-flannel.yml instead.
- cat kube-flannel.yml
--- apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: psp.flannel.unprivileged annotations: seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default spec: privileged: false volumes: - configMap - secret - emptyDir - hostPath allowedHostPaths: - pathPrefix: "/etc/cni/net.d" - pathPrefix: "/etc/kube-flannel" - pathPrefix: "/run/flannel" readOnlyRootFilesystem: false # Users and groups runAsUser: rule: RunAsAny supplementalGroups: rule: RunAsAny fsGroup: rule: RunAsAny # Privilege Escalation allowPrivilegeEscalation: false defaultAllowPrivilegeEscalation: false # Capabilities allowedCapabilities: ['NET_ADMIN', 'NET_RAW'] defaultAddCapabilities: [] requiredDropCapabilities: [] # Host namespaces hostPID: false hostIPC: false hostNetwork: true hostPorts: - min: 0 max: 65535 # SELinux seLinux: # SELinux is unused in CaaSP rule: 'RunAsAny' --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: flannel rules: - apiGroups: ['extensions'] resources: ['podsecuritypolicies'] verbs: ['use'] resourceNames: ['psp.flannel.unprivileged'] - apiGroups: - "" resources: - pods verbs: - get - apiGroups: - "" resources: - nodes verbs: - list - watch - apiGroups: - "" resources: - nodes/status verbs: - patch --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: flannel roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: flannel subjects: - kind: ServiceAccount name: flannel namespace: kube-system --- apiVersion: v1 kind: ServiceAccount metadata: name: flannel namespace: kube-system --- kind: ConfigMap apiVersion: v1 metadata: name: kube-flannel-cfg namespace: kube-system labels: tier: node app: flannel data: cni-conf.json: | { "name": "cbr0", "cniVersion": "0.3.1", "plugins": [ { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } }, { "type": "portmap", "capabilities": { "portMappings": true } } ] } net-conf.json: | { "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } } --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/os operator: In values: - linux hostNetwork: true priorityClassName: system-node-critical tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni-plugin #image: flannelcni/flannel-cni-plugin:v1.0.1 for ppc64le and mips64le (dockerhub limitations may apply) image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1 command: - cp args: - -f - /flannel - /opt/cni/bin/flannel volumeMounts: - name: cni-plugin mountPath: /opt/cni/bin - name: install-cni #image: flannelcni/flannel:v0.16.3 for ppc64le and mips64le (dockerhub limitations may apply) image: rancher/mirrored-flannelcni-flannel:v0.16.3 command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel #image: flannelcni/flannel:v0.16.3 for ppc64le and mips64le 
(dockerhub limitations may apply) image: rancher/mirrored-flannelcni-flannel:v0.16.3 command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN", "NET_RAW"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ - name: xtables-lock mountPath: /run/xtables.lock volumes: - name: run hostPath: path: /run/flannel - name: cni-plugin hostPath: path: /opt/cni/bin - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg - name: xtables-lock hostPath: path: /run/xtables.lock type: FileOrCreate

```bash
kubectl apply -f ./kube-flannel.yml   # after this is applied, node status changes from NotReady to Ready
```

12. Check the cluster from the control plane (master1)
```
[root@iZ0jlhvtxignmaozy30vffZ ~]# kubectl get nodes
NAME                      STATUS   ROLES    AGE     VERSION
iz0jlhvtxignmaozy30vffz   Ready    master   11m     v1.19.4
iz0jlhvtxignmaozy30vfgz   Ready    <none>   3m14s   v1.19.4
[root@iZ0jlhvtxignmaozy30vffZ ~]# kubectl get pod -A
NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d56c8448f-n7f9k                          1/1     Running   0          20m
kube-system   coredns-6d56c8448f-tz6m4                          1/1     Running   0          20m
kube-system   etcd-iz0jlhvtxignmaozy30vffz                      1/1     Running   0          20m
kube-system   kube-apiserver-iz0jlhvtxignmaozy30vffz            1/1     Running   0          20m
kube-system   kube-controller-manager-iz0jlhvtxignmaozy30vffz   1/1     Running   0          20m
kube-system   kube-flannel-ds-7j7jn                             1/1     Running   0          9m51s
kube-system   kube-flannel-ds-fdkbv                             1/1     Running   0          9m51s
kube-system   kube-proxy-fdz49                                  1/1     Running   0          12m
kube-system   kube-proxy-kgzcp                                  1/1     Running   0          20m
kube-system   kube-scheduler-iz0jlhvtxignmaozy30vffz            1/1     Running   0          20m
```

13. Worker nodes: configure admin.conf on k8s-node01, k8s-node02 and k8s-node03 so that `kubectl get ...` also works on the workers
```bash
# Copy admin.conf from k8s-master01 to k8s-node01, k8s-node02 and k8s-node03
# 172.24.114.4 is the k8s-node01 node
scp -r /etc/kubernetes/admin.conf root@172.24.114.4:/etc/kubernetes/
# 172.24.114.1 is the k8s-node02 node
scp -r /etc/kubernetes/admin.conf root@172.24.114.1:/etc/kubernetes/
# 172.24.114.2 is the k8s-node03 node
scp -r /etc/kubernetes/admin.conf root@172.24.114.2:/etc/kubernetes/

# On k8s-node01, k8s-node02 and k8s-node03, run:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
```

3. Adding a new worker node

14. Run steps 0-9 above, in order, on the new node
15. Add the new worker node to the cluster
```bash
# On the new worker node
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

kubeadm token list      # get the token if the cluster was created less than 24 hours ago
# or
kubeadm token create    # create a new token if the cluster is more than 24 hours old

# Get the <hash> value:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
```

16. Configure the kubectl client on the new node
4. Installing NFS
Please complete step 13 first.
| Hostname | Public IP | Private IP |
|---|---|---|
| k8s-master01 | 39.104.173.77 | 172.24.114.3 |
| k8s-node01 | 39.104.179.210 | 172.24.114.4 |
| k8s-node02 | 39.104.173.12 | 172.24.114.1 |
| k8s-node03 | 39.104.177.2 | 172.24.114.2 |

4.0 Set up the NFS server (k8s-master01, 172.24.114.3, is used here)
```
server: 172.24.114.3
path: /data/postgresql
```

4.1 Run on the host that provides the NFS storage (k8s-master01 by default)
```bash
yum install -y nfs-utils    # run this command on every node (master and workers)

# Export the shared directory
echo "/data/postgresql *(insecure,rw,sync,no_root_squash)" > /etc/exports

# Create the shared directory (on the master)
mkdir -p /data/postgresql
chmod -R 777 /data/postgresql

# Apply the export configuration
exportfs -r
# Check that it took effect
exportfs

# Start the NFS services
systemctl enable rpcbind && systemctl start rpcbind
systemctl enable nfs && systemctl start nfs
```
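To double-check the export before moving on, you can try mounting it from any other node; a small verification sketch (the /mnt mount point is just an example):

```bash
showmount -e 172.24.114.3                              # the export list should include /data/postgresql
mount -t nfs 172.24.114.3:/data/postgresql /mnt
touch /mnt/nfs-write-test && rm /mnt/nfs-write-test    # confirm the share is writable
umount /mnt
```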
4.2 Run on the worker hosts (k8s-node01, k8s-node02, k8s-node03)

```bash
yum install -y nfs-utils    # run this command on every node (master and workers)
showmount -e 172.24.114.3   # check that the worker can see the NFS export on the master
```

5. Configuring StorageClass storage

```bash
vim postgresql-storage.yaml
```

```yaml
## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs-provisioner   # must match PROVISIONER_NAME under spec.template.spec.containers.env in the Deployment below
parameters:
  archiveOnDelete: "true"      ## whether to keep (archive) the PV contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner
            - name: NFS_SERVER
              value: 172.24.114.3      ## your own NFS server address
            - name: NFS_PATH
              value: /data/postgresql  ## the directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.24.114.3
            path: /data/postgresql
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```

```bash
kubectl apply -f postgresql-storage.yaml
```
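Once the provisioner is running, it is worth confirming that dynamic provisioning actually works before installing PostgreSQL. A quick verification sketch (the PVC name test-nfs-pvc is just an example):

```bash
kubectl get storageclass                         # nfs-storage should be listed (as the default class)
kubectl get pods -l app=nfs-client-provisioner   # the provisioner pod should be Running

# Create a throwaway PVC and check that a PV gets bound automatically
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 100Mi
EOF
kubectl get pvc test-nfs-pvc     # STATUS should become Bound
kubectl delete pvc test-nfs-pvc  # clean up
```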
6. Installing Helm

6.1 Download page: https://github.com/helm/helm/releases

6.2 Download the package helm-v3.6.3-linux-amd64.tar.gz (use either of the two commands below):

```bash
wget https://get.helm.sh/helm-v3.6.3-linux-amd64.tar.gz
# curl -L https://get.helm.sh/helm-v3.6.3-linux-amd64.tar.gz -o helm-v3.6.3-linux-amd64.tar.gz
```

6.3 Unpack the archive
```
[root@k8s-master01 ~]# tar -zxvf helm-v3.6.3-linux-amd64.tar.gz
[root@k8s-master01 ~]# cd linux-amd64/
[root@k8s-master01 linux-amd64]# ls
helm  LICENSE  README.md
[root@server1 linux-amd64]# cp helm /usr/local/bin/
[root@k8s-master01 ~]# helm version
version.BuildInfo{Version:"v3.6.2", GitCommit:"ee407bdf364942bcb8e8c665f82e15aa28009b71", GitTreeState:"clean", GoVersion:"go1.16.5"}
```

7. Installing a highly available PostgreSQL cluster with Helm (postgresql-ha)

7.1 Add third-party chart repositories to Helm:
```
[root@k8s-master01 ~]# helm repo add stable http://mirror.azure.cn/kubernetes/charts/
"stable" already exists with the same configuration, skipping
[root@k8s-master01 ~]# helm repo list
NAME      URL
bitnami   https://charts.bitnami.com/bitnami
dandydev  https://dandydeveloper.github.io/charts
stable    http://mirror.azure.cn/kubernetes/charts/
```

Some actively maintained repositories worth knowing about (no need to run these here):
```bash
[root@k8s-master01 ~]# helm repo add stable http://mirror.azure.cn/kubernetes/charts                                   # Microsoft mirror, very complete, recommended
[root@k8s-master01 ~]# helm repo add bitnami https://charts.bitnami.com/bitnami                                        # has most charts
[root@k8s-master01 ~]# helm repo add harbor https://helm.goharbor.io                                                   # Harbor
[root@k8s-master01 ~]# helm repo add gpu-helm-charts https://nvidia.github.io/gpu-monitoring-tools/helm-charts         # NVIDIA DCGM
[root@k8s-master01 ~]# helm repo add elastic https://helm.elastic.co                                                   # Elastic (Elasticsearch)
[root@k8s-master01 ~]# helm repo add stablecharts https://charts.helm.sh/stable                                        # no longer updated, but still has useful charts
[root@k8s-master01 ~]# helm repo update
```

7.2 With the repositories added, you can search them as follows:
```
[root@k8s-master01 ~]# helm search repo postgresql
NAME                                  CHART VERSION  APP VERSION  DESCRIPTION
bitnami/postgresql                    11.1.20        14.2.0       PostgreSQL (Postgres) is an open source object-...
bitnami/postgresql-ha                 8.6.12         11.15.0      This PostgreSQL cluster solution includes the P...
stable/postgresql                     8.6.4          11.7.0       DEPRECATED Chart for PostgreSQL, an object-rela...
stable/pgadmin                        1.2.2          4.18.0       pgAdmin is a web based administration tool for ...
stable/stolon                         1.6.5          0.16.0       DEPRECATED - Stolon - PostgreSQL cloud native H...
stable/gcloud-sqlproxy                0.6.1          1.11         DEPRECATED Google Cloud SQL Proxy
stable/prometheus-postgres-exporter   1.3.1          0.8.0        DEPRECATED A Helm chart for prometheus postgres...
```

7.3 Pull and modify the postgresql-ha chart
[root@k8s-master01 ~]# mkdir -p /root/postgres [root@k8s-master01 ~]# cd /root/postgres/ [root@k8s-master01 postgres]# helm pull bitnami/postgresql-ha [root@k8s-master01 postgres]# ll total 60 -rw-r--r-- 1 root root 58200 Apr 19 19:15 postgresql-ha-8.6.12.tgz [root@k8s-master01 postgres]# tar -xvf postgresql-ha-8.6.12.tgz postgresql-ha/Chart.yaml postgresql-ha/Chart.lock postgresql-ha/values.yaml postgresql-ha/templates/NOTES.txt postgresql-ha/templates/_helpers.tpl postgresql-ha/templates/extra-list.yaml postgresql-ha/templates/ldap-secrets.yaml postgresql-ha/templates/metrics-configmap.yaml postgresql-ha/templates/networkpolicy-egress.yaml postgresql-ha/templates/networkpolicy-ingress.yaml postgresql-ha/templates/pgpool/configmap.yaml postgresql-ha/templates/pgpool/custom-users-secrets.yaml postgresql-ha/templates/pgpool/deployment.yaml postgresql-ha/templates/pgpool/initdb-scripts-configmap.yaml postgresql-ha/templates/pgpool/pdb.yaml postgresql-ha/templates/pgpool/secrets.yaml postgresql-ha/templates/pgpool/service.yaml postgresql-ha/templates/podsecuritypolicy.yaml postgresql-ha/templates/postgresql/configmap.yaml postgresql-ha/templates/postgresql/extended-configmap.yaml postgresql-ha/templates/postgresql/hooks-scripts-configmap.yaml postgresql-ha/templates/postgresql/initdb-scripts-configmap.yaml postgresql-ha/templates/postgresql/metrics-service.yaml postgresql-ha/templates/postgresql/pdb.yaml postgresql-ha/templates/postgresql/secrets.yaml postgresql-ha/templates/postgresql/service-headless.yaml postgresql-ha/templates/postgresql/service.yaml postgresql-ha/templates/postgresql/servicemonitor.yaml postgresql-ha/templates/postgresql/statefulset.yaml postgresql-ha/templates/role.yaml postgresql-ha/templates/rolebinding.yaml postgresql-ha/templates/serviceaccount.yaml postgresql-ha/templates/tls-secrets.yaml postgresql-ha/.helmignore postgresql-ha/README.md postgresql-ha/ci/ct-values.yaml postgresql-ha/ci/values-production-with-pdb.yaml postgresql-ha/charts/common/Chart.yaml postgresql-ha/charts/common/values.yaml postgresql-ha/charts/common/templates/_affinities.tpl postgresql-ha/charts/common/templates/_capabilities.tpl postgresql-ha/charts/common/templates/_errors.tpl postgresql-ha/charts/common/templates/_images.tpl postgresql-ha/charts/common/templates/_ingress.tpl postgresql-ha/charts/common/templates/_labels.tpl postgresql-ha/charts/common/templates/_names.tpl postgresql-ha/charts/common/templates/_secrets.tpl postgresql-ha/charts/common/templates/_storage.tpl postgresql-ha/charts/common/templates/_tplvalues.tpl postgresql-ha/charts/common/templates/_utils.tpl postgresql-ha/charts/common/templates/_warnings.tpl postgresql-ha/charts/common/templates/validations/_cassandra.tpl postgresql-ha/charts/common/templates/validations/_mariadb.tpl postgresql-ha/charts/common/templates/validations/_mongodb.tpl postgresql-ha/charts/common/templates/validations/_postgresql.tpl postgresql-ha/charts/common/templates/validations/_redis.tpl postgresql-ha/charts/common/templates/validations/_validations.tpl postgresql-ha/charts/common/.helmignore postgresql-ha/charts/common/README.md [root@k8s-master01 postgres]# clear [root@k8s-master01 postgres]# ll total 64 drwxr-xr-x 5 root root 4096 Apr 19 19:15 postgresql-ha -rw-r--r-- 1 root root 58200 Apr 19 19:15 postgresql-ha-8.6.12.tgz [root@k8s-master01 postgres]# cd postgresql-ha [root@k8s-master01 postgresql-ha]# ll total 184 -rw-r--r-- 1 root root 220 Apr 16 19:41 Chart.lock drwxr-xr-x 3 root root 4096 Apr 19 19:15 charts -rw-r--r-- 1 root root 803 
Apr 16 19:41 Chart.yaml drwxr-xr-x 2 root root 4096 Apr 19 19:15 ci -rw-r--r-- 1 root root 104836 Apr 16 19:41 README.md drwxr-xr-x 4 root root 4096 Apr 19 19:15 templates -rw-r--r-- 1 root root 60838 Apr 16 19:41 values.yaml
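Before editing the chart, you can also dump its default values for reference; a small optional sketch:

```bash
# Writes the chart's default configuration to a file you can diff against your edits
helm show values bitnami/postgresql-ha > values-default.yaml
```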
- Edit values.yaml inside the postgresql-ha chart
Set `global.storageClass: "nfs-storage"`; everything else can stay at the defaults or be adjusted as you like.

```
[root@k8s-master01 postgresql-ha]# vi values.yaml
# set global.storageClass: "nfs-storage"; other values can stay at the defaults or be adjusted as needed
[root@k8s-master01 postgresql-ha]# kubectl create namespace postgres
namespace/postgres created
# alternatively: helm install -f values.yaml pgsql bitnami/postgresql-ha --version 8.1.2 -n postgres
[root@k8s-master01 postgresql-ha]# helm install pg . -n postgres
NAME: pg
LAST DEPLOYED: Tue Apr 19 19:17:56 2022
NAMESPACE: postgres
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: postgresql-ha
CHART VERSION: 8.6.12
APP VERSION: 11.15.0
** Please be patient while the chart is being deployed **
PostgreSQL can be accessed through Pgpool via port 5432 on the following DNS name from within your cluster:

    pg-postgresql-ha-pgpool.postgres.svc.cluster.local

Pgpool acts as a load balancer for PostgreSQL and forward read/write connections to the primary node while read-only connections are forwarded to standby nodes.

To get the password for "postgres" run:

    export POSTGRES_PASSWORD=$(kubectl get secret --namespace postgres pg-postgresql-ha-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)

To get the password for "repmgr" run:

    export REPMGR_PASSWORD=$(kubectl get secret --namespace postgres pg-postgresql-ha-postgresql -o jsonpath="{.data.repmgr-password}" | base64 --decode)

To connect to your database run the following command:

    kubectl run pg-postgresql-ha-client --rm --tty -i --restart='Never' --namespace postgres --image docker.io/bitnami/postgresql-repmgr:11.15.0-debian-10-r63 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql -h pg-postgresql-ha-pgpool -p 5432 -U postgres -d postgres

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace postgres svc/pg-postgresql-ha-pgpool 5432:5432 &
    psql -h 127.0.0.1 -p 5432 -U postgres -d postgres
```
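The same storage override can also be passed on the command line instead of editing values.yaml; a sketch (the `--set` key is taken from the value mentioned above, so double-check it against your chart version):

```bash
helm install pg bitnami/postgresql-ha -n postgres \
  --set global.storageClass=nfs-storage
```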
8. Testing the postgresql-ha high-availability PostgreSQL cluster

8.1 List the installed charts

```
[root@k8s-master01 ~]# helm list -A
NAME   NAMESPACE  REVISION  UPDATED                                   STATUS    CHART                 APP VERSION
pg     postgres   1         2022-04-19 19:17:56.077248068 +0800 CST   deployed  postgresql-ha-8.6.12  11.15.0
redis  redis      1         2022-04-18 23:15:45.703701625 +0800 CST   deployed  redis-ha-4.4.6        5.0.6
```

8.2 Check the deployment status
```
[root@k8s-master01 ~]# kubectl get all -n postgres
NAME                                           READY   STATUS    RESTARTS       AGE
pod/pg-postgresql-ha-pgpool-6d6748c5bb-r2d58   1/1     Running   1 (160m ago)   163m
pod/pg-postgresql-ha-postgresql-0              1/1     Running   0              163m
pod/pg-postgresql-ha-postgresql-1              1/1     Running   2 (160m ago)   163m
pod/pg-postgresql-ha-postgresql-2              1/1     Running   2 (160m ago)   163m

NAME                                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/pg-postgresql-ha-pgpool                ClusterIP   10.110.3.54     <none>        5432/TCP   163m
service/pg-postgresql-ha-postgresql            ClusterIP   10.110.159.68   <none>        5432/TCP   163m
service/pg-postgresql-ha-postgresql-headless   ClusterIP   None            <none>        5432/TCP   163m

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pg-postgresql-ha-pgpool   1/1     1            1           163m

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/pg-postgresql-ha-pgpool-6d6748c5bb   1         1         1       163m

NAME                                           READY   AGE
statefulset.apps/pg-postgresql-ha-postgresql   3/3     163m
```

8.3 Get the access information
```
[root@k8s-master01 ~]# helm list -A
NAME   NAMESPACE  REVISION  UPDATED                                   STATUS    CHART                 APP VERSION
pg     postgres   1         2022-04-19 19:17:56.077248068 +0800 CST   deployed  postgresql-ha-8.6.12  11.15.0
redis  redis      1         2022-04-18 23:15:45.703701625 +0800 CST   deployed  redis-ha-4.4.6        5.0.6
[root@k8s-master01 ~]# helm status pg -n postgres
NAME: pg
LAST DEPLOYED: Tue Apr 19 19:17:56 2022
NAMESPACE: postgres
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: postgresql-ha
CHART VERSION: 8.6.12
APP VERSION: 11.15.0
** Please be patient while the chart is being deployed **
PostgreSQL can be accessed through Pgpool via port 5432 on the following DNS name from within your cluster:

    pg-postgresql-ha-pgpool.postgres.svc.cluster.local

Pgpool acts as a load balancer for PostgreSQL and forward read/write connections to the primary node while read-only connections are forwarded to standby nodes.

To get the password for "postgres" run:

    export POSTGRES_PASSWORD=$(kubectl get secret --namespace postgres pg-postgresql-ha-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)

To get the password for "repmgr" run:

    export REPMGR_PASSWORD=$(kubectl get secret --namespace postgres pg-postgresql-ha-postgresql -o jsonpath="{.data.repmgr-password}" | base64 --decode)

To connect to your database run the following command:

    kubectl run pg-postgresql-ha-client --rm --tty -i --restart='Never' --namespace postgres --image docker.io/bitnami/postgresql-repmgr:11.15.0-debian-10-r63 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql -h pg-postgresql-ha-pgpool -p 5432 -U postgres -d postgres

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace postgres svc/pg-postgresql-ha-pgpool 5432:5432 &
    psql -h 127.0.0.1 -p 5432 -U postgres -d postgres
```

8.4 Log in to the PostgreSQL database
a. Retrieve the postgres password
```
[root@k8s-master01 ~]# kubectl get secret --namespace postgres pg-postgresql-ha-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode
uRn2dcveza
```

b. Log in to PostgreSQL
```
[root@k8s-master01 ~]# kubectl exec -it pg-postgresql-ha-pgpool-6d6748c5bb-r2d58 -n postgres -- psql -h 127.0.0.1 -p 5432 -U postgres -d postgres
Password for user postgres:     <-- enter the password retrieved above: uRn2dcveza
psql (10.20, server 11.15)
WARNING: psql major version 10, server major version 11.
         Some psql features might not work.
Type "help" for help.

postgres=#
```

c. Query the current database
```
postgres=# select current_database();
 current_database
------------------
 postgres
(1 row)
```

d. Query the current user (SQL: `select user;` or `select current_user;`)
```
postgres=# select user;
   user
----------
 postgres
(1 row)
```

e. Check the database version
```
postgres=# select version();
                                         version
-----------------------------------------------------------------------------------------
 PostgreSQL 11.15 on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
(1 row)
```

f. List all tables in the database
```
postgres=# select * from pg_tables;
     schemaname     |        tablename        | tableowner | tablespace | hasindexes | hasrules | hastriggers | rowsecurity
--------------------+-------------------------+------------+------------+------------+----------+-------------+-------------
 pg_catalog         | pg_statistic            | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_foreign_table        | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_authid               | postgres   | pg_global  | t          | f        | f           | f
 pg_catalog         | pg_user_mapping         | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_subscription         | postgres   | pg_global  | t          | f        | f           | f
 pg_catalog         | pg_largeobject          | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_type                 | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_attribute            | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_proc                 | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_class                | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_attrdef              | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_constraint           | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_inherits             | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_index                | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_operator             | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_opfamily             | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_opclass              | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_am                   | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_amop                 | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_amproc               | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_language             | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_largeobject_metadata | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_aggregate            | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_statistic_ext        | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_rewrite              | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_trigger              | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_event_trigger        | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_description          | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_cast                 | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_enum                 | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_namespace            | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_conversion           | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_depend               | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_database             | postgres   | pg_global  | t          | f        | f           | f
 pg_catalog         | pg_db_role_setting      | postgres   | pg_global  | t          | f        | f           | f
 pg_catalog         | pg_tablespace           | postgres   | pg_global  | t          | f        | f           | f
 pg_catalog         | pg_pltemplate           | postgres   | pg_global  | t          | f        | f           | f
 pg_catalog         | pg_auth_members         | postgres   | pg_global  | t          | f        | f           | f
 pg_catalog         | pg_shdepend             | postgres   | pg_global  | t          | f        | f           | f
 pg_catalog         | pg_shdescription        | postgres   | pg_global  | t          | f        | f           | f
 pg_catalog         | pg_ts_config            | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_ts_config_map        | postgres   |            | t          | f        | f           | f
 pg_catalog         | pg_ts_dict              | postgres   |            | t          | f        | f           | f
```

g. Find the primary database node
- From a psql session, run `select pg_is_in_recovery();`: a result of `f` means the node is the primary, `t` means it is a standby (recommended check). A loop that runs this check against every pod is sketched after this list.
- As a rough cross-check based on WAL replication, look for the wal processes: `ps -ef | grep "postgres: wal" | grep -v "grep"` (for reference).
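A minimal sketch of that per-pod check, assuming the release name pg, the namespace postgres, and that POSTGRES_PASSWORD has been exported as shown in the chart NOTES above:

```bash
for i in 0 1 2; do
  echo -n "pg-postgresql-ha-postgresql-$i: "
  kubectl exec -n postgres pg-postgresql-ha-postgresql-$i -- \
    env PGPASSWORD="$POSTGRES_PASSWORD" \
    psql -h 127.0.0.1 -U postgres -d postgres -tAc 'select pg_is_in_recovery();'
done
# exactly one pod should print "f" (the primary); the others print "t" (standbys)
```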
```
[root@k8s-node01 ~]# kubectl get svc -n postgres
NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
pg-postgresql-ha-pgpool                ClusterIP   10.110.3.54     <none>        5432/TCP   3h3m
pg-postgresql-ha-postgresql            ClusterIP   10.110.159.68   <none>        5432/TCP   3h3m
pg-postgresql-ha-postgresql-headless   ClusterIP   None            <none>        5432/TCP   3h3m
[root@k8s-node01 ~]# kubectl get po -n postgres -o wide
NAME                                       READY   STATUS    RESTARTS       AGE    IP            NODE         NOMINATED NODE   READINESS GATES
pg-postgresql-ha-pgpool-6d6748c5bb-r2d58   1/1     Running   1 (3h1m ago)   3h3m   10.244.1.12   k8s-node01   <none>           <none>
pg-postgresql-ha-postgresql-0              1/1     Running   0              3h3m   10.244.1.13   k8s-node01   <none>           <none>
pg-postgresql-ha-postgresql-1              1/1     Running   2 (3h1m ago)   3h3m   10.244.3.10   k8s-node03   <none>           <none>
pg-postgresql-ha-postgresql-2              1/1     Running   2 (3h1m ago)   3h3m   10.244.2.7    k8s-node02   <none>           <none>
[root@k8s-node01 ~]# kubectl exec -it pg-postgresql-ha-postgresql-0 -n postgres bash
I have no name!@pg-postgresql-ha-postgresql-0:/$ psql -h 127.0.0.1 -p 5432 -U postgres -d postgres
Password for user postgres:
psql (11.15)
Type "help" for help.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 f
(1 row)

[root@k8s-node01 ~]# kubectl exec -it pg-postgresql-ha-postgresql-1 -n postgres bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
I have no name!@pg-postgresql-ha-postgresql-1:/$ psql -h 127.0.0.1 -p 5432 -U postgres -d postgres
Password for user postgres:
psql (11.15)
Type "help" for help.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)

[root@k8s-node01 ~]# kubectl exec -it pg-postgresql-ha-postgresql-2 -n postgres bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
I have no name!@pg-postgresql-ha-postgresql-2:/$ psql -h 127.0.0.1 -p 5432 -U postgres -d postgres
Password for user postgres:
psql (11.15)
Type "help" for help.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)

[root@k8s-node01 ~]# kubectl exec -it pg-postgresql-ha-pgpool-6d6748c5bb-r2d58 -n postgres bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
I have no name!@pg-postgresql-ha-pgpool-6d6748c5bb-r2d58:/$ psql -h 127.0.0.1 -p 5432 -U postgres -d postgres
Password for user postgres:
psql (10.20, server 11.15)
WARNING: psql major version 10, server major version 11.
         Some psql features might not work.
Type "help" for help.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 f
(1 row)
```

As the output above shows, the highly available PostgreSQL cluster is up and working.
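As an optional extra check of the HA behaviour (not part of the original write-up), you can delete the current primary pod and watch a standby get promoted; a sketch:

```bash
# Delete the current primary pod (pg-postgresql-ha-postgresql-0 in the session above)
kubectl delete pod pg-postgresql-ha-postgresql-0 -n postgres

# Watch the pod get recreated, then re-run the pg_is_in_recovery() loop from section g:
kubectl get pods -n postgres -w
# one of the remaining replicas should now report "f" (promoted to primary)
```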



