

Chapter 5: Deploying the Kubernetes Management Tool Rancher

Table of Contents
  • Chapter 5: Deploying the Kubernetes Management Tool Rancher
  • I. Introduction to Rancher
  • II. Deploying a Highly Available Rancher Platform
    • 1. Deploying Ingress-nginx
      • 1.1 Download the image, tag it, and push it to the local registry
      • 1.2 The mandatory.yaml configuration file
      • 1.3 The service.nodeport.yaml configuration file
      • 1.4 Check the deployment status; output like the following indicates success
    • 2. Deploying Rancher
      • 2.1 Add the Helm chart repository
      • 2.2 Fetch the Rancher chart
      • 2.3 Create a self-signed certificate
      • 2.4 Render the Rancher templates
      • 2.5 Install Rancher
      • 2.6 Log in and set a password
  • Summary


I. Introduction to Rancher

Rancher is a container management platform. It was originally built to support multiple container orchestration engines, but with the rise of Kubernetes the 2.x releases have moved entirely to Kubernetes. Rancher provides a simple, intuitive user interface that makes it easier to manage and maintain Kubernetes clusters. See the official documentation for details: https://docs.rancher.cn/

II. Deploying a Highly Available Rancher Platform

1. Deploying Ingress-nginx

Rancher Server requires an SSL/TLS configuration by default to secure access.
Rancher exposes its UI outside the cluster through an Ingress by default, so an ingress controller must be deployed first; ingress-nginx-controller is used as the example here.

1.1 Download the image, tag it, and push it to the local registry
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.30.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.30.0 192.168.1.90/nginx-ingress/nginx-ingress-controller:0.30.0
docker push 192.168.1.90/nginx-ingress/nginx-ingress-controller:0.30.0
1.2 The mandatory.yaml configuration file
[root@k8s-master-01 Ingress-nginx]# cat mandatory.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  #replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # wait up to five minutes for the drain of connections
      #terminationGracePeriodSeconds: 3000
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
        # kubernetes.io/hostname: k8s-master-01   # change to pin to a specific node
      # The lines below were added to allow running on master nodes
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
        - name: nginx-ingress-controller
          image: 192.168.1.90/nginx-ingress/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container
1.3 The service.nodeport.yaml configuration file
[root@k8s-master-01 Ingress-nginx]# cat service.nodeport.yaml 
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
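With both files in place, they can be applied in one step. A minimal sketch, assuming the filenames match the listings above and kubectl is pointed at the target cluster:

```shell
# Apply the controller manifest, then the NodePort service
kubectl apply -f mandatory.yaml
kubectl apply -f service.nodeport.yaml

# The DaemonSet uses hostNetwork, so one controller pod appears per schedulable node
kubectl get pods -n ingress-nginx -o wide
```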
1.4 Check the deployment status; output like the following indicates success
[root@k8s-master-01 Ingress-nginx]# kubectl get pods -n ingress-nginx
NAME                                   READY   STATUS             RESTARTS   AGE
nginx-ingress-controller-gz56r         1/1     Running            0          14m
nginx-ingress-controller-l5smk         1/1     Running            0          14m
nginx-ingress-controller-lt459         1/1     Running            0          14m
nginx-ingress-controller-m4rb7         1/1     Running            0          14m
nginx-ingress-controller-nflr6         1/1     Running            0          14m
nginx-ingress-controller-qdjrb         1/1     Running            0          14m
nginx-ingress-controller-ql647         1/1     Running            0          14m
nginx-ingress-controller-whrx4         1/1     Running            0          14m
[root@k8s-master-01 Ingress-nginx]# kubectl get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.96.19.184   <none>        80:32134/TCP,443:30339/TCP   11m
2. Deploying Rancher

2.1 Add the Helm chart repository

Installing helm itself is not covered here; it can be downloaded from http://mirror.cnrancher.com/

Add the repository; the stable branch is used here:

latest: recommended for trying new features.
stable: recommended for production environments. (chosen here)
alpha: experimental preview of upcoming releases.

helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update
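Before fetching, it can be worth confirming that the repo was added and that the desired chart version is listed (output depends on the current repo state):

```shell
# Confirm the repo registered and the 2.5.8 chart exists in it
helm repo list | grep rancher-stable
helm search repo rancher-stable/rancher --versions | head -n 5
```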
2.2 Fetch the Rancher chart

Download a specific Rancher version locally, using the --version flag to specify it.

### Fetch the rancher chart
helm fetch rancher-stable/rancher --version=v2.5.8
2.3 Create a self-signed certificate

Rancher Server requires an SSL/TLS configuration by default to secure access.

vim cert.sh
#!/bin/bash -e

help ()
{
    echo  ' ================================================================ '
    echo  ' --ssl-domain: primary domain for the SSL certificate; defaults to localhost if not given, and can be omitted when the server is accessed by IP;'
    echo  ' --ssl-trusted-ip: SSL certificates normally trust only domain names; to access the server by IP, add extension IPs here, comma-separated;'
    echo  ' --ssl-trusted-domain: to allow access via additional domains, add extension domains (SSL_TRUSTED_DOMAIN), comma-separated;'
    echo  ' --ssl-size: SSL key size in bits, default 2048;'
    echo  ' --ssl-date: SSL certificate validity, default 10 years;'
    echo  ' --ca-date: CA validity, default 10 years;'
    echo  ' --ssl-cn: country code (2-letter), default CN;'
    echo  ' Usage example:'
    echo  ' ./create_self-signed-cert.sh --ssl-domain=www.test.com --ssl-trusted-domain=www.test2.com  '
    echo  ' --ssl-trusted-ip=1.1.1.1,2.2.2.2,3.3.3.3 --ssl-size=2048 --ssl-date=3650'
    echo  ' ================================================================'
}

case "$1" in
    -h|--help) help; exit;;
esac

if [[ $1 == '' ]];then
    help;
    exit;
fi

CMDOPTS="$*"
for OPTS in $CMDOPTS;
do
    key=$(echo ${OPTS} | awk -F"=" '{print $1}' )
    value=$(echo ${OPTS} | awk -F"=" '{print $2}' )
    case "$key" in
        --ssl-domain) SSL_DOMAIN=$value ;;
        --ssl-trusted-ip) SSL_TRUSTED_IP=$value ;;
        --ssl-trusted-domain) SSL_TRUSTED_DOMAIN=$value ;;
        --ssl-size) SSL_SIZE=$value ;;
        --ssl-date) SSL_DATE=$value ;;
        --ca-date) CA_DATE=$value ;;
        --ssl-cn) CN=$value ;;
    esac
done

# CA settings
CA_DATE=${CA_DATE:-3650}
CA_KEY=${CA_KEY:-cakey.pem}
CA_CERT=${CA_CERT:-cacerts.pem}
CA_DOMAIN=localhost

# SSL settings
SSL_CONFIG=${SSL_CONFIG:-$PWD/openssl.cnf}
SSL_DOMAIN=${SSL_DOMAIN:-localhost}
SSL_DATE=${SSL_DATE:-3650}
SSL_SIZE=${SSL_SIZE:-2048}

## country code (2-letter), default CN
CN=${CN:-CN}

SSL_KEY=$SSL_DOMAIN.key
SSL_CSR=$SSL_DOMAIN.csr
SSL_CERT=$SSL_DOMAIN.crt

echo -e "\033[32m ---------------------------- \033[0m"
echo -e "\033[32m       | Generating SSL Cert |       \033[0m"
echo -e "\033[32m ---------------------------- \033[0m"

if [[ -e ./${CA_KEY} ]]; then
    echo -e "\033[32m ====> 1. Existing CA key found; backing up ${CA_KEY} as ${CA_KEY}-bak, then recreating \033[0m"
    mv ${CA_KEY} "${CA_KEY}"-bak
    openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
else
    echo -e "\033[32m ====> 1. Generating new CA key ${CA_KEY} \033[0m"
    openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
fi

if [[ -e ./${CA_CERT} ]]; then
    echo -e "\033[32m ====> 2. Existing CA cert found; backing up ${CA_CERT} as ${CA_CERT}-bak, then recreating \033[0m"
    mv ${CA_CERT} "${CA_CERT}"-bak
    openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
else
    echo -e "\033[32m ====> 2. Generating new CA cert ${CA_CERT} \033[0m"
    openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
fi

echo -e "\033[32m ====> 3. Generating openssl config file ${SSL_CONFIG} \033[0m"
cat > ${SSL_CONFIG} <<EOM
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
EOM

if [[ -n ${SSL_TRUSTED_IP} || -n ${SSL_TRUSTED_DOMAIN} ]]; then
    cat >> ${SSL_CONFIG} <<EOM
subjectAltName = @alt_names
[alt_names]
EOM
    IFS=","
    dns=(${SSL_TRUSTED_DOMAIN})
    dns+=(${SSL_DOMAIN})
    for i in "${!dns[@]}"; do
      echo DNS.$((i+1)) = ${dns[$i]} >> ${SSL_CONFIG}
    done

    if [[ -n ${SSL_TRUSTED_IP} ]]; then
        ip=(${SSL_TRUSTED_IP})
        for i in "${!ip[@]}"; do
          echo IP.$((i+1)) = ${ip[$i]} >> ${SSL_CONFIG}
        done
    fi
fi

echo -e "\033[32m ====> 4. Generating server SSL KEY ${SSL_KEY} \033[0m"
openssl genrsa -out ${SSL_KEY} ${SSL_SIZE}

echo -e "\033[32m ====> 5. Generating server SSL CSR ${SSL_CSR} \033[0m"
openssl req -sha256 -new -key ${SSL_KEY} -out ${SSL_CSR} -subj "/C=${CN}/CN=${SSL_DOMAIN}" -config ${SSL_CONFIG}

echo -e "\033[32m ====> 6. Generating server SSL CERT ${SSL_CERT} \033[0m"
openssl x509 -sha256 -req -in ${SSL_CSR} -CA ${CA_CERT} \
    -CAkey ${CA_KEY} -CAcreateserial -out ${SSL_CERT} \
    -days ${SSL_DATE} -extensions v3_req \
    -extfile ${SSL_CONFIG}

echo -e "\033[32m ====> 7. Certificate generation complete \033[0m"
echo
echo -e "\033[32m ====> 8. Printing results in YAML format \033[0m"
echo "----------------------------------------------------------"
echo "ca_key: |"
cat $CA_KEY | sed 's/^/  /'
echo
echo "ca_cert: |"
cat $CA_CERT | sed 's/^/  /'
echo
echo "ssl_key: |"
cat $SSL_KEY | sed 's/^/  /'
echo
echo "ssl_csr: |"
cat $SSL_CSR | sed 's/^/  /'
echo
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/  /'
echo

echo -e "\033[32m ====> 9. Appending CA cert to the cert file \033[0m"
cat ${CA_CERT} >> ${SSL_CERT}
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/  /'
echo

echo -e "\033[32m ====> 10. Renaming the server certificate \033[0m"
echo "cp ${SSL_DOMAIN}.key tls.key"
cp ${SSL_DOMAIN}.key tls.key
echo "cp ${SSL_DOMAIN}.crt tls.crt"
cp ${SSL_DOMAIN}.crt tls.crt
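
The alt_names section the script builds can be illustrated standalone. This sketch uses hypothetical domain and IP values and writes the same DNS.N / IP.N entries the script appends to openssl.cnf:

```shell
#!/bin/bash
# Standalone sketch of the alt_names loops in cert.sh, with hypothetical inputs
SSL_CONFIG=$(mktemp)
SSL_DOMAIN=rancher.example.com       # hypothetical primary domain
SSL_TRUSTED_DOMAIN=www.test2.com     # hypothetical extra trusted domain
SSL_TRUSTED_IP=1.1.1.1,2.2.2.2       # hypothetical trusted IPs

IFS=","                              # split comma-separated values into arrays
dns=(${SSL_TRUSTED_DOMAIN})
dns+=(${SSL_DOMAIN})                 # primary domain is appended last
for i in "${!dns[@]}"; do
  echo "DNS.$((i+1)) = ${dns[$i]}" >> ${SSL_CONFIG}
done
ip=(${SSL_TRUSTED_IP})
for i in "${!ip[@]}"; do
  echo "IP.$((i+1)) = ${ip[$i]}" >> ${SSL_CONFIG}
done
unset IFS

cat ${SSL_CONFIG}
# DNS.1 = www.test2.com
# DNS.2 = rancher.example.com
# IP.1 = 1.1.1.1
# IP.2 = 2.2.2.2
```

Every extra domain or IP ends up as a numbered subjectAltName entry, which is why browsers will trust the certificate for those names as well.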


# Generate the certificate
sudo chmod +x cert.sh
./cert.sh --ssl-domain=rancher.sigs.applysquare.net --ssl-size=2048 --ssl-date=3650

# Create the namespace
kubectl create namespace cattle-system

# Secret holding the server certificate and private key
kubectl -n cattle-system create \
    secret tls tls-rancher-ingress \
    --cert=./tls.crt \
    --key=./tls.key

# Secret holding the CA certificate
kubectl -n cattle-system create secret \
    generic tls-ca \
    --from-file=./cacerts.pem
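
Before installing, it's worth verifying that both secrets landed in the namespace (names as created above):

```shell
# tls-rancher-ingress should be of type kubernetes.io/tls; tls-ca is a generic secret
kubectl -n cattle-system get secret tls-rancher-ingress tls-ca
```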
2.4 Render the Rancher templates

1. Since the certificate is signed by a private CA, add --set privateCA=true after --set ingress.tls.source=secret.
2. --set hostname=k8s-vip sets the URL used to access the platform.
3. Because the Rancher chart (version 2.5.8) was downloaded locally earlier, the local template files are rendered next.

helm template rancher ./rancher-2.5.8.tgz --output-dir . \
    --namespace cattle-system \
    --set hostname=k8s-vip \
    --set ingress.tls.source=secret \
    --set privateCA=true \
    --set useBundledSystemChart=true   # Use the packaged Rancher system charts

2.5 Install Rancher

Deploy and install using the rendered templates:

kubectl -n cattle-system apply -R -f ./rancher
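
After applying, the rollout can be watched until the Rancher replicas are ready; the deployment name and the app=rancher label below are the chart's defaults:

```shell
# Block until the rollout finishes, then list the Rancher pods
kubectl -n cattle-system rollout status deploy/rancher
kubectl -n cattle-system get pods -l app=rancher
```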
2.6 Log in and set a password

Login URL: https://k8s-vip

After setting the password, log back in to the platform.
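
Note that k8s-vip must resolve from the machine running the browser. If there is no DNS entry for it, a hosts-file entry pointing at the cluster VIP works (the IP below is hypothetical):

```shell
# Replace 192.168.1.100 with your actual VIP address
echo "192.168.1.100  k8s-vip" | sudo tee -a /etc/hosts
```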


Summary

This covers deploying a highly available Rancher setup on an existing Kubernetes cluster. Rancher makes Kubernetes clusters easier to manage and use; its pipeline feature is the main draw here, and a future article will cover it in detail.
