
k8s Cluster Deployment



Table of Contents
  • Introduction
  • k8s Cluster Installation
    • kubeadm
    • Environment Preparation
    • Install Docker, kubeadm, kubelet, kubectl on All Nodes
    • Deploy k8s-master
    • Install a Pod Network Plugin (CNI)
  • Basic Operations on a Kubernetes Cluster
  • Deploy Tomcat and Expose It for Access
  • Ingress
  • Install the Default Dashboard
  • Install KubeSphere
    • Introduction
    • Installation

Introduction

Kubernetes, k8s for short, is an open-source system for automating the deployment, scaling, and management of containerized applications.

Chinese official site: https://kubernetes.io/zh/
Chinese community: https://www.kubernetes.org.cn/
Official docs: https://kubernetes.io/zh/docs/home/
Community docs: http://docs.kubernetes.org.cn/

KubeSphere Chinese site: https://kubesphere.com.cn/

k8s Cluster Installation

Follow the video step by step; the commands needed are listed below.

kubeadm

kubeadm is a tool released by the official community for quickly deploying a Kubernetes cluster.

Environment Preparation

1. Preparation
Use Vagrant to quickly create three virtual machines.
Before starting the VMs, configure VirtualBox's host-only network.
Set it uniformly to 192.168.56.1; all VMs will then get 56.x IP addresses.

2. Create the three virtual machines
Copy the provided Vagrantfile to a directory whose path contains no Chinese characters or spaces, then run vagrant up to start the three VMs.
The first provisioning is slow; in my test it took about 3 hours. A second run took about 5 minutes.

Vagrant.configure("2") do |config|
   (1..3).each do |i|
        config.vm.define "k8s-node#{i}" do |node|
            # Set the VM's base box
            node.vm.box = "centos/7"

            # Set the VM's hostname
            node.vm.hostname="k8s-node#{i}"

            # Set the VM's IP address
            node.vm.network "private_network", ip: "192.168.56.#{99+i}", netmask: "255.255.255.0"

            # Shared folder between host and VM (optional)
            # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"

            # VirtualBox-specific settings
            node.vm.provider "virtualbox" do |v|
                # VM name
                v.name = "k8s-node#{i}"
                # Memory size (MB)
                v.memory = 2048
                # Number of CPUs
                v.cpus = 4
            end
        end
   end
end
D:\gulimall\k8s>vagrant up
==> vagrant: A new version of Vagrant is available: 2.2.19 (installed version: 2.2.9)!
==> vagrant: To upgrade visit: https://www.vagrantup.com/downloads.html

Bringing machine 'k8s-node1' up with 'virtualbox' provider...
Bringing machine 'k8s-node2' up with 'virtualbox' provider...
Bringing machine 'k8s-node3' up with 'virtualbox' provider...
==> k8s-node1: Importing base box 'centos/7'...
==> k8s-node1: Matching MAC address for NAT networking...
==> k8s-node1: Checking if box 'centos/7' version '2004.01' is up to date...
==> k8s-node1: Setting the name of the VM: k8s-node1
==> k8s-node1: Clearing any previously set network interfaces...
==> k8s-node1: Preparing network interfaces based on configuration...
    k8s-node1: Adapter 1: nat
    k8s-node1: Adapter 2: hostonly
==> k8s-node1: Forwarding ports...
    k8s-node1: 22 (guest) => 2222 (host) (adapter 1)
==> k8s-node1: Running 'pre-boot' VM customizations...
==> k8s-node1: Booting VM...
==> k8s-node1: Waiting for machine to boot. This may take a few minutes...
    k8s-node1: SSH address: 127.0.0.1:2222
    k8s-node1: SSH username: vagrant
    k8s-node1: SSH auth method: private key
    k8s-node1:
    k8s-node1: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-node1: this with a newly generated keypair for better security.
    k8s-node1:
    k8s-node1: Inserting generated public key within guest...
    k8s-node1: Removing insecure key from the guest if it's present...
    k8s-node1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-node1: Machine booted and ready!
==> k8s-node1: Checking for guest additions in VM...
    k8s-node1: No guest additions were detected on the base box for this VM! Guest
    k8s-node1: additions are required for forwarded ports, shared folders, host only
    k8s-node1: networking, and more. If SSH fails on this machine, please install
    k8s-node1: the guest additions and repackage the box to continue.
    k8s-node1:
    k8s-node1: This is not an error message; everything may continue to work properly,
    k8s-node1: in which case you may ignore this message.
==> k8s-node1: Setting hostname...
==> k8s-node1: Configuring and enabling network interfaces...
==> k8s-node1: Rsyncing folder: /cygdrive/d/gulimall/k8s/ => /vagrant
==> k8s-node2: Importing base box 'centos/7'...
==> k8s-node2: Matching MAC address for NAT networking...
==> k8s-node2: Checking if box 'centos/7' version '2004.01' is up to date...
==> k8s-node2: Setting the name of the VM: k8s-node2
==> k8s-node2: Fixed port collision for 22 => 2222. Now on port 2200.
==> k8s-node2: Clearing any previously set network interfaces...
==> k8s-node2: Preparing network interfaces based on configuration...
    k8s-node2: Adapter 1: nat
    k8s-node2: Adapter 2: hostonly
==> k8s-node2: Forwarding ports...
    k8s-node2: 22 (guest) => 2200 (host) (adapter 1)
==> k8s-node2: Running 'pre-boot' VM customizations...
==> k8s-node2: Booting VM...
==> k8s-node2: Waiting for machine to boot. This may take a few minutes...
    k8s-node2: SSH address: 127.0.0.1:2200
    k8s-node2: SSH username: vagrant
    k8s-node2: SSH auth method: private key
    k8s-node2:
    k8s-node2: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-node2: this with a newly generated keypair for better security.
    k8s-node2:
    k8s-node2: Inserting generated public key within guest...
    k8s-node2: Removing insecure key from the guest if it's present...
    k8s-node2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-node2: Machine booted and ready!
==> k8s-node2: Checking for guest additions in VM...
    k8s-node2: No guest additions were detected on the base box for this VM! Guest
    k8s-node2: additions are required for forwarded ports, shared folders, host only
    k8s-node2: networking, and more. If SSH fails on this machine, please install
    k8s-node2: the guest additions and repackage the box to continue.
    k8s-node2:
    k8s-node2: This is not an error message; everything may continue to work properly,
    k8s-node2: in which case you may ignore this message.
==> k8s-node2: Setting hostname...
==> k8s-node2: Configuring and enabling network interfaces...
==> k8s-node2: Rsyncing folder: /cygdrive/d/gulimall/k8s/ => /vagrant
==> k8s-node3: Importing base box 'centos/7'...
==> k8s-node3: Matching MAC address for NAT networking...
==> k8s-node3: Checking if box 'centos/7' version '2004.01' is up to date...
==> k8s-node3: Setting the name of the VM: k8s-node3
==> k8s-node3: Fixed port collision for 22 => 2222. Now on port 2201.
==> k8s-node3: Clearing any previously set network interfaces...
==> k8s-node3: Preparing network interfaces based on configuration...
    k8s-node3: Adapter 1: nat
    k8s-node3: Adapter 2: hostonly
==> k8s-node3: Forwarding ports...
    k8s-node3: 22 (guest) => 2201 (host) (adapter 1)
==> k8s-node3: Running 'pre-boot' VM customizations...
==> k8s-node3: Booting VM...
==> k8s-node3: Waiting for machine to boot. This may take a few minutes...
    k8s-node3: SSH address: 127.0.0.1:2201
    k8s-node3: SSH username: vagrant
    k8s-node3: SSH auth method: private key
    k8s-node3:
    k8s-node3: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-node3: this with a newly generated keypair for better security.
    k8s-node3:
    k8s-node3: Inserting generated public key within guest...
    k8s-node3: Removing insecure key from the guest if it's present...
    k8s-node3: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-node3: Machine booted and ready!
==> k8s-node3: Checking for guest additions in VM...
    k8s-node3: No guest additions were detected on the base box for this VM! Guest
    k8s-node3: additions are required for forwarded ports, shared folders, host only
    k8s-node3: networking, and more. If SSH fails on this machine, please install
    k8s-node3: the guest additions and repackage the box to continue.
    k8s-node3:
    k8s-node3: This is not an error message; everything may continue to work properly,
    k8s-node3: in which case you may ignore this message.
==> k8s-node3: Setting hostname...
==> k8s-node3: Configuring and enabling network interfaces...
==> k8s-node3: Rsyncing folder: /cygdrive/d/gulimall/k8s/ => /vagrant

Hostnames: k8s-node1, k8s-node2, k8s-node3.
IP addresses: 192.168.56.100, 192.168.56.101, 192.168.56.102.
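The address plan above mirrors the `99+i` arithmetic in the Vagrantfile; as a quick sanity check, the same hostname/IP pairs can be reproduced with a small loop:

```shell
# Reproduce the hostname/IP plan from the Vagrantfile's "99+i" rule.
for i in 1 2 3; do
  echo "192.168.56.$((99 + i)) k8s-node$i"
done
# → 192.168.56.100 k8s-node1
#   192.168.56.101 k8s-node2
#   192.168.56.102 k8s-node3
```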

From cmd, enter each of the three VMs and enable password login for root.
Enter a VM: vagrant ssh k8s-node1
Switch to root: su root
root password: vagrant
vi /etc/ssh/sshd_config
Change PasswordAuthentication to yes
Restart the service: service sshd restart
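The vi edit above can also be done non-interactively with sed. A minimal sketch, shown here against a throwaway demo file; on the node you would point sed at /etc/ssh/sshd_config and then run `service sshd restart`:

```shell
# Flip PasswordAuthentication to yes, whether or not the line is commented out.
# Demo copy used here; on the node the target is /etc/ssh/sshd_config.
cfg=/tmp/sshd_config.demo
printf '#PasswordAuthentication no\nPort 22\n' > "$cfg"
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' "$cfg"
grep '^PasswordAuthentication' "$cfg"   # → PasswordAuthentication yes
```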

3. Set up the NAT network (on all three nodes)

4. Set up the Linux environment (run on all three nodes)
Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

Verify:

[root@k8s-node1 ~]# cat /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     disabled - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of disabled.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Disable swap:

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

Verify:

[root@k8s-node1 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Thu Apr 30 22:04:55 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=1c419d6c-5064-4a2b-953c-05b2c67edb15 /                       xfs     defaults        0 0
#/swapfile none swap defaults 0 0
[root@k8s-node1 ~]# 
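In the sed command above, `&` stands for the entire matched line, which is why the swap entry ends up commented out rather than deleted. A tiny demo on a scratch file (path is hypothetical):

```shell
# `s/.*swap.*/#&/` prefixes every line containing "swap" with "#";
# other lines are left untouched.
fstab=/tmp/fstab.demo
printf 'UUID=abc / xfs defaults 0 0\n/swapfile none swap defaults 0 0\n' > "$fstab"
sed -ri 's/.*swap.*/#&/' "$fstab"
grep swap "$fstab"   # → #/swapfile none swap defaults 0 0
```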

Add hostname-to-IP mappings:
vi /etc/hosts
10.0.2.15 k8s-node1
10.0.2.6 k8s-node2
10.0.2.7 k8s-node3

Pass bridged IPv4 traffic to iptables chains:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system

Troubleshooting: if you hit a "read-only file system" error, remount the root filesystem read-write:
mount -o remount,rw /

Install Docker, kubeadm, kubelet, kubectl on All Nodes

Kubernetes' default CRI (container runtime) is Docker, so install Docker first.

1. Install Docker (on all three nodes)
(1) Remove any old Docker packages:

sudo yum remove docker \
	docker-client \
	docker-client-latest \
	docker-common \
	docker-latest \
	docker-latest-logrotate \
	docker-logrotate \
	docker-engine

(2) Install Docker CE:
Install the required dependencies:

sudo yum install -y yum-utils \
	device-mapper-persistent-data \
	lvm2

Point yum at the Docker repo:

sudo yum-config-manager \
	--add-repo \
	https://download.docker.com/linux/centos/docker-ce.repo

Install Docker Engine and the Docker CLI:

sudo yum install -y docker-ce docker-ce-cli containerd.io

(3) Configure a Docker registry mirror:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
	"registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"]
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
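A malformed daemon.json prevents the Docker daemon from starting, so it can be worth validating the JSON before restarting. A sketch, run here against a demo copy rather than the real /etc/docker/daemon.json:

```shell
# Validate the JSON before `systemctl restart docker`.
# Demo copy used here; on the node the target is /etc/docker/daemon.json.
f=/tmp/daemon.json.demo
printf '{\n  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"]\n}\n' > "$f"
python3 -m json.tool "$f" > /dev/null && echo "daemon.json is valid JSON"
```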

(4) Start Docker and enable it at boot:

systemctl enable docker

2. Add the Aliyun yum repo (on all three nodes)

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3. Install kubeadm, kubelet, and kubectl (on all three nodes)

yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
systemctl enable kubelet
systemctl start kubelet

With the base environment ready, this is a good point to back up (snapshot) the three VMs.

Deploy k8s-master

1. Initialize the master node
Upload the k8s materials to the node1 node.
Make the script executable: chmod 700 master_images.sh
Run it to pull the images: ./master_images.sh

Verify:

[root@k8s-node1 k8s]# docker images
REPOSITORY                                                                    TAG       IMAGE ID       CREATED       SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.17.3   ae853e93800d   2 years ago   116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.17.3   90d27391b780   2 years ago   171MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.17.3   b0f1517c1f4b   2 years ago   161MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.17.3   d109c0821a2b   2 years ago   94.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.6.5     70f311871ae1   2 years ago   41.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.3-0   303ce5db0e90   2 years ago   288MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.1       da86e6ba6ca1   4 years ago   742kB
[root@k8s-node1 k8s]# 

Run the following on the chosen master node; note the IP address in the command.

kubeadm init \
--apiserver-advertise-address=10.0.2.15 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.17.3 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16

2. Test kubectl (run on the master node)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install a Pod Network Plugin (CNI)

The kube-flannel.yml file is in the k8s materials.

kubectl apply -f kube-flannel.yml
[root@k8s-node1 k8s]# kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    master   44m   v1.17.3

Right after init the master's status is NotReady; just wait for the network setup to finish. Then join the cluster (run the join command on the other nodes):

kubeadm join 10.0.2.15:6443 --token uva840.53zc7trjqzm8et40 \
    --discovery-token-ca-cert-hash sha256:18c9e8d2dddf9211ab1f97c8394f7d2956275bf3b4edb45da78bfe47e5befe53

If preflight checks fail:
append --ignore-preflight-errors=all to the end of the command:

kubeadm join 10.0.2.15:6443 --token 2ap839.jrc8wcw145t6m0gw \
   --discovery-token-ca-cert-hash sha256:fdcb34d17193b839f73a1c73c8aef48b819fd9f95d86ed6527864dbc3f97a415  --ignore-preflight-errors=all

Check the nodes and the pods in all namespaces:

[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    master   11m   v1.17.3
k8s-node2   Ready    <none>   99s   v1.17.3
k8s-node3   Ready    <none>   95s   v1.17.3

[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f9c544f75-gbfdl            1/1     Running   0          11m
kube-system   coredns-7f9c544f75-h8sxd            1/1     Running   0          11m
kube-system   etcd-k8s-node1                      1/1     Running   0          11m
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          11m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          11m
kube-system   kube-flannel-ds-amd64-cl8vs         1/1     Running   0          11m
kube-system   kube-flannel-ds-amd64-dtrvb         1/1     Running   0          2m17s
kube-system   kube-flannel-ds-amd64-stvhc         1/1     Running   1          2m13s
kube-system   kube-proxy-dsvgl                    1/1     Running   0          11m
kube-system   kube-proxy-lhjqp                    1/1     Running   0          2m17s
kube-system   kube-proxy-plbkb                    1/1     Running   0          2m13s
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          11m

Basic Operations on a Kubernetes Cluster

1. Deploy a Tomcat
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8

Get the Tomcat pod info:

[root@k8s-node1 k8s]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-5f7ccf4cb9-7dx6c   1/1     Running   0          37s   10.244.1.2   k8s-node2   <none>           <none>
[root@k8s-node1 k8s]# 

2. Expose Tomcat for access
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort
The Service's port 80 maps to the container's port 8080; the Service proxies port 80 of the Pods.

Access: http://192.168.56.100:30849/
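The port in that URL is the NodePort that Kubernetes assigned at random from the 30000-32767 range. On a live cluster it can be read back with `kubectl get svc tomcat6 -o jsonpath='{.spec.ports[0].nodePort}'`; as an offline sketch, here is the same value parsed out of a captured `kubectl get svc` line like the ones later in this article:

```shell
# Pull the NodePort out of a captured `kubectl get svc` line.
svc_line='service/tomcat6   NodePort   10.96.210.80   <none>   80:32625/TCP   34s'
nodeport=$(echo "$svc_line" | sed -E 's/.* [0-9]+:([0-9]+)\/TCP.*/\1/')
echo "$nodeport"   # → 32625
```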

3. Dynamic scaling test
kubectl get deployment
Application upgrade: kubectl set image (see --help)
Scale out: kubectl scale --replicas=3 deployment tomcat6
With multiple replicas, the NodePort is open on every node, so tomcat6 is reachable through any node's port.

Deploy Tomcat and Expose It for Access
#####################################
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat6-deployment.yaml
vi tomcat6-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
#####################################
[root@k8s-node1 ~]# kubectl apply -f tomcat6-deployment.yaml 
deployment.apps/tomcat6 created
[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-5f7ccf4cb9-6b4c9   1/1     Running   0          19s
pod/tomcat6-5f7ccf4cb9-sjlzh   1/1     Running   0          19s
pod/tomcat6-5f7ccf4cb9-vjd6t   1/1     Running   0          19s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   129m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   3/3     3            3           19s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-5f7ccf4cb9   3         3         3       19s
[root@k8s-node1 ~]#
#####################################
[root@k8s-node1 ~]# kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort --dry-run -o yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort
status:
  loadBalancer: {}
[root@k8s-node1 ~]# 
#####################################
[root@k8s-node1 ~]# vi tomcat6-deployment.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort
#####################################
[root@k8s-node1 ~]# kubectl delete deployment.apps/tomcat6
deployment.apps "tomcat6" deleted
[root@k8s-node1 ~]# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   133m
[root@k8s-node1 ~]# 
#####################################

[root@k8s-node1 ~]# kubectl apply -f tomcat6-deployment.yaml
deployment.apps/tomcat6 created
service/tomcat6 created
[root@k8s-node1 ~]# 

[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-5f7ccf4cb9-dqd5v   1/1     Running   0          34s
pod/tomcat6-5f7ccf4cb9-jn9wr   1/1     Running   0          34s
pod/tomcat6-5f7ccf4cb9-v9v6h   1/1     Running   0          34s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        141m
service/tomcat6      NodePort    10.96.210.80   <none>        80:32625/TCP   34s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   3/3     3            3           34s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-5f7ccf4cb9   3         3         3       34s
[root@k8s-node1 ~]# 

[root@k8s-node1 k8s]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-5f7ccf4cb9-7dx6c   1/1     Running   0          4m26s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   1/1     1            1           4m26s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-5f7ccf4cb9   1         1         1       4m26s
[root@k8s-node1 k8s]# kubectl delete deployment.apps/tomcat6
deployment.apps "tomcat6" deleted
[root@k8s-node1 k8s]# kubectl get pods -o wide
No resources found in default namespace.

Access URL: http://192.168.56.102:32625/

Ingress
#####################################
[root@k8s-node1 k8s]# kubectl apply -f ingress-controller.yaml 
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
daemonset.apps/nginx-ingress-controller created
service/ingress-nginx created
#####################################
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE       NAME                                READY   STATUS    RESTARTS   AGE
default         tomcat6-5f7ccf4cb9-dqd5v            1/1     Running   0          10m
default         tomcat6-5f7ccf4cb9-jn9wr            1/1     Running   0          10m
default         tomcat6-5f7ccf4cb9-v9v6h            1/1     Running   0          10m
ingress-nginx   nginx-ingress-controller-jpd4h      1/1     Running   0          2m11s
ingress-nginx   nginx-ingress-controller-tgvmg      1/1     Running   0          2m11s
kube-system     coredns-7f9c544f75-gsk9k            1/1     Running   0          150m
kube-system     coredns-7f9c544f75-lw6xd            1/1     Running   0          150m
kube-system     etcd-k8s-node1                      1/1     Running   0          150m
kube-system     kube-apiserver-k8s-node1            1/1     Running   0          150m
kube-system     kube-controller-manager-k8s-node1   1/1     Running   0          150m
kube-system     kube-flannel-ds-amd64-9jx56         1/1     Running   1          132m
kube-system     kube-flannel-ds-amd64-fgq9x         1/1     Running   1          132m
kube-system     kube-flannel-ds-amd64-w7zwd         1/1     Running   0          141m
kube-system     kube-proxy-g95bd                    1/1     Running   0          150m
kube-system     kube-proxy-w627h                    1/1     Running   1          132m
kube-system     kube-proxy-xcssd                    1/1     Running   0          132m
kube-system     kube-scheduler-k8s-node1            1/1     Running   0          150m
[root@k8s-node1 k8s]# 
#####################################
[root@k8s-node1 k8s]# vi ingress-tomcat6.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec: 
  rules:
  - host: tomcat6.atguigu.com
    http:
      paths:
        - backend: 
           serviceName: tomcat6
           servicePort: 80
[root@k8s-node1 k8s]# kubectl apply -f ingress-tomcat6.yaml 
error: error parsing ingress-tomcat6.yaml: error converting YAML to JSON: yaml: line 11: found character that cannot start any token
This error occurs because YAML does not support tab characters; indentation must use spaces.
Also make sure every colon is followed by a space.
Here the problem turned out to be a tab in front of the serviceName line; replacing it with spaces fixed it.
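A quick way to locate such tabs before applying a manifest is to grep for them with line numbers. A small sketch on a scratch file (the file path here is just for the demo):

```shell
# Find tab characters (with line numbers) in a YAML file, then fix them.
f=/tmp/ingress-demo.yaml
printf 'spec:\n\tserviceName: tomcat6\n' > "$f"
grep -Pn "\t" "$f"        # prints line 2, the one indented with a tab
sed -i 's/\t/  /g' "$f"   # replace each tab with two spaces, in place
```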

[root@k8s-node1 k8s]# kubectl apply -f ingress-tomcat6.yaml 
ingress.extensions/web created
#####################################
[root@k8s-node1 k8s]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-5f7ccf4cb9-dqd5v   1/1     Running   0          40m   10.244.2.8   k8s-node2   <none>           <none>
tomcat6-5f7ccf4cb9-jn9wr   1/1     Running   0          40m   10.244.1.7   k8s-node3   <none>           <none>
tomcat6-5f7ccf4cb9-v9v6h   1/1     Running   0          40m   10.244.2.7   k8s-node2   <none>           <none>
[root@k8s-node1 k8s]# 
#####################################
Configure the Windows hosts file; the IP is node2's or node3's:
192.168.56.101 tomcat6.atguigu.com

Finally, access it directly by domain name: tomcat6.atguigu.com

Install the Default Dashboard

Install KubeSphere

Introduction

KubeSphere is an open-source project designed for cloud native: a distributed, multi-tenant container management platform built on top of Kubernetes, today's mainstream container orchestration platform. It provides a simple, friendly UI and wizard-style workflows, lowering the learning curve of container platforms while greatly reducing the day-to-day complexity of development, testing, and operations.

The default dashboard is of limited use; with KubeSphere we can cover the entire DevOps pipeline.
KubeSphere integrates many components, so its cluster requirements are relatively high.
Chinese docs: https://kubesphere.com.cn/docs/
https://kubesphere.io/

Kuboard is also quite good, with lower cluster requirements:
https://kuboard.cn/support/

Installation

1. Prerequisites
https://v2-1.docs.kubesphere.io/docs/zh-CN/installation/prerequisites/
helm downloads: https://github.com/helm/helm/releases?page=6
2. Prerequisite environment
docker + k8s + kubesphere: installing helm and tiller

(1) Install helm (run on the master node)
Helm is the package manager for Kubernetes. A package manager, like apt on Ubuntu, yum on CentOS, or pip for Python, lets you quickly find, download, and install packages. Helm consists of the helm client and the Tiller server component; it packages a group of K8s resources for unified management and is the best way to find, share, and use software built for Kubernetes.

Download it (with wget directly, or download locally and upload to the node):
https://get.helm.sh/helm-v2.16.9-linux-amd64.tar.gz
tar -zxvf helm-v2.16.9-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin

[root@node151 bin]# helm version
Client: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
Error: could not find tiller

Create the RBAC permissions (run on the master)
Create a file helm-rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Run:
kubectl apply -f helm-rbac.yaml
Result:
[root@node151 ~]# kubectl apply -f helm-rbac.yaml
serviceaccount/tiller unchanged
clusterrolebinding.rbac.authorization.k8s.io/tiller unchanged

(2) Install Tiller (run on the master)
The verification commands further below also help diagnose initialization problems.

Run the initialization:
helm init \
  --upgrade \
  --service-account=tiller \
  --tiller-image=registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.9 \
  --history-max 300

Use watch to monitor the service status:
watch kubectl get pod -n kube-system -o wide

Run:
helm version
Result:
[root@node151 ~]# helm version
Client: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
Error: could not find tiller

Run:
kubectl get sa -n kube-system |grep tiller
Result:
[root@node151 ~]#  kubectl get sa -n kube-system |grep tiller
tiller                               1         11m

Run: kubectl get clusterrolebindings.rbac.authorization.k8s.io  -n kube-system |grep tiller
Result:
[root@node151 ~]#  kubectl get clusterrolebindings.rbac.authorization.k8s.io  -n kube-system |grep tiller
tiller 

Run:
kubectl get all --all-namespaces | grep tiller
Result:
[root@node151 ~]# kubectl get all --all-namespaces | grep tiller
kube-system   pod/tiller-deploy-797955c678-nl5nv             1/1     Running   0          43m
kube-system   service/tiller-deploy   ClusterIP   10.96.197.118   <none>        44134/TCP                43m
kube-system   deployment.apps/tiller-deploy             1/1     1            1           43m
kube-system   replicaset.apps/tiller-deploy-797955c678             1         1         1       43m

Run:
kubectl -n kube-system get pods|grep tiller
Result (verifies that tiller was installed successfully):
[root@node151 ~]# kubectl -n kube-system get pods|grep tiller
tiller-deploy-797955c678-nl5nv             1/1     Running   0          50m

Monitor with watch:
Run: watch kubectl get pod -n kube-system -o wide
Result:
NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE      NOMINATED NODE   READINESS GATES
calico-kube-controllers-589b5f594b-ckfgz   1/1     Running   1          7h22m   10.20.235.4     node153   <none>           <none>
calico-node-msd6f                          1/1     Running   1          7h22m   192.168.5.152   node152   <none>           <none>
calico-node-s9xf6                          1/1     Running   2          7h22m   192.168.5.151   node151   <none>           <none>
calico-node-wcztl                          1/1     Running   1          7h22m   192.168.5.153   node153   <none>           <none>
coredns-7f9c544f75-gmclr                   1/1     Running   2          9h      10.20.223.67    node151   <none>           <none>
coredns-7f9c544f75-t7jh6                   1/1     Running   2          9h      10.20.235.3     node153   <none>           <none>
etcd-node151                               1/1     Running   4          9h      192.168.5.151   node151   <none>           <none>
kube-apiserver-node151                     1/1     Running   6          9h      192.168.5.151   node151   <none>           <none>
kube-controller-manager-node151            1/1     Running   5          9h      192.168.5.151   node151   <none>           <none>
kube-proxy-5t7jg                           1/1     Running   3          9h      192.168.5.151   node151   <none>           <none>
kube-proxy-fqjh2                           1/1     Running   2          8h      192.168.5.152   node152   <none>           <none>
kube-proxy-mbxtx                           1/1     Running   2          8h      192.168.5.153   node153   <none>           <none>
kube-scheduler-node151                     1/1     Running   5          9h      192.168.5.151   node151   <none>           <none>
tiller-deploy-797955c678-nl5nv             1/1     Running   0          40m     10.20.117.197   node152   <none>           <none>

You can see that the tiller service is up.

Run:
helm version
Result:
[root@node151 ~]# helm version
Client: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}

Command reference:

helm version
kubectl get pods -n kube-system
kubectl get pod --all-namespaces
kubectl get all --all-namespaces | grep tiller 
kubectl get all -n kube-system -l app=helm -o name|xargs kubectl delete -n kube-system
watch kubectl get pod -n kube-system -o wide

3. Install OpenEBS (run on the master)
Install OpenEBS to provide the LocalPV storage class.

kubectl get node -o wide

Check whether the master node has a Taint; as shown below, it does:
kubectl describe node k8s-node1 | grep Taint

Remove the Taint from the master node:
kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-

Now install OpenEBS:
kubectl create ns openebs

helm install --namespace openebs --name openebs stable/openebs --version 1.5.0

[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f9c544f75-gbfdl                       1/1     Running   0          132m
kube-system   coredns-7f9c544f75-h8sxd                       1/1     Running   0          132m
kube-system   etcd-k8s-node1                                 1/1     Running   0          132m
kube-system   kube-apiserver-k8s-node1                       1/1     Running   0          132m
kube-system   kube-controller-manager-k8s-node1              1/1     Running   0          132m
kube-system   kube-flannel-ds-amd64-cl8vs                    1/1     Running   0          132m
kube-system   kube-flannel-ds-amd64-dtrvb                    1/1     Running   0          123m
kube-system   kube-flannel-ds-amd64-stvhc                    1/1     Running   2          123m
kube-system   kube-proxy-dsvgl                               1/1     Running   0          132m
kube-system   kube-proxy-lhjqp                               1/1     Running   0          123m
kube-system   kube-proxy-plbkb                               1/1     Running   0          123m
kube-system   kube-scheduler-k8s-node1                       1/1     Running   0          132m
kube-system   tiller-deploy-6588db4955-68f64                 1/1     Running   0          78m
openebs       openebs-admission-server-5cf6864fbf-j6wqd      1/1     Running   0          60m
openebs       openebs-apiserver-bc55cd99b-gc95c              1/1     Running   0          60m
openebs       openebs-localpv-provisioner-85ff89dd44-wzcvc   1/1     Running   0          60m
openebs       openebs-ndm-6qcqk                              1/1     Running   0          60m
openebs       openebs-ndm-fl54s                              1/1     Running   0          60m
openebs       openebs-ndm-g5jdq                              1/1     Running   0          60m
openebs       openebs-ndm-operator-87df44d9-h9cpj            1/1     Running   1          60m
openebs       openebs-provisioner-7f86c6bb64-tp4js           1/1     Running   0          60m
openebs       openebs-snapshot-operator-54b9c886bf-x7gn4     2/2     Running   0          60m
[root@k8s-node1 k8s]# 

[root@k8s-node1 k8s]# kubectl get sc
NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  60m
openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  60m
openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  60m
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  60m
[root@k8s-node1 k8s]# 
#####################################

[root@k8s-node1 k8s]# kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/openebs-hostpath patched

[root@k8s-node1 k8s]# kubectl get sc
NAME                         PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device               openebs.io/local                                           Delete          WaitForFirstConsumer   false                  69m
openebs-hostpath (default)   openebs.io/local                                           Delete          WaitForFirstConsumer   false                  69m
openebs-jiva-default         openebs.io/provisioner-iscsi                               Delete          Immediate              false                  69m
openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  69m
[root@k8s-node1 k8s]# 
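The patch above simply attaches an annotation to the StorageClass. For reference, this is a sketch of what the patched object looks like in declarative form (fields reproduced from the `kubectl get sc` output above; illustrative only, not a file you need to apply):

```yaml
# Illustrative only: the shape of openebs-hostpath after the patch.
# The storageclass.kubernetes.io/is-default-class annotation is what
# marks it as the cluster-wide default StorageClass.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

PVCs that do not name a `storageClassName` will now be provisioned by this class.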
#####################################
At this point, OpenEBS LocalPV has been created successfully as the default storage class. Since we manually removed the taint from the master node at the beginning of this document, we can re-add the master taint after installing OpenEBS, to keep business workloads from being scheduled onto the master node and competing for its resources.
[root@k8s-node1 k8s]# kubectl taint nodes k8s-node1 node-role.kubernetes.io=master:NoSchedule
node/k8s-node1 tainted
[root@k8s-node1 k8s]# 
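For reference, the taint can be removed again later if needed (for example, in a single-node test setup where workloads must run on the master). A command-fragment sketch against a live cluster; note the trailing `-`, which is kubectl's taint-removal syntax:

```shell
# Remove the taint added above (the trailing "-" means "delete this taint").
kubectl taint nodes k8s-node1 node-role.kubernetes.io=master:NoSchedule-

# Verify the node's current taints.
kubectl describe node k8s-node1 | grep -i taint
```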
#####################################

3、Minimal installation of KubeSphere
Documentation for installing KubeSphere on an existing Kubernetes cluster:
https://v2-1.docs.kubesphere.io/docs/zh-CN/installation/install-on-k8s/

Source of the YAML configuration used below:
https://gitee.com/learning1126/ks-installer/blob/master/kubesphere-minimal.yaml#
vi kubesphere-minimal.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

---
apiVersion: v1
data:
  ks-config.yaml: |
    ---

    persistence:
      storageClass: ""

    etcd:
      monitoring: False
      endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9
      port: 2379
      tlsEnable: True

    common:
      mysqlVolumeSize: 20Gi
      minioVolumeSize: 20Gi
      etcdVolumeSize: 20Gi
      openldapVolumeSize: 2Gi
      redisVolumSize: 2Gi

    metrics_server:
      enabled: False

    console:
      enableMultiLogin: False  # enable/disable multi login
      port: 30880

    monitoring:
      prometheusReplicas: 1
      prometheusMemoryRequest: 400Mi
      prometheusVolumeSize: 20Gi
      grafana:
        enabled: False

    logging:
      enabled: False
      elasticsearchMasterReplicas: 1
      elasticsearchDataReplicas: 1
      logsidecarReplicas: 2
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      containersLogMountedPath: ""
      kibana:
        enabled: False

    openpitrix:
      enabled: False

    devops:
      enabled: False
      jenkinsMemoryLim: 2Gi
      jenkinsMemoryReq: 1500Mi
      jenkinsVolumeSize: 8Gi
      jenkinsJavaOpts_Xms: 512m
      jenkinsJavaOpts_Xmx: 512m
      jenkinsJavaOpts_MaxRAM: 2g
      sonarqube:
        enabled: False
        postgresqlVolumeSize: 8Gi

    servicemesh:
      enabled: False

    notification:
      enabled: False

    alerting:
      enabled: False

kind: ConfigMap
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubesphere/ks-installer:v2.1.1
        imagePullPolicy: "Always"
Prerequisites for installing minimal KubeSphere on Kubernetes (from the ks-installer docs):
- Turn off the firewall, or configure firewall rules according to the port documentation: https://v3-0.docs.kubesphere.io/docs/installing-on-linux/introduction/port-firewall/
- Disable SELinux
- Disable swap
- Check that the cluster has enough free resources, e.g.:

[root kubesphere-master-2 ~/ks-installer/scripts]# free -g
              total        used        free      shared  buff/cache   available
Mem:             31           1          24           0           5          28
Swap:             0           0           0

- The Kubernetes version must be 1.15.x, 1.16.x, 1.17.x, or 1.18.x:


[root@k8s-node1 k8s]# kubectl version 
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-node1 k8s]# 

[root@k8s-node1 k8s]# kubectl get sc
NAME                         PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device               openebs.io/local                                           Delete          WaitForFirstConsumer   false                  104m
openebs-hostpath (default)   openebs.io/local                                           Delete          WaitForFirstConsumer   false                  104m
openebs-jiva-default         openebs.io/provisioner-iscsi                               Delete          Immediate              false                  104m
openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  104m
[root@k8s-node1 k8s]# 

If your Kubernetes cluster meets all of the prerequisites above, you can deploy KubeSphere on it.
###################################
Install KubeSphere by applying the minimal configuration:
[root@k8s-node1 k8s]# kubectl apply -f kubesphere-minimal.yaml 
namespace/kubesphere-system unchanged
configmap/ks-installer created
serviceaccount/ks-installer unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer configured
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer configured

[root@k8s-node1 k8s]# kubectl get pod --all-namespaces
NAMESPACE           NAME                                           READY   STATUS    RESTARTS   AGE
kube-system         coredns-7f9c544f75-gbfdl                       1/1     Running   0          156m
kube-system         coredns-7f9c544f75-h8sxd                       1/1     Running   0          156m
kube-system         etcd-k8s-node1                                 1/1     Running   0          155m
kube-system         kube-apiserver-k8s-node1                       1/1     Running   0          155m
kube-system         kube-controller-manager-k8s-node1              1/1     Running   0          155m
kube-system         kube-flannel-ds-amd64-cl8vs                    1/1     Running   0          155m
kube-system         kube-flannel-ds-amd64-dtrvb                    1/1     Running   0          146m
kube-system         kube-flannel-ds-amd64-stvhc                    1/1     Running   2          146m
kube-system         kube-proxy-dsvgl                               1/1     Running   0          156m
kube-system         kube-proxy-lhjqp                               1/1     Running   0          146m
kube-system         kube-proxy-plbkb                               1/1     Running   0          146m
kube-system         kube-scheduler-k8s-node1                       1/1     Running   0          155m
kube-system         tiller-deploy-6588db4955-68f64                 1/1     Running   0          101m
kubesphere-system   ks-installer-75b8d89dff-57jvq                  1/1     Running   0          58s
openebs             openebs-admission-server-5cf6864fbf-j6wqd      1/1     Running   0          84m
openebs             openebs-apiserver-bc55cd99b-gc95c              1/1     Running   2          84m
openebs             openebs-localpv-provisioner-85ff89dd44-wzcvc   1/1     Running   2          84m
openebs             openebs-ndm-6qcqk                              1/1     Running   0          84m
openebs             openebs-ndm-fl54s                              1/1     Running   0          84m
openebs             openebs-ndm-g5jdq                              1/1     Running   0          84m
openebs             openebs-ndm-operator-87df44d9-h9cpj            1/1     Running   1          84m
openebs             openebs-provisioner-7f86c6bb64-tp4js           1/1     Running   1          84m
openebs             openebs-snapshot-operator-54b9c886bf-x7gn4     2/2     Running   1          84m
[root@k8s-node1 k8s]# 

Check the installation logs. This step is slow; in my test it took roughly five minutes.
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Console address: http://192.168.56.100:30880
Account: admin
Password: P@88w0rd
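The console can take a while to come up even after the installer log reports success. A small polling sketch, instead of refreshing the browser by hand (`wait_for_url` is a hypothetical helper, not part of KubeSphere; the URL is the NodePort address above):

```shell
# Poll a URL until it answers with a successful status, or give up.
wait_for_url() {
  url=$1; tries=${2:-30}
  for i in $(seq "$tries"); do
    # -s silent, -f fail on HTTP errors, discard the body.
    if curl -sf -o /dev/null "$url"; then
      echo "up after $i attempts"
      return 0
    fi
    sleep 10
  done
  echo "timed out"
  return 1
}

# Example: wait_for_url http://192.168.56.100:30880
```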

When reprinting, please credit the source: www.mshxw.com
Original article: https://www.mshxw.com/it/841686.html