- Standalone deployment
  In the past, deploying multiple applications on one server meant that one application hogging resources would starve the others. Resources were not isolated, so a failure in one application could cascade into the rest.
- Virtualization
  Deploying each application inside a virtual machine solves the first problem. Drawback: VMs are bulky and environment setup is tedious. Advantage: resource isolation.
- Containerization
  Docker emerged to address the second problem. You only need to package the application with a Dockerfile and run it as a container on Docker. A container behaves like a small virtual machine, but it is fast and easy to deploy: no extra environment installation is needed, since the Dockerfile simply declares which environment to run in (e.g., a Java application only states which JDK version it runs on, and Docker pulls that environment automatically). Resources are isolated and containers are easier to manage.

Note: although containerization solves resource isolation and makes deployment more convenient, an application made of dozens of microservices is still very hard to manage by hand. What we urgently need is a large-scale container orchestration (scheduling and control) system that can automatically create new containers, delete existing containers, and hand all of their resources over to the new containers. That system is Kubernetes.
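The desired-state idea behind such an orchestration system is easiest to see in a config file. Below is a minimal sketch (the names are hypothetical and nginx is only an example image) of a Kubernetes Deployment that asks the cluster to keep three copies of a container running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical name, for illustration only
spec:
  replicas: 3               # desired state: always keep 3 Pods running
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.21 # the container to orchestrate
```

If one Pod dies, the cluster notices that the actual state (2 replicas) no longer matches the desired state (3) and starts a replacement automatically.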
Kubernetes has the following features:

- Service discovery and load balancing
  Kubernetes can expose a container using a DNS name or its own IP address. If traffic into a container is heavy, Kubernetes can load-balance and distribute the network traffic so that the deployment stays stable.
- Storage orchestration
  Kubernetes lets you automatically mount a storage system of your choice, such as local storage or a public cloud provider (controlling the storage space an application uses).
- Automated rollouts and rollbacks
  You describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources into the new containers. (You can roll back to a previous version, and k8s can control all of the resource-usage metrics.)
- Automatic bin packing
  You tell Kubernetes how much CPU and memory (RAM) each container needs. With resource requests specified, Kubernetes can make better decisions about managing container resources.
- Self-healing
  Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to user-defined health checks, and does not advertise them to clients until they are ready to serve.
- Secret and configuration management (similar to a configuration center)
  Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding container images, and without exposing secrets in your stack configuration.
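Two of the features above, bin packing and secret management, show up directly in a Pod spec. A hedged sketch, assuming a Secret named db-secret with a key password already exists (all names here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.21
      resources:
        requests:               # what the scheduler uses for bin packing
          cpu: "250m"
          memory: "128Mi"
        limits:                 # hard cap for this container
          cpu: "500m"
          memory: "256Mi"
      env:
        - name: DB_PASSWORD     # injected at runtime, no image rebuild needed
          valueFrom:
            secretKeyRef:
              name: db-secret   # hypothetical Secret
              key: password
```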
2. Install Docker

Server provisioning is omitted here; the only requirement is that the 3 servers can ping one another over their private-network IPs.
#1. Remove any old Docker-related packages
sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine

#2. Configure the yum repository
sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

#3. Install Docker
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
3. Start Docker

# enable = start on boot; --now = also start immediately
systemctl enable docker --now

# Configure a registry mirror to speed up image pulls
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://wzu988tq.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
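Before restarting Docker it can be worth confirming the file you just wrote is valid JSON, since a malformed daemon.json stops the daemon from starting. A small sketch (python3 assumed available; it writes to a scratch directory, so it is safe to try anywhere):

```shell
# Write the same mirror config to a scratch copy and check that it parses as JSON
tmpdir=$(mktemp -d)
tee "$tmpdir/daemon.json" > /dev/null <<'EOF'
{
  "registry-mirrors": ["https://wzu988tq.mirror.aliyuncs.com"]
}
EOF
# json.tool exits non-zero (and prints the error) on malformed JSON
python3 -m json.tool "$tmpdir/daemon.json" && echo "daemon.json is valid JSON"
```

The same one-liner works on the real /etc/docker/daemon.json after you edit it.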
4. Install the Kubernetes cluster

Installation notes:

- A compatible Linux host. The Kubernetes project provides generic instructions for Debian-based and Red Hat-based Linux distributions, as well as some distributions that ship no package manager.
- 2 GB or more of RAM per machine (less leaves too little memory for your applications to run in).
- 2 or more CPU cores.
- Full network connectivity between all machines in the cluster (public or private network both work); set firewall rules to allow the required traffic.
- No duplicate hostnames, MAC addresses, or product_uuids among the nodes; set a different hostname on each machine (see the Kubernetes docs for details).
- Certain ports open on the machines (see the Kubernetes docs for details).
- Mutual trust between the machines over the private network.
- Swap disabled, permanently. You MUST disable swap for the kubelet to work properly.
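Most of the checklist above can be verified with a quick script. A hedged sketch (the thresholds simply mirror the list: 2+ cores, roughly 2+ GB RAM, swap off; run it on each machine):

```shell
#!/bin/bash
# Quick preflight check against the kubeadm requirements listed above
cpus=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)

[ "$cpus" -ge 2 ] && echo "CPU: ok ($cpus cores)" || echo "CPU: only $cpus core(s), need 2+"
[ "$mem_kb" -ge 2000000 ] && echo "RAM: ok ($((mem_kb / 1024)) MB)" || echo "RAM: only $((mem_kb / 1024)) MB, need ~2 GB+"
[ "$swap_kb" -eq 0 ] && echo "swap: disabled" || echo "swap: still enabled, disable it"
echo "hostname: $(hostname)   (must be unique per node)"
```

This does not replace the official checks (ports, product_uuid, MAC uniqueness), but it catches the most common mistakes before kubeadm does.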
# Run the matching command on each machine
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
hostnamectl set-hostname k8s-node3
3. Disable the swap partition (virtual memory)

# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

4. Install kubelet, kubeadm, and kubectl

# Configure the download source for kubelet, kubeadm, and kubectl
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet

5. Pull the images every machine needs

# Shell script that pulls the required Docker images automatically
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
  kube-apiserver:v1.20.9
  kube-proxy:v1.20.9
  kube-controller-manager:v1.20.9
  kube-scheduler:v1.20.9
  coredns:1.7.0
  etcd:3.4.13-0
  pause:3.2
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

# Make it executable and run it
chmod +x ./images.sh && ./images.sh

6. Initialize the master node

# On ALL machines, add a domain mapping for the master; change the IP to your own master node's IP
echo "172.31.0.3 cluster-endpoint" >> /etc/hosts

Run the next command only on the master node.
# Master node initialization (master only)
# --apiserver-advertise-address : the master's IP address
# --control-plane-endpoint      : the domain mapped above
# --image-repository            : the Aliyun mirror repository for the images
# --kubernetes-version          : the k8s version
# --service-cidr                : the Service-layer network range
# --pod-network-cidr            : k8s assigns every Pod an IP; this sets the subnet those IPs
#                                 come from. Any machine in the cluster, and any application,
#                                 can reach a Pod through its assigned IP.
# None of the network ranges may overlap.
kubeadm init \
  --apiserver-advertise-address=172.31.0.3 \
  --control-plane-endpoint=cluster-endpoint \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.20.9 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16

# The output on success is shown below; copy it somewhere, the commands in it are needed later.
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  # Copy the k8s admin.conf into your user's kube config directory so the kubectl command works
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  # Command for joining additional master nodes
  kubeadm join cluster-endpoint:6443 --token f9tktf.0ua13cj9f8jn493r \
    --discovery-token-ca-cert-hash sha256:711e2715c4011d5c3bdf09496ab4221513c931621b9de3086345bad310645565 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  # Command for worker nodes to join the master
  kubeadm join cluster-endpoint:6443 --token f9tktf.0ua13cj9f8jn493r \
    --discovery-token-ca-cert-hash sha256:711e2715c4011d5c3bdf09496ab4221513c931621b9de3086345bad310645565
# List all cluster nodes
kubectl get nodes
# Create resources in the cluster from a config file
kubectl apply -f xxx.yaml
# List everything deployed in the cluster; the k8s equivalent of docker ps
kubectl get pods -A
# A running application is called a container in Docker and a Pod in k8s

7. Install the network plugin (run only on the master node)

# Download the calico network plugin
curl https://docs.projectcalico.org/manifests/calico.yaml -O
# The download is a config file; to create the resources, k8s only needs you to apply it
kubectl apply -f calico.yaml

8. Join worker nodes to the master

# This is the final part of the successful master-init output above.
# It requires a token, and the token is valid for 24 hours.
kubeadm join cluster-endpoint:6443 --token j521hh.3vrz4lj88px8qa5o \
  --discovery-token-ca-cert-hash sha256:d412501e0a07b62eff09c876e7be24eaf4badc18cf0377afda67589f251d9d5d
# After the token expires, create a new one on the master
kubeadm token create --print-join-command

9. Deploy the dashboard UI (on the master)

The official Kubernetes dashboard:
https://github.com/kubernetes/dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

Note: if the download fails, use the yaml file below instead.
# Create dashboard.yaml
vi dashboard.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

10. Set the access port

# Edit the service type; this is comparable to Docker's port mapping
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
# In the editor, search with /type: and change the value to: type: NodePort

# Check which port got mapped (mine was 30035)
kubectl get svc -A | grep kubernetes-dashboard

Then open that port in your security group; you can reach the dashboard through that port on any node in the cluster.
11. Create an access account

# Open the dashboard over https with any node IP and the port
https://139.198.106.119:30035
# To create the access account, prepare a yaml file
vi dash.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

# Save and exit, then apply the config
kubectl apply -f dash.yaml

12. Use the new account to get a token

# Fetch the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

# Example output (your token will differ):
eyJhbGciOiJSUzI1NiIsImtpZCI6IllwNV9WV0hDNWNaOFJ6bkpVNllKTzd5ZFR1S1dXRGJxV2RXNDAxaUxUWm8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWoyYjQ1Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2NTBiYzViYS01MjdhLTRhNmYtYjNjZS1kYzgxOTdhYTM3N2EiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.nU6cL8Drm9PgVBcmDrMmlwhfmN7fYCaAOqMIeilK5xMIESD-eUxn6-IeHhaHjv1tKAtECE-tizYGlkFGhQ0Sh7qgcbVBid1jx9Pi-2AhOVpPnezVho4JEiGmHtQ2kmQuVsKciCV3H3D4QKQBxOxKR-AfE7G8ye4ln405tN5dtE1mQT9VoixChIGCQ9Pja1OzOz13jKz8WN-GWKqJI1V_vrpTkXd7d9AU6fOtRNjIF-PqDqHU7jIsU8Cl7iOEdHrzt8KSvjZ5OpuSm3ckuL3WkBB9xdhKWVrBOKlN_IEeDDy48fWciXV5jS4KWtf2_Ne_R9xQrn7_qnw3byAmhDIgLQ

13. Delete a node

# Suppose we need to remove the node k8s-node1. First run these two commands on the master:
kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node1
# kubectl get nodes will now show that k8s-node1 has been removed:
kubectl get nodes
# Finally, on the k8s-node1 machine itself, run the following so the node fully detaches from the cluster:
kubeadm reset
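The token printed above is a JWT: three base64url-encoded segments separated by dots, and the go-template's base64decode is what turned the stored Secret into that string. You can decode a segment locally to see it is ordinary JSON. A small sketch (the segment below is a generic RS256 header used for illustration, not the real token):

```shell
# First segment of a JWT: a base64url-encoded JSON header
HEADER='eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9'
# base64url uses '-' and '_' instead of '+' and '/'; translate before decoding.
# (This sample needs no '=' padding; segments whose length is not a multiple
# of 4 would need padding appended first.)
printf '%s' "$HEADER" | tr -- '-_' '+/' | base64 -d
echo
# prints {"alg":"RS256","typ":"JWT"}
```

The second segment of the real token carries the service-account claims (namespace, account name, and so on) that the dashboard checks when you log in.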



