Prepare at least three virtual machines. The VMs must be able to ping one another and reach the external network.
Use VMware to install three virtual machines: Master, Node-1, and Node-2.
Verify that the three VMs can ping one another and can reach the external network.
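A quick way to confirm this requirement is a small reachability loop. A minimal sketch; the three addresses are the example IPs used later in this guide, so substitute your own VMs':

```shell
# Sketch: ping each VM once with a 1-second timeout and report status.
# 192.168.25.x are this guide's example addresses -- replace with your own.
check_hosts() {
  for h in "$@"; do
    if ping -c 1 -W 1 "$h" >/dev/null 2>&1; then
      echo "$h reachable"
    else
      echo "$h UNREACHABLE"
    fi
  done
}
check_hosts 192.168.25.138 192.168.25.136 192.168.25.137
```

Run it on each of the three VMs; every peer should report `reachable` before you continue.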
Disable the firewall on all three VMs to avoid unnecessary trouble (not recommended in production):

```shell
systemctl stop firewalld
systemctl disable firewalld
```
Disable SELinux:

```shell
sed -i 's/enforcing/disabled/' /etc/selinux/config
```

Check the SELinux configuration:

```shell
cat /etc/selinux/config
```

The default is:
After disabling:
Disable swap:

```shell
swapoff -a                           # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
free -g                              # verify: swap must be 0
```
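kubelet refuses to start while swap is enabled, so it is worth a programmatic check as well. A small sketch that reads the value straight from the kernel:

```shell
# After `swapoff -a`, SwapTotal in /proc/meminfo should read 0 kB.
swap_total_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
echo "SwapTotal: ${swap_total_kb} kB"
```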
Change the hostname:

```shell
hostnamectl set-hostname <new-hostname>
```
Add hostname-to-IP mappings:

```shell
vi /etc/hosts
192.168.25.138 master
192.168.25.136 k8s-node1
192.168.25.137 k8s-node2
```

Note: these must be your own VMs' IP addresses, each matched to the corresponding hostname. You can check a VM's IP with `ip addr`.
Save and exit when done.
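If you script this step, make it idempotent so re-running it does not duplicate entries. A minimal sketch (the IPs are this guide's example addresses; the `HOSTS_FILE` override is only there for testing):

```shell
# Append an entry to the hosts file only when the name is not mapped yet.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"
add_host() {
  ip="$1"; name="$2"
  grep -q "[[:space:]]$name\$" "$HOSTS_FILE" 2>/dev/null \
    || echo "$ip $name" >> "$HOSTS_FILE" \
    || echo "cannot write $HOSTS_FILE" >&2
}
add_host 192.168.25.138 master
add_host 192.168.25.136 k8s-node1
add_host 192.168.25.137 k8s-node2
```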
Pass bridged IPv4 traffic to iptables chains:

```shell
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```
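A quick self-check that the fragment landed where `sysctl --system` will pick it up (the `K8S_CONF` override is only an assumption for testing; the real path is `/etc/sysctl.d/k8s.conf`):

```shell
# Count the bridge-nf-call keys in the fragment; expect 2.
K8S_CONF="${K8S_CONF:-/etc/sysctl.d/k8s.conf}"
if [ -r "$K8S_CONF" ]; then
  grep -c 'bridge-nf-call' "$K8S_CONF"
else
  echo "missing: $K8S_CONF"
fi
```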
Troubleshooting

If you get a "read-only file system" error, run:

```shell
mount -o remount,rw /
```

Check the system time (optional):

```shell
date
```

Sync the time:

```shell
yum install -y ntpdate
ntpdate time.windows.com
```

2. Install Docker
Remove old Docker versions:

```shell
yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-engine
```
Install the required package:

```shell
yum install -y yum-utils
```
Set up the package repository:

```shell
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo   # official (overseas)

yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo   # Aliyun mirror
```
Update the yum package index:

```shell
yum makecache fast
```

Install Docker (docker-ce is the community edition; docker-ee is the enterprise edition):

```shell
yum install docker-ce docker-ce-cli containerd.io
```

Start Docker:

```shell
systemctl start docker
```

Check the Docker version:

```shell
docker version
```
Configure the Aliyun registry mirror:

```shell
# Step 1
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://po1qcvly.mirror.aliyuncs.com"]
}
EOF
# Step 2
sudo systemctl daemon-reload
# Step 3
sudo systemctl restart docker
```

Enable Docker on boot:

```shell
systemctl enable docker
```

3. Install kubeadm, kubelet, and kubectl
```shell
yum list | grep kube
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
```
If the packages cannot be found, add the repository with the following command:

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Enable kubelet on boot and start it:

```shell
systemctl enable kubelet
systemctl start kubelet
```

4. Deploy the k8s master
Because of network or other issues, this step can run slowly, so first run a script that pre-pulls the required images.

The script:
```shell
#!/bin/bash
images=(
  kube-apiserver:v1.17.3
  kube-proxy:v1.17.3
  kube-controller-manager:v1.17.3
  kube-scheduler:v1.17.3
  coredns:1.6.5
  etcd:3.4.3-0
  pause:3.1
)
for imageName in "${images[@]}" ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  # docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
```
Put this script in a file, make it executable, and run it to download the images.
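Before running it against Docker, you can dry-run the loop to see exactly which image names will be requested from the mirror (no Docker needed):

```shell
# Print the fully qualified names the pull loop above will request.
images="kube-apiserver:v1.17.3 kube-proxy:v1.17.3 kube-controller-manager:v1.17.3 kube-scheduler:v1.17.3 coredns:1.6.5 etcd:3.4.3-0 pause:3.1"
registry=registry.cn-hangzhou.aliyuncs.com/google_containers
for imageName in $images; do
  echo "$registry/$imageName"
done
```

Afterwards, `docker images | grep google_containers` should list all seven images.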
Initialize the master node (only the master; do not run this on the two worker nodes):

```shell
kubeadm init \
  --apiserver-advertise-address=192.168.25.138 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.3 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
```

Note: `--apiserver-advertise-address` must be your master VM's IP address.
When the command completes, it reports that initialization succeeded.
After successful initialization, run:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Then create a YAML file (kube-flannel.yml) with the following content:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-arm64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-arm64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-arm
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-arm
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-ppc64le
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-ppc64le
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-s390x
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-s390x
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
Upload this file to the master VM, then apply it:

```shell
kubectl apply -f kube-flannel.yml
```
Then, on each of the two worker nodes, run the join command printed by `kubeadm init`. It looks like this:

```shell
kubeadm join 192.168.25.138:6443 --token 71d54b.3dykor5qxnset0lv \
    --discovery-token-ca-cert-hash sha256:9b3a5080852e1d9cc33a3b46b05086fe0c6d88cdceb96aef8218251440a0a489
```
If the token has expired, regenerate the join command with either of these:

```shell
kubeadm token create --print-join-command
kubeadm token create --ttl 0 --print-join-command   # token that never expires
```
Once this completes, the worker nodes have joined the master's network. On the master, run `kubectl get nodes` to check node status.
Watch the pods' progress:

```shell
watch kubectl get pod -n kube-system -o wide
```
When all pods are Running, the deployment succeeded.
Other useful commands:

```shell
kubectl get pods -n kube-system    # pods in a specific namespace
kubectl get pods --all-namespaces  # pods in all namespaces
```