NFS (Network File System) lets a PC mount directories shared by an NFS server over the network into its local file system; from the local system's point of view, the remote host's directory behaves just like a local disk partition.
Kubernetes can consume NFS shared storage in two ways:
1. Statically: manually create the required PV and PVC.
2. Dynamically: create a PVC and have the matching PV provisioned automatically, with no manual PV creation.
The cluster master node 192.168.5.11 acts as the NFS server. For testing purposes, the NFS server can be deployed with Docker:
docker run -d --name nfs-server \
  --privileged \
  --restart always \
  -p 2049:2049 \
  -v /nfs-share:/nfs-share \
  -e SHARED_DIRECTORY=/nfs-share \
  itsthenetwork/nfs-server-alpine:latest
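A quick way to sanity-check the containerized server from another host (a hedged example: this image serves NFSv4, whose documented usage is to mount the export at the root path; /mnt/nfs-test is an arbitrary mount point and nfs-utils must be installed):
# Mount the NFSv4 export root, list it, then unmount
mkdir -p /mnt/nfs-test
mount -t nfs4 192.168.5.11:/ /mnt/nfs-test
ls /mnt/nfs-test
umount /mnt/nfs-test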
Deploying the NFS server manually
# Install NFS on the master node
yum -y install nfs-utils
# Create the NFS directory
mkdir -p /nfs/data/
# Open up permissions
chmod -R 777 /nfs/data
# Edit the exports file (the default NFS configuration file)
vim /etc/exports
/nfs/data *(rw,no_root_squash,sync)
# Apply the configuration
exportfs -r
# Verify it took effect
exportfs
# Start rpcbind and nfs, and enable them at boot
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs
# Check the RPC service registration
rpcinfo -p localhost
# Test with showmount
showmount -e 192.168.5.11
Install the NFS client on all worker nodes and enable it at boot:
yum -y install nfs-utils
systemctl start nfs && systemctl enable nfs
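Before handing the share to Kubernetes, it is worth confirming that a node can actually mount it. A minimal manual check (the /mnt/nfstest scratch path is arbitrary):
# On any worker node: mount the export, write a test file, then clean up
mkdir -p /mnt/nfstest
mount -t nfs 192.168.5.11:/nfs/data /mnt/nfstest
echo "hello from $(hostname)" > /mnt/nfstest/test.txt
cat /mnt/nfstest/test.txt
umount /mnt/nfstest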
Preparation: we already have an NFS server on the master-1 node (192.168.5.11), exporting the directory /nfs/data.
Static PVs each need a dedicated directory on the NFS export to serve as a mount point. Here we define one PV, so we create a matching directory for it (plus a second directory, pv002, which is exported alongside it).
Create the NFS mount points:
# Create the directories backing the PV volumes
mkdir -p /nfs/data/pv001
mkdir -p /nfs/data/pv002
# Configure exports (this step may be unnecessary, since the parent
# directory /nfs/data is already exported)
vim /etc/exports
/nfs/data/pv001 *(rw,no_root_squash,sync)
/nfs/data/pv002 *(rw,no_root_squash,sync)
# Apply the configuration
exportfs -r
# Restart rpcbind and nfs
systemctl restart rpcbind && systemctl restart nfs
Create the PV – nfs-pv001.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv001
  labels:
    pv: nfs-pv001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfs/data/pv001
    server: 192.168.5.11
Configuration notes:
① capacity sets the PV's size to 1Gi.
② accessModes sets the access mode to ReadWriteOnce. The supported modes are:
2.1 ReadWriteOnce – the PV can be mounted read-write by a single node.
2.2 ReadOnlyMany – the PV can be mounted read-only by many nodes.
2.3 ReadWriteMany – the PV can be mounted read-write by many nodes.
③ persistentVolumeReclaimPolicy sets the reclaim policy to Recycle. The supported policies are:
3.1 Retain – the administrator reclaims the volume manually.
3.2 Recycle – wipes the data on the PV, equivalent to running rm -rf /thevolume/*.
3.3 Delete – deletes the backing storage asset together with the PV.
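A PVC with a matching storage class, access mode, and size is needed to bind the static PV. A minimal sketch (the nfs-pvc001 name and the label selector are illustrative, not from the original files):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc001
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-pv001
Create both objects and confirm the PV turns Bound:
kubectl create -f nfs-pv001.yml -f nfs-pvc001.yml
kubectl get pv,pvc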
Dynamic provisioning: deploying the nfs-client-provisioner
The nfs-client-provisioner (from the kubernetes-incubator/external-storage project) watches for new PVCs and provisions each one as a subdirectory of the NFS export. From the project's deploy directory, first create its RBAC objects; if the provisioner will run in a namespace other than default, rewrite the namespace fields in the manifests first (with $NAMESPACE set to the target namespace):
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
$ kubectl create -f deploy/rbac.yaml
Configure the NFS-Client provisioner
Configure deploy/class.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false" # When set to "false" your PVs will not be archived
                           # by the provisioner upon deletion of the PVC.
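Optionally, the class can be marked as the cluster default, so PVCs that omit a storage class use it automatically (an extra step beyond the original deploy files):
kubectl patch storageclass managed-nfs-storage \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'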
Modify deployment.yaml
The parameters to modify here are the IP address of the NFS server (192.168.5.11) and the path it exports; change both to your actual NFS server and shared directory. (The listing below uses /nfs/data; the test output later in this post was produced with the share set to /nfs/data/pv002.)
[root@master-1 deploy]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.5.11
            - name: NFS_PATH
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.5.11
            path: /nfs/data
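After editing, create the StorageClass and the provisioner Deployment, and confirm the pod is running (the pod name's hash suffix will differ per cluster):
kubectl create -f deploy/class.yaml -f deploy/deployment.yaml
kubectl get pods -l app=nfs-client-provisioner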
Testing the setup
Deploy:
kubectl create -f deploy/test-claim.yaml -f deploy/test-pod.yaml
Check on the NFS server that the files were created; output like the following means everything worked:
[root@master-1 nfs-client]# cd /nfs/data/pv002
[root@master-1 pv002]# ls
archived-ingress-nginx-test-claim-pvc-efd702ba-102d-4d61-9a5f-6fc4805f12c0
Check the PVC and PV status:
[root@master-1 pv002]# kubectl get pvc,pv
NAME                               STATUS   VOLUME                                                               CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/test-claim   Bound    ingress-nginx-test-claim-pvc-efd702ba-102d-4d61-9a5f-6fc4805f12c0   1Mi        RWX            managed-nfs-storage   25m

NAME                                                                                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS          REASON   AGE
persistentvolume/ingress-nginx-test-claim-pvc-efd702ba-102d-4d61-9a5f-6fc4805f12c0   1Mi        RWX            Delete           Bound    ingress-nginx/test-claim   managed-nfs-storage            19m
Delete:
kubectl delete -f deploy/test-pod.yaml -f deploy/test-claim.yaml
Deploying your own PVC and pod
Make sure the storage-class name is correct, i.e. the managed-nfs-storage class defined in deploy/class.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
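The volume.beta.kubernetes.io/storage-class annotation is the legacy way of selecting a class; on current clusters the spec field is preferred. An equivalent sketch:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi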
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sc-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: sc-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
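Once the pod is running, you can verify the volume end to end by writing through the pod and reading the file back on the NFS server (assuming the manifests above are saved as sc-claim.yaml and pod.yaml; pod.yaml is named in the text, sc-claim.yaml is my assumption):
kubectl create -f sc-claim.yaml -f pod.yaml
kubectl exec nginx-sc-pod -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
# Back on the NFS server, the file shows up in the provisioned subdirectory
# (its name is generated as <namespace>-<pvcName>-<pvName>, hence the wildcard):
cat /nfs/data/*sc-claim*/index.html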
Problem: the PVC stays Pending after creation
- PVC error
# kubectl describe pvc test-claim
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  ExternalProvisioning  15s (x25 over 80s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator
- nfs-client-provisioner error
kubectl logs nfs-client-provisioner-5ff56b5cfc-fqnzv
E0205 02:12:39.764761       1 controller.go:756] Unexpected error getting claim reference to claim "ingress-nginx/test-claim": selflink was empty, can't make reference
- The second error leads to the cause: Kubernetes 1.20 disables selfLink by default.
Stop propagating SelfLink (deprecated in 1.16) in kube-apiserver (#94397, @wojtek-t) [SIG API Machinery and Testing]
Solution:
Add "- --feature-gates=RemoveSelfLink=false" to /etc/kubernetes/manifests/kube-apiserver.yaml:
...
  - command:
    - kube-apiserver
    - --advertise-address=192.168.5.11
    - --allow-privileged=true
    - --feature-gates=RemoveSelfLink=false
...
Save and exit; the kubelet will automatically recreate the kube-apiserver static pod.
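Once the apiserver pod has come back, you can confirm the flag is in place and re-check the PVC. Note that the RemoveSelfLink feature gate was removed entirely in later Kubernetes releases (1.24+), where switching to the maintained nfs-subdir-external-provisioner image is the proper fix rather than this workaround. A quick check:
# On the master node, verify the apiserver restarted with the flag
kubectl -n kube-system get pod -l component=kube-apiserver
ps -ef | grep kube-apiserver | grep -o 'feature-gates=[^ ]*'
# The PVC should now leave Pending and bind
kubectl get pvc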



