nfs-client-provisioner

nfs-client-provisioner is a simple external NFS provisioner for Kubernetes. It does not provide NFS itself; an existing NFS server must supply the storage.

Installation and Deployment

1. Create the Deployment

Only two things need to be changed: the IP address of the NFS server and the path it exports. Both appear in two places in the manifest and must be set to your actual NFS server and shared directory.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: 21cn.com/nfs
            - name: NFS_SERVER
              value: <NFS server IP>
            - name: NFS_PATH
              value: /protected # shared storage directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: <NFS server IP>
            path: /protected # shared storage directory
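Assuming the manifest above is saved as deployment.yaml (the file name is illustrative), it can be applied and checked with kubectl:

```shell
# Apply the provisioner Deployment
kubectl apply -f deployment.yaml

# Verify the provisioner Pod comes up in kube-system
kubectl -n kube-system get pods -l app=nfs-client-provisioner

# Inspect its logs if it does not reach Running
kubectl -n kube-system logs deploy/nfs-client-provisioner
```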

2. If RBAC is enabled

Create the authorization objects:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
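Assuming the RBAC manifest above is saved as rbac.yaml, the grants can be applied and spot-checked (names as in the manifest):

```shell
kubectl apply -f rbac.yaml

# Spot-check that the service account can create PersistentVolumes,
# which is the core permission the provisioner needs
kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:kube-system:nfs-client-provisioner
```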

3. Create the StorageClass

Name it nfs; the provisioner field must match the PROVISIONER_NAME environment variable in the Deployment.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: 21cn.com/nfs
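Assuming the StorageClass above is saved as storageclass.yaml, apply and verify it; optionally it can be made the cluster default so that PVCs that name no class also use it (the default-class step is an assumption about your setup):

```shell
kubectl apply -f storageclass.yaml

# The class should list provisioner 21cn.com/nfs
kubectl get storageclass nfs

# Optional: make it the cluster default
kubectl patch storageclass nfs \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```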

Testing

1. Create a PVC

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
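Assuming the claim above is saved as test-claim.yaml, it can be applied and watched until bound; nfs-client-provisioner backs each claim with a subdirectory of the export named from the namespace, PVC name, and PV name:

```shell
kubectl apply -f test-claim.yaml

# The claim should move from Pending to Bound once the
# provisioner creates a matching PV
kubectl get pvc test-claim
kubectl get pv
```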


2. Create a test Pod

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
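Assuming the Pod above is saved as test-pod.yaml, success can be confirmed both from Kubernetes and on the NFS server (the directory glob below is illustrative; the exact name depends on the namespace and PV name):

```shell
kubectl apply -f test-pod.yaml

# The Pod should run to completion (phase Succeeded)
kubectl get pod test-pod

# On the NFS server, the SUCCESS file should now exist inside
# the directory provisioned for test-claim
ls /protected/default-test-claim-pvc-*/SUCCESS
```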

Error 1: after the Pod is created, writing to the volume fails with permission denied. Grant write permission on the shared directory on the NFS server:

chmod go+w <shared storage directory>

Once fixed, the Pod runs to completion and the expected log output appears.
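The permission error can also stem from root squashing on the export. A sketch of an /etc/exports entry that avoids it (the client network range is an assumption; adjust to your cluster nodes):

```shell
# Example /etc/exports line on the NFS server (illustrative range).
# no_root_squash lets the provisioner, which runs as root,
# create the per-PVC directories:
#
#   /protected 192.168.0.0/24(rw,sync,no_root_squash)

# Reload the export table after editing /etc/exports
exportfs -ra
```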

Error 2:

Events:
  Type     Reason            Age                 From                    Message
  ----     ------            ----                ----                    -------
  Warning  FailedScheduling  70s (x25 over 92s)  default-scheduler       persistentvolumeclaim "qisubo-data" not found
  Warning  FailedMount       55s                 kubelet, 192.168.0.241  MountVolume.SetUp failed for volume "pvc-d23f8253-7a2b-11ea-9efc-fa163e0a3c50" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/c4bbba84-7a2b-11ea-9efc-fa163e0a3c50/volumes/kubernetes.io~nfs/pvc-d23f8253-7a2b-11ea-9efc-fa163e0a3c50 --scope -- mount -t nfs 192.168.0.103:/data/protected/meapp10494-opt-disk-pvc-d23f8253-7a2b-11ea-9efc-fa163e0a3c50 /var/lib/kubelet/pods/c4bbba84-7a2b-11ea-9efc-fa163e0a3c50/volumes/kubernetes.io~nfs/pvc-d23f8253-7a2b-11ea-9efc-fa163e0a3c50
Output: Running scope as unit run-15340.scope.
mount: wrong fs type, bad option, bad superblock on 192.168.0.103:/data/protected/meapp10494-opt-disk-pvc-d23f8253-7a2b-11ea-9efc-fa163e0a3c50,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
  Warning  FailedMount  55s  kubelet, 192.168.0.241  MountVolume.SetUp failed for volume "pvc-d5feb10c-7a2b-11ea-9efc-fa163e0a3c50" : mount failed: exit status 32

The NFS client utilities must be installed on each host; they provide the mount.nfs helper the kubelet relies on:

yum install -y nfs-utils        # RHEL/CentOS
# on Debian/Ubuntu: apt-get install -y nfs-common
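A quick way to check a node before scheduling workloads on it: the kubelet runs `mount -t nfs`, which needs the mount.nfs helper shipped with the client package.

```shell
# Verify the NFS mount helper exists on this node
command -v mount.nfs \
  || echo "missing: install nfs-utils (RHEL) / nfs-common (Debian)"
```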
