K8s supports Cinder volumes, but only in ReadWriteOnce mode: if a pod bound to a Cinder volume gets migrated to another node, it will fail, because Nova would have to re-attach the Cinder volume to a second node on OpenStack, which is not supported. A workaround is to use NFS, and luckily OpenStack has a native solution for that (Manila). NFS has another advantage: it supports ReadWriteMany mode, which lets multiple pods share one PVC and enables a higher level of HA.
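To illustrate the ReadWriteMany point, a claim shared by several pods could look like the sketch below; the claim name and the `storageClassName` value `nfs` are assumptions here, standing in for whatever name the NFS StorageClass ends up with:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany        # multiple pods may mount this claim at once
  storageClassName: nfs    # assumed name of the NFS StorageClass
  resources:
    requests:
      storage: 1Gi
```

Any number of pods can then reference `shared-data` in their `volumes` section, which is not possible with a ReadWriteOnce Cinder claim.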
K8s NFS support requires a few things to be done in advance:
- `sudo apt install nfs-common` on all k8s worker nodes; this enables the Ubuntu OS to mount NFS shares.
- Create an NFS share on OpenStack and copy its mount path.
- Deploy a k8s NFS provisioner, k8s-nfs.
- Make the new NFS StorageClass the default, and either disable the Cinder StorageClass or make it non-default.
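The StorageClass step can be sketched as follows, assuming the provisioner registers itself under the name `tdlab.ca/nfs` (the value used in the provisioner manifest); the annotation is what marks a class as the cluster default:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # make this the default class
provisioner: tdlab.ca/nfs
```

The Cinder class can then be demoted by setting the same annotation to `"false"` on it, e.g. with `kubectl patch storageclass <cinder-class-name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'` (the class name varies; check `kubectl get storageclass`).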
Example of an NFS provisioner manifest:
```yaml
kind: Deployment
apiVersion: apps/v1   # extensions/v1beta1 is removed in k8s >= 1.16
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: tdlab.ca/nfs
            - name: NFS_SERVER
              value: 10.254.0.10
            - name: NFS_PATH
              value: /shares/share-fceb21f7-ec20-4cc3-a0bc-c6f93b95638d
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.254.0.10
            path: /shares/share-fceb21f7-ec20-4cc3-a0bc-c6f93b95638d
```
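The manifest references an `nfs-client-provisioner` service account, which must exist and be allowed to manage PVs and PVCs. A minimal RBAC sketch, adapted from the usual external-storage setup (the role names are assumptions):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner   # assumed name
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner     # assumed name
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```

Without this, the provisioner pod starts but fails to create PersistentVolumes when claims arrive.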
If you use a standalone NFS server such as Ubuntu's nfs-kernel-server, you may end up with write-permission issues. This is mainly because you're mounting the NFS volume as root, and NFS blocks that kind of access by default (root squashing) due to security concerns.
However, we can get around it by using:
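Assuming the option meant here is the standard one for this symptom, that would be `no_root_squash` on the export in `/etc/exports` (the share path and client network below are placeholders, not from this setup):

```
/srv/nfs  10.254.0.0/16(rw,sync,no_subtree_check,no_root_squash)
```

Reload the exports with `sudo exportfs -ra` afterwards. Note that `no_root_squash` lets remote root act as root on the share, so it should only be used on trusted networks.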