apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-neonsan
provisioner: neonsan.csi.qingstor.com
parameters:
  fsType: "ext4"
  rep_count: "1"
  pool_name: "kube"
reclaimPolicy: Delete
allowVolumeExpansion: true
- fsType: supports ext3, ext4, and xfs; default ext4.
- rep_count: number of disk replicas; default 1, maximum 3.
- pool_name: NeonSAN pool name; must not be empty.

To make this StorageClass the cluster default, set the annotation .metadata.annotations.storageclass.beta.kubernetes.io/is-default-class value as true. See details in Kubernetes docs.
To enable volume expansion, set the value of .allowVolumeExpansion as true. See details in Kubernetes docs.
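For example, marking the class as the cluster default could look like the following sketch; relative to the StorageClass above, only the annotation is new:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-neonsan
  annotations:
    # marks this StorageClass as the cluster default
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: neonsan.csi.qingstor.com
parameters:
  fsType: "ext4"
  rep_count: "1"
  pool_name: "kube"
reclaimPolicy: Delete
allowVolumeExpansion: true
```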
Volume management includes dynamically provisioning/deleting volumes and attaching/detaching volumes. Please reference the volume example.
- Kubernetes 1.14+
- Neonsan CSI installed
- Neonsan CSI StorageClass created
- Create
$ kubectl create -f sc.yaml
- Check
$ kubectl get sc
NAME PROVISIONER AGE
csi-neonsan neonsan.csi.qingstor.com 14m
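The pvc.yaml referenced below is not included in this document; a minimal sketch consistent with the 20Gi RWO claim shown in the check output (names and sizes assumed from that output) might be:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes:
    - ReadWriteOnce          # RWO, as shown in the check output
  storageClassName: csi-neonsan
  resources:
    requests:
      storage: 20Gi
```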
- Create
$ kubectl create -f pvc.yaml
persistentvolumeclaim/pvc-test created
- Check
$ kubectl get pvc pvc-test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-test Bound pvc-76429525-a930-11e9-9a6a-5254ef68c8c1 20Gi RWO csi-neonsan 25m
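The deploy-nginx.yaml used in the next step is likewise not shown here; a minimal sketch that mounts pvc-test at /mnt (the mount path is assumed from the ls output that follows) could be:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /mnt      # filesystem mount backed by the PVC
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: pvc-test
```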
- Create Deployment
$ kubectl create -f deploy-nginx.yaml
deployment.apps/nginx created
- Check
$ kubectl exec -ti nginx-84474cf674-zfhbs -- /bin/bash
# cd /mnt
# ls
lost+found
- Delete Deployment
$ kubectl delete deploy nginx
deployment.extensions "nginx" deleted
- Delete
$ kubectl delete pvc pvc-test
persistentvolumeclaim "pvc-test" deleted
- Check
$ kubectl get pvc pvc-test
Error from server (NotFound): persistentvolumeclaims "pvc-test" not found
This plugin only supports offline volume expansion. The procedure of offline volume expansion is as follows:
- Ensure the volume is in unmounted status
- Edit the capacity of the PVC
- Mount the volume on the workload

Please reference the volume example.
- Kubernetes 1.14+ cluster
- Add
ExpandCSIVolumes=true
in feature gates
- Set
allowVolumeExpansion
as true
in StorageClass
- Create a Pod mounting a volume
- Unmount Volume
$ kubectl scale deploy nginx --replicas=0
- Change Volume capacity
$ kubectl patch pvc pvc-test -p '{"spec":{"resources":{"requests":{"storage": "40Gi"}}}}'
persistentvolumeclaim/pvc-test patched
- Mount Volume
$ kubectl scale deploy nginx --replicas=1
- Check PVC Capacity
$ kubectl get pvc pvc-test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-test Bound pvc-906f5760-a935-11e9-9a6a-5254ef68c8c1 40Gi RWO csi-neonsan 6m7s
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-6c444c9b7f-d6n29 1/1 Running 0 3m38s
$ kubectl exec -ti nginx-6c444c9b7f-d6n29 -- /bin/bash
root@nginx-6c444c9b7f-d6n29:/# df -ah
Filesystem Size Used Avail Use% Mounted on
...
/dev/vdc 40G 49M 40G 1% /mnt
...
Cloning creates a duplicate of an existing PVC. Please reference the volume example.
- Kubernetes 1.15+
- Enable
VolumePVCDataSource=true
feature gate
- Neonsan CSI installed
- Neonsan CSI StorageClass created
- Source PVC created
- Check source PVC
$ kubectl get pvc pvc-test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-test Bound pvc-3bdbde24-7016-430e-b217-9eca185caca3 20Gi RWO csi-neonsan 3h16m
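A pvc-clone.yaml for the cloning step could look like this sketch, which references the existing claim via spec.dataSource (names and sizes assumed; a clone must request at least the source's capacity):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-clone
spec:
  dataSource:
    name: pvc-test             # source PVC to duplicate
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-neonsan
  resources:
    requests:
      storage: 20Gi            # must be >= the source PVC's capacity
```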
- Clone Volume
$ kubectl create -f pvc-clone.yaml
persistentvolumeclaim/pvc-clone created
- Check
$ kubectl get pvc pvc-clone
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-clone Bound pvc-a75e3f7c-59af-43ef-82d3-300508871432 20Gi RWO csi-neonsan 7m4s
Snapshot management includes creating/deleting snapshots and restoring volumes from snapshots. Please reference the snapshot examples.
- Kubernetes 1.14+
- Enable
VolumeSnapshotDataSource=true
feature gate at kube-apiserver and kube-controller-manager
- Neonsan CSI v1.2.0 installed
- Neonsan CSI StorageClass created
- Source PVC created
- Create
$ kubectl create -f pvc-source.yaml
persistentvolumeclaim/pvc-source created
- Check
$ kubectl get pvc pvc-source
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-source Bound pvc-3bdbde24-7016-430e-b217-9eca185caca3 20Gi RWO csi-neonsan 4h25m
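The snapshot-class.yaml used next is not shown; a minimal sketch, assuming the v1alpha1 snapshot API that the external-snapshotter sidecar used around Kubernetes 1.14, might be:

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: csi-neonsan
# v1alpha1 uses "snapshotter" to name the CSI driver
snapshotter: neonsan.csi.qingstor.com
```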
$ kubectl create -f snapshot-class.yaml
volumesnapshotclass.snapshot.storage.k8s.io/csi-neonsan created
$ kubectl get volumesnapshotclass
NAME AGE
csi-neonsan 16s
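A snapshot.yaml matching the output below could be sketched as follows (same v1alpha1 API assumption as above; source names taken from the earlier steps):

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: snap-1
spec:
  snapshotClassName: csi-neonsan
  source:
    name: pvc-source           # PVC to snapshot
    kind: PersistentVolumeClaim
```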
$ kubectl create -f snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/snap-1 created
$ kubectl get volumesnapshot
NAME AGE
snap-1 91s
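The pvc-snapshot.yaml used to restore a volume from the snapshot could look like this sketch; the claim references the snapshot through spec.dataSource (names and size assumed from the surrounding output):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-snap
spec:
  dataSource:
    name: snap-1               # snapshot to restore from
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-neonsan
  resources:
    requests:
      storage: 20Gi
```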
$ kubectl create -f pvc-snapshot.yaml
persistentvolumeclaim/pvc-snap created
$ kubectl get pvc pvc-snap
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-snap Bound pvc-a56f6ebe-b37b-40d7-bfb7-aafbecb6672b 20Gi RWO csi-neonsan 59m
$ kubectl delete volumesnapshot snap-1
volumesnapshot.snapshot.storage.k8s.io "snap-1" deleted
The volume access mode ReadWriteMany
is only available with
VolumeMode Block
on NeonSAN-CSI.
Following are examples for a Block RWX
PVC. Please reference the volume examples.
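A pvc-block.yaml consistent with the 1Gi RWX claim shown below might be (a sketch; values assumed from the output):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-block
spec:
  volumeMode: Block            # raw block device, no filesystem
  accessModes:
    - ReadWriteMany            # RWX is allowed only with Block mode
  storageClassName: csi-neonsan
  resources:
    requests:
      storage: 1Gi
```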
kubectl apply -f pvc-block.yaml
persistentvolumeclaim/pvc-block created
kubectl get pvc pvc-block
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-block Bound pvc-d4e44291-c8e8-4a6d-9a4c-6a3662672d77 1Gi RWX csi-neonsan 9m57s
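The deploy-nginx-block-1.yaml used next is not included; a sketch that attaches the raw block volume via volumeDevices (the device path is assumed from the fdisk output later) could be:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-block-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-block-1
  template:
    metadata:
      labels:
        app: nginx-block-1
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeDevices:         # block volumes use volumeDevices, not volumeMounts
            - name: data
              devicePath: /dev/xvda
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: pvc-block
```

A second deployment (deploy-nginx-block-2.yaml) would be identical apart from its name, since RWX allows both Pods to attach the same block volume.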
kubectl apply -f deploy-nginx-block-1.yaml
deployment.apps/nginx-block-1 created
kubectl apply -f deploy-nginx-block-2.yaml
deployment.apps/nginx-block-2 created
kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-block-1-65ddc6bf75-zdnqg 1/1 Running 0 8m40s
nginx-block-2-788bfbbf4b-b4kpd 1/1 Running 0 7m55s
Block device in container-1:
kubectl exec -it deployment/nginx-block-1 -- fdisk -l /dev/xvda
Disk /dev/xvda: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Block device in container-2:
kubectl exec -it deployment/nginx-block-2 -- fdisk -l /dev/xvda
Disk /dev/xvda: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes