VolumeFailedDelete, when i deleted pvc but pv wasn't deleted #595
it seems the subdirectory is archived after the unmount operation.

sc yaml:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: 56-nfs-sc
provisioner: nfs.csi.k8s.io
parameters:
  server: "xxx.xxx.xxx.xxx"
  share: "/vol1/k8s"
  subDir: "${pvc.metadata.namespace}/${pvc.metadata.name}"
  mountPermissions: "0"
  onDelete: "archive"
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - nfsvers=3
  - rsize=1048576
  - wsize=1048576
  - tcp
  - hard
  - nolock
```

pvc yaml:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-nfs-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: 56-nfs-sc
```

can somebody help me? |
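For context, here is a small illustrative sketch of what the `subDir` template in the StorageClass above expands to, and the corresponding `archived-` path. The values are hypothetical (taken from the manifests above), and the prefixing rule is inferred from the `VolumeFailedDelete` event reported in this issue, so treat this as an assumption rather than the driver's documented behavior:

```shell
#!/bin/sh
# Hypothetical expansion of subDir: "${pvc.metadata.namespace}/${pvc.metadata.name}".
# Names are taken from the manifests above; the "archived-" prefixing rule
# is inferred from the VolumeFailedDelete event in this issue.
ns=default
pvc=test-pvc-nfs-dynamic

# The provisioned subdirectory on the share.
subdir="$ns/$pvc"

# With onDelete: archive, the driver renames the subdirectory to an
# "archived-" prefixed path, e.g. default/<name> -> archived-default/<name>.
archived="archived-$subdir"

echo "provisioned: /vol1/k8s/$subdir"
echo "archived:    /vol1/k8s/$archived"
```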
the new info, sc yaml:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: 56-nfs-sc
provisioner: nfs.csi.k8s.io
parameters:
  server: "xxx.xxx.xxx.xxx"
  share: "/vol1/k8s"
  # subDir: "${pvc.metadata.namespace}/${pvc.metadata.name}"
  mountPermissions: "0"
  onDelete: "archive"
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - nfsvers=3
  - rsize=1048576
  - wsize=1048576
  - tcp
  - hard
  - nolock
```
|
so archive mode with subDir set does not work, right? @liuyuexizhi |
/kind bug |
yes, it does not work with subDir set! |
@andyzhangx hi, I've found a new issue.
do you know anything about it? |
this driver does not support pvc expansion |
@liuyuexizhi if you set |
hello, I'm having the same issue, but my subdir contains hyphens:
update: commenting out the subDir part did not change the behaviour though |
Yes, I could try it. I've used Helm so far, but for testing purposes I could edit the image used by the pod with |
@MRColorR just |
ok, these are the results of my test:
|
@MRColorR can you provide the nfs container logs: `kubectl logs csi-nfs-controller-xxx -c nfs -n kube-system > csi-nfs-controller.log` |
Sorry, I've already reverted the changes because I can't keep the k8s cluster halted any longer. Luckily, I had already included in the previous comment the part of the controller log from after its startup, through the volume creation, to the errors during the archiving phase that follow the PVC decommission. I hope it will suffice. |
Hello, and sorry for the delay. I’ve been offline for a week. I’ve tested it, and now the behavior is as follows: When I apply a manifest of a test PVC using the defined StorageClass with onDelete: archive, the PV for the PVC is correctly created inside the defined folder on my NFS server. Then, I delete the PVC, and the PV enters the terminating phase but never completes it. The folder of the PV is deleted from the NFS, but no “archived-” folder appears. Notice that the PV just hangs in the terminating state until I manually remove the finalizer with an edit (but the relative folder inside the NFS has been automatically removed). The following is the dump of the logs csi-nfs-controller.log: https://pastebin.com/iyUPiQJq |
@MRColorR can you set csi-driver-nfs/deploy/csi-nfs-controller.yaml (Line 117 in 8036bff)
|
Yes, you’re right. Sorry, I didn’t think about the fact that the image tag for the canary image is always the same. I’ve edited the pull policy and checked in the pod description that the new canary image has been pulled. I’ve tested again, but the behavior seems much the same. From the logs’ timestamps it seems that it’s archiving the pv folder correctly (it reaches and prints in the logs line 268: csi-driver-nfs/pkg/nfs/controllerserver.go, Line 268 in 8036bff; see also Line 257 and Line 258 in 8036bff).
Info: I’ve updated the Pastebin in the previous comment with the new logs.
Edit: If I check the folder on my NFS where the PV folders are stored, I cannot see either the original PV folder or the archived folder. Currently it behaves like a delete policy but hangs in the terminating state. |
Same problem here with v4.8.0 |
@MRColorR thanks for the test, could you upgrade csi-provisioner to v5.0.2 and try again? I suspect it's related to this bug: kubernetes-csi/external-provisioner#1235 |
@MRColorR never mind, I have made a fix to disable removing the archived volume path, since csi-provisioner v5.0.2 does not fix the issue of the volume being deleted twice. Please try the canary image again: gcr.io/k8s-staging-sig-storage/nfsplugin:canary |
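If the volume deletion is issued twice, the second archive attempt finds the source directory already gone, and a plain rename fails. A hedged sketch of an idempotent archive step (an assumption about the shape of the fix, not the driver's actual code): if the source is missing but the archived copy exists, treat the repeated call as already done:

```shell
#!/bin/sh
# Sketch of an idempotent archive step (assumed behavior, not the
# driver's exact code). Paths are hypothetical stand-ins for the share.
share=$(mktemp -d)
src="$share/default/test-busybox-pvc"
dst="$share/archived-default/test-busybox-pvc"

archive() {
  # Repeated DeleteVolume call: the source is gone but the archived
  # copy exists, so report success instead of failing the rename.
  if [ ! -d "$src" ] && [ -d "$dst" ]; then
    echo "already archived"
    return 0
  fi
  mkdir -p "$(dirname "$dst")"   # ensure the archive parent exists
  mv "$src" "$dst" && echo "archived"
}

mkdir -p "$src"
first=$(archive)    # first DeleteVolume call
second=$(archive)   # retried DeleteVolume call
echo "$first / $second"
rm -rf "$share"
```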
Ok, I'll try it as soon as I can, thank you. |
IMHO this issue should have been reopened... as it makes the driver unusable in real life |
please try with the newly released version: https://github.com/kubernetes-csi/csi-driver-nfs/releases/tag/v4.9.0, it should have the fix, thanks |
I've just tested release 4.9.0 through Helm and it works. The issue regarding onDelete: archive seems resolved. |
```
Warning  VolumeFailedDelete  7s (x5 over 22s)  nfs.csi.k8s.io_linux-k8s-master-2_c99df0ef-f12f-40a3-998f-f99dbc758a17  rpc error: code = Internal desc = archive subdirectory(/tmp/pvc-ee9206d9-a4fc-4897-93ab-0bb3f25c893a/default/test-busybox-pvc, /tmp/pvc-ee9206d9-a4fc-4897-93ab-0bb3f25c893a/archived-default/test-busybox-pvc) failed with rename /tmp/pvc-ee9206d9-a4fc-4897-93ab-0bb3f25c893a/default/test-busybox-pvc /tmp/pvc-ee9206d9-a4fc-4897-93ab-0bb3f25c893a/archived-default/test-busybox-pvc: no such file or directory
```
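The "no such file or directory" in the event above is what `rename(2)` returns when either the source is already gone or the target's parent directory does not exist. With a nested `subDir` like `<namespace>/<name>`, the `archived-<namespace>` parent may not exist yet; a minimal local simulation of that case (hypothetical paths, not the driver's code):

```shell
#!/bin/sh
# Local simulation of the archive rename failing because the target's
# parent directory is missing. Paths are hypothetical stand-ins for
# the NFS share layout; this is not the driver's actual code.
share=$(mktemp -d)
mkdir -p "$share/default/test-busybox-pvc"

# Naive archive: rename to "archived-default/<name>" fails because the
# parent directory "archived-default" does not exist yet.
naive=ok
mv "$share/default/test-busybox-pvc" \
   "$share/archived-default/test-busybox-pvc" 2>/dev/null || naive=failed
echo "naive rename: $naive"

# Creating the parent first lets the same rename succeed.
mkdir -p "$share/archived-default"
archived=no
mv "$share/default/test-busybox-pvc" \
   "$share/archived-default/test-busybox-pvc" && archived=yes
echo "with mkdir -p: $archived"

rm -rf "$share"
```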