failed to delete pv #192
As discussed with @ShyamsundarR, the current per-PV configmap has a scaling issue. We aim to shift to a per-Ceph-cluster configmap and embed the configmap name in the PV name, so the controller server can look up the configmap and get the Ceph config from it.
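For illustration only, here is a minimal sketch of what "embed the configmap name in the PV name" could look like: the volume ID carries the per-cluster configmap name, so DeleteVolume can recover it later. The prefix, delimiter, and function names are assumptions for this example, not the actual ceph-csi encoding.

```go
package main

import (
	"fmt"
	"strings"
)

// composeVolumeID packs the per-cluster configmap name and the image name
// into a single volume ID, e.g. "csi-rbd:<configmap>:<image>".
func composeVolumeID(configMapName, imageName string) string {
	return fmt.Sprintf("csi-rbd:%s:%s", configMapName, imageName)
}

// parseVolumeID reverses composeVolumeID so the controller server can look
// up the per-cluster configmap before talking to the Ceph cluster.
func parseVolumeID(volumeID string) (configMapName, imageName string, err error) {
	parts := strings.SplitN(volumeID, ":", 3)
	if len(parts) != 3 || parts[0] != "csi-rbd" {
		return "", "", fmt.Errorf("malformed volume ID %q", volumeID)
	}
	return parts[1], parts[2], nil
}

func main() {
	id := composeVolumeID("ceph-cluster-1", "pvc-0a1b2c3d")
	cm, img, err := parseVolumeID(id)
	if err != nil {
		panic(err)
	}
	fmt.Println(cm, img) // ceph-cluster-1 pvc-0a1b2c3d
}
```

The point is that the volume ID itself carries enough information to find the cluster-wide config, so no per-PV configmap object is needed.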
This might be related to kubernetes-csi/external-provisioner#131: what might be happening is that while one DeleteVolume call is still in progress, the external-provisioner issues another one for the same volume, and the second call fails because the first one has already removed the metadata.
@gman0 I destroyed the setup, but I will try once again. From the logs, it looks like the issue is with the configmap (I need to investigate in which case I am hitting this issue, why I am getting a "configmap not found" error, and when the configmap got deleted).
@gman0 what should we do if the configmap is not found? Currently we are returning an error.
@Madhu-1 That's one of the harder problems - there is no way to distinguish between "volume already deleted" and "cannot delete a volume because of missing metadata". See the issue I've linked above.
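For reference, a minimal sketch of the "treat missing metadata as already deleted" option, which matches the CSI requirement that DeleteVolume be idempotent. The helper names (`loadVolumeMetadata`, `deleteImage`, `errMetadataNotFound`) are hypothetical stand-ins, not the merged fix, and the trade-off is exactly the one described above: a genuinely lost configmap becomes indistinguishable from a volume that was already deleted.

```go
package controller

import (
	"context"
	"errors"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Hypothetical stand-ins for the driver's real metadata store and RBD calls.
var errMetadataNotFound = errors.New("volume metadata not found")

type volumeMetadata struct {
	Pool  string
	Image string
}

type controllerServer struct{}

func loadVolumeMetadata(volID string) (*volumeMetadata, error) {
	// Placeholder: a real driver would read the configmap here.
	return nil, errMetadataNotFound
}

func deleteImage(ctx context.Context, meta *volumeMetadata) error {
	// Placeholder: a real driver would remove the RBD image here.
	return nil
}

// DeleteVolume treats "metadata already gone" as success, so a retried RPC
// arriving after a completed deletion does not surface a spurious error.
func (cs *controllerServer) DeleteVolume(ctx context.Context, req *csi.DeleteVolumeRequest) (*csi.DeleteVolumeResponse, error) {
	volID := req.GetVolumeId()
	if volID == "" {
		return nil, status.Error(codes.InvalidArgument, "volume ID missing in request")
	}

	meta, err := loadVolumeMetadata(volID)
	if errors.Is(err, errMetadataNotFound) {
		// Most likely an earlier DeleteVolume attempt already removed the
		// image and its metadata; report success to keep the call idempotent.
		return &csi.DeleteVolumeResponse{}, nil
	}
	if err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}

	if err := deleteImage(ctx, meta); err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}
	return &csi.DeleteVolumeResponse{}, nil
}
```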
Hello,
when will this be fixed?
Got hit by this in rbdplugin as well. Some hopefully helpful logs:
but on the ceph cluster:
so the image is actually deleted, but external-provisioner ignores the success (kubernetes-csi/external-provisioner#131). Looking at the logs, the first attempt (A) to delete takes too long to finish, so the external-provisioner issues a new DeleteVolume RPC (B).
(A) holds the lock though, so even though (B) is the "active" RPC (which will have the definitive answer about the result of the operation), it has to wait till (A) finishes:
Once (A) is successfully done (along with the deletion of the image AND the metadata stored in configmap), it releases the lock, so that (B) may continue on its merry way:
Since the metadata is now gone, (B) immediately exits because it cannot continue without it:
And that's what external-provisioner sees: the error from the most recent "active" RPC, even though the image itself is already gone. I'll create a new issue tomorrow for possible solutions to this situation.
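To illustrate the locking behaviour described above, here is a small self-contained sketch (assumed names, not the real ceph-csi lock implementation) of a per-volume-ID mutex: a second DeleteVolume attempt (B) for the same volume blocks until the first attempt (A) releases the lock.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// volumeLocks hands out one mutex per volume ID so that concurrent
// operations on the same volume are serialized.
type volumeLocks struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex
}

func newVolumeLocks() *volumeLocks {
	return &volumeLocks{locks: map[string]*sync.Mutex{}}
}

func (v *volumeLocks) lockFor(volID string) *sync.Mutex {
	v.mu.Lock()
	defer v.mu.Unlock()
	if _, ok := v.locks[volID]; !ok {
		v.locks[volID] = &sync.Mutex{}
	}
	return v.locks[volID]
}

func main() {
	locks := newVolumeLocks()
	var wg sync.WaitGroup

	deleteAttempt := func(name string, work time.Duration) {
		defer wg.Done()
		l := locks.lockFor("pvc-0a1b2c3d")
		l.Lock() // (B) blocks here while (A) holds the lock
		defer l.Unlock()
		fmt.Printf("%s: acquired lock, deleting\n", name)
		time.Sleep(work) // stand-in for the slow rbd image removal
		fmt.Printf("%s: done\n", name)
	}

	wg.Add(2)
	go deleteAttempt("(A)", 200*time.Millisecond)
	time.Sleep(20 * time.Millisecond) // let (A) grab the lock first, like the retried RPC in the logs
	go deleteAttempt("(B)", 10*time.Millisecond)
	wg.Wait()
}
```

Running this prints (A) acquiring the lock and finishing before (B) starts, mirroring the serialization seen in the logs: by the time (B) runs, (A) has already removed both the image and its metadata.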
Tested and not reproducible now, closing this one.
Describe the bug
Failed to delete the PV when the PVC was deleted.
Kubernetes and Ceph CSI Versions
Kubernetes: 1.13
Ceph CSI: 1.0.0
To Reproduce
Create 100 PVCs and then delete all 100 PVCs.
Expected behavior
All PVs should be deleted when their PVCs are deleted.