CRD is hanging while deleting with "foregroundDeletion" policy #1755
this appears to be kubernetes/kubernetes#87603
we're not doing anything special for clusterIP allocation or garbage collector settings, so I'm pretty sure this is purely an upstream kubernetes bug you've found.
@BenTheElder Any idea how to work around this? I can delete with …
I don't think there's a good workaround; there's a fix in progress upstream. I've commented there.
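For context on the deletion policies under discussion, here is a minimal sketch (an assumption: plain dicts mirroring the Kubernetes `DeleteOptions` schema, no cluster or client library required) of the body sent with a DELETE request to choose a cascading-deletion policy:

```python
# Sketch: building the DeleteOptions body that selects a cascading-deletion
# policy. "Foreground" keeps the owner (with a foregroundDeletion finalizer)
# until the garbage collector removes all blocking dependents.
import json

VALID_POLICIES = {"Foreground", "Background", "Orphan"}

def delete_options(policy: str = "Foreground") -> dict:
    """Build a DeleteOptions body selecting a propagation policy."""
    if policy not in VALID_POLICIES:
        raise ValueError(f"unknown propagationPolicy: {policy}")
    return {
        "apiVersion": "v1",
        "kind": "DeleteOptions",
        "propagationPolicy": policy,
    }

if __name__ == "__main__":
    print(json.dumps(delete_options("Foreground")))
```

With "Background" the owner is deleted immediately and dependents are collected afterwards, which is why switching the policy can sidestep a hang like this one.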
I am seeing the same problem with our own CRDs (not …)
@ctron I need a little more information than that. Is this across the same Kubernetes version?
It seems to work on Minikube (1.17.3) and OpenShift (1.18.3), but fails on Kind (0.8.1 -> kindest/node:v1.18.2). Let me know if you need more information.
both kind and minikube use kubeadm under the hood, so I'm curious what the difference is here. Please try a matching minikube version (k8s = v1.18.2).
@neolit123 Unfortunately that isn't possible due to kubernetes/minikube#8414
I'd appreciate it if this could be reproduced with a raw kubeadm setup too.
It looks like I can select the Kubernetes version with Minikube using …
So I can confirm that using Minikube with …
you might try https://github.com/kubernetes-sigs/kind/releases/tag/v0.8.0#New-Features |
Just tested with Kubernetes 1.18.6; same issue.
Since this is reproduced in minikube, and the original in kubernetes/kubernetes#87603, I'm going to close this in the KIND tracker. |
Btw … switching back to 1.17.x with Kind works as well. |
Excellent. If you can identify the kubernetes bug please file an issue with the kubernetes/kubernetes tracker so we can get it fixed upstream. |
Or kubernetes/kubeadm if it turns out to be some kubeadm setting. |
What happened:
I'm running a CRD controller. On deployment of the CRD, the controller creates a set of Kubernetes objects: `services`, `statefulset`, `role`, `rolebinding`, etc. The operator also sets the `ownerReference` (pointing at the CRD) with `ownerReference.blockOwnerDeletion=true` on those objects.

Now, when I delete the CRD with the `foregroundDeletion` policy, the CRD is hanging. I checked the dependent objects: the `deletionTimestamp` and finalizer are set, but somehow the garbage collector isn't cleaning them up.
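The ownerReference wiring described above can be sketched as follows (resource names are hypothetical; the real controller presumably builds these through its client library):

```python
# Sketch of the ownerReferences entry a controller sets on each dependent
# object (names below are hypothetical). blockOwnerDeletion=true means the
# garbage collector must delete this dependent before it may remove the
# owner's foregroundDeletion finalizer.
def make_owner_reference(owner: dict) -> dict:
    """Build an ownerReferences entry pointing at `owner` (a custom resource)."""
    return {
        "apiVersion": owner["apiVersion"],
        "kind": owner["kind"],
        "name": owner["metadata"]["name"],
        "uid": owner["metadata"]["uid"],
        "controller": True,
        "blockOwnerDeletion": True,
    }

if __name__ == "__main__":
    cr = {  # hypothetical custom resource instance
        "apiVersion": "example.com/v1",
        "kind": "MyApp",
        "metadata": {"name": "demo", "uid": "1234-abcd"},
    }
    svc = {"metadata": {"name": "demo-svc",
                        "ownerReferences": [make_owner_reference(cr)]}}
    print(svc["metadata"]["ownerReferences"][0]["blockOwnerDeletion"])  # True
```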
What you expected to happen:
The created `services`, `statefulset`, `role`, `rolebinding`, etc. will be deleted first, and once all of those are deleted by the garbage collector the CRD itself is removed.

Anything else we need to know?:
Also, when I described the services, I encountered a warning like the one below:
Environment:
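The expected foreground-deletion ordering can be sketched as a toy model (a deliberately simplified assumption about the garbage collector's behavior, not the real implementation):

```python
# Toy model of foreground deletion (assumption: simplified, not real GC code).
# The owner keeps its foregroundDeletion finalizer while any dependent with
# blockOwnerDeletion=true remains; the reported bug is the state where this
# condition never clears.
FOREGROUND = "foregroundDeletion"

def try_release_owner(owner: dict, dependents: list) -> bool:
    """Remove the owner's foregroundDeletion finalizer iff no dependent with
    blockOwnerDeletion=true is left. Returns True once the owner is released."""
    if any(d.get("blockOwnerDeletion") for d in dependents):
        return False  # the "hanging" state described in this report
    if FOREGROUND in owner["finalizers"]:
        owner["finalizers"].remove(FOREGROUND)
    return True

if __name__ == "__main__":
    owner = {"name": "my-cr", "finalizers": [FOREGROUND]}
    deps = [{"name": "svc", "blockOwnerDeletion": True}]
    print(try_release_owner(owner, deps))  # False: owner still blocked
    deps.clear()                           # the GC deletes the dependents
    print(try_release_owner(owner, deps))  # True: finalizer removed
```

In the healthy case the garbage collector drives `dependents` to empty and the finalizer comes off; in the bug described here the dependents already have their `deletionTimestamp` set but are never actually removed, so the owner stays blocked indefinitely.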