Fix Fabric8 client delete behavior due to new 4.6.x version #2223

Closed
ppatierno opened this issue Nov 20, 2019 · 0 comments · Fixed by #2241
The fabric8 Kubernetes client changed the delete orphan behavior between 4.1.3 and 4.6.x (we are currently using 4.6.2).
The following issues were raised by people working on the EnMasse project and the related operator.

fabric8io/kubernetes-client#1775
fabric8io/kubernetes-client#1840

We faced the same problem while running the TopicOperator IT tests, where a KafkaTopic wasn't deleted even when calling the delete() method via the fabric8 client.
This is the part of the code that actually changed the previous behavior:

https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/dsl/base/OperationSupport.java#L210

To get back the right behavior and have the KafkaTopic deleted, we added cascading(true) to each delete() call involved, as shown in the sketch below.
We did the same in the Topic Operator code for the path where a topic is deleted from Kafka and the TO has to delete the related KafkaTopic resource.
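For illustration, a minimal sketch of that kind of change, using a ConfigMap as a stand-in for the KafkaTopic custom resource (the namespace and resource name here are hypothetical):

```java
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class CascadingDeleteSketch {
    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // The 4.6.x client changed the default delete/orphan behavior, so the
            // deletion no longer behaved as the tests expected; cascading(true)
            // explicitly requests cascading deletion and restores the old behavior.
            Boolean deleted = client.configMaps()
                    .inNamespace("my-namespace")   // hypothetical namespace
                    .withName("my-topic-config")   // hypothetical resource name
                    .cascading(true)               // explicitly request cascading deletion
                    .delete();
            System.out.println("Deleted: " + deleted);
        }
    }
}
```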

This issue is about checking the overall Strimzi operator(s) code and tests so that, wherever we delete a custom resource via the fabric8 Kubernetes client, we always add either .cascading(true) or .withPropagationPolicy(..) to get back the old behaviour.
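For reference, a sketch of the .withPropagationPolicy(..) alternative; this assumes the 4.6.x DSL where the policy is passed as a string such as "Foreground" (later client versions take a DeletionPropagation enum instead), again with hypothetical namespace and resource names:

```java
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class PropagationPolicyDeleteSketch {
    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // "Foreground" asks the API server to delete dependents before the owner;
            // "Background" deletes the owner first and garbage-collects dependents later.
            // The string form is assumed for the 4.6.x client; newer versions use the
            // DeletionPropagation enum instead.
            Boolean deleted = client.configMaps()
                    .inNamespace("my-namespace")     // hypothetical namespace
                    .withName("my-topic-config")     // hypothetical resource name
                    .withPropagationPolicy("Foreground")
                    .delete();
            System.out.println("Deleted: " + deleted);
        }
    }
}
```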
