
Remove Cluster and Pod finalizers #160

Merged
merged 1 commit into main from rk/K8S-250/remove-pod-finalizers on Jun 21, 2024

Conversation

RafalKorepta
Contributor

Previous usage of finalizer handlers was unreliable when Kubernetes Nodes flip their Ready status. Local SSD disks attached to a Redpanda Pod prevent rescheduling, because the Persistent Volume affinity binds the Pod to a single Node. By the time a Kubernetes Node comes back to life, the Cluster controller could already have deleted the Redpanda data (PVC deletion and Redpanda decommissioning). If that Redpanda node hosted a single-replica partition, the data would be lost.

If the majority of Redpanda processes ran on unstable Kubernetes Nodes, the Redpanda operator could break the whole cluster by losing Raft quorum.
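
As a rough illustration of the mechanics involved (not the operator's actual code), here is a minimal controller-runtime sketch of stripping a finalizer from a Pod; the finalizer name and helper function are assumptions made for the example.

```go
package finalizers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// podFinalizer is a hypothetical finalizer name, used only for illustration.
const podFinalizer = "redpanda.vectorized.io/finalizer"

// removePodFinalizer drops the finalizer (if present) and persists the
// change, so the Pod can terminate through normal garbage collection
// instead of waiting for the operator to observe the Node flapping
// back to Ready.
func removePodFinalizer(ctx context.Context, c client.Client, pod *corev1.Pod) error {
	if controllerutil.ContainsFinalizer(pod, podFinalizer) {
		controllerutil.RemoveFinalizer(pod, podFinalizer)
		return c.Update(ctx, pod)
	}
	return nil
}
```

Once no reconciler re-adds the finalizer, Pod deletion no longer blocks on the operator being healthy.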

References

#112
redpanda-data/redpanda#6942
https://redpandadata.atlassian.net/browse/K8S-250

@CLAassistant commented Jun 19, 2024

CLA assistant check
All committers have signed the CLA.


@RafalKorepta force-pushed the rk/K8S-250/remove-pod-finalizers branch from 24fa001 to 2cc7747 on June 21, 2024 at 13:00
@RafalKorepta merged commit 1d58db3 into main on Jun 21, 2024
5 checks passed