🐛 Wait and requeue if LB + its ports not deleted #2122
base: main
Conversation
/hold
/hold cancel
/test pull-cluster-api-provider-openstack-e2e-test
/lgtm
New changes are detected. LGTM label has been removed.
When the LB is deleted as part of a cluster deletion, it goes through the PENDING_DELETE state, and at that stage there is nothing we can do but wait for the LB to actually be deleted. If the LB is in that state during deletion, return no error but request a reconcile after some time.
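As a rough sketch (not the provider's actual code), the deletion path could detect this state via gophercloud; the sentinel error used to signal "requeue later" is a hypothetical stand-in for however the reconciler surfaces that:

```go
// Minimal sketch, assuming gophercloud v1 for the Octavia API; the
// ErrWaitingForLBDelete sentinel is hypothetical and only illustrates
// "ask the caller to reconcile again later".
package loadbalancer

import (
	"errors"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/loadbalancers"
)

var ErrWaitingForLBDelete = errors.New("load balancer still PENDING_DELETE, requeue")

func checkLBDeleted(lbClient *gophercloud.ServiceClient, lbID string) error {
	lb, err := loadbalancers.Get(lbClient, lbID).Extract()
	if err != nil {
		if _, ok := err.(gophercloud.ErrDefault404); ok {
			return nil // already gone, nothing to wait for
		}
		return err
	}
	if lb.ProvisioningStatus == "PENDING_DELETE" {
		// Octavia is still tearing the LB down; there is nothing to do but wait.
		return ErrWaitingForLBDelete
	}
	return nil
}
```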
In a best-effort mode, when cleaning up a load balancer, wait for the ports with a device ID mapped to the LB ID and a certain name prefix (created by Octavia itself, not managed by CAPO) to be deleted before claiming the LB is really gone. This avoids the reconcile failing later when it tries to remove the network while some ports are still attached.
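The port check could look roughly like the sketch below. The exact device-ID value ("lb-" + lbID) and the "octavia-lb-" name prefix are illustrative assumptions; the description above only says the ports carry a device ID derived from the LB ID and a known prefix:

```go
// Minimal sketch, assuming gophercloud v1 for the Neutron ports API; the
// DeviceID mapping and the name prefix below are guesses for illustration.
package loadbalancer

import (
	"strings"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/ports"
)

func octaviaPortsExist(netClient *gophercloud.ServiceClient, lbID string) (bool, error) {
	allPages, err := ports.List(netClient, ports.ListOpts{DeviceID: "lb-" + lbID}).AllPages()
	if err != nil {
		return false, err
	}
	allPorts, err := ports.ExtractPorts(allPages)
	if err != nil {
		return false, err
	}
	for _, p := range allPorts {
		// Only count ports Octavia created for this LB, not CAPO-managed ones.
		if strings.HasPrefix(p.Name, "octavia-lb-") {
			return true, nil
		}
	}
	return false, nil
}
```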
}

if lb == nil {
	return nil
if lbPortsExist {
	s.scope.Logger().Info("Load balancer ports still exist, waiting for them to be deleted", "name", loadBalancerName)
@mdbooth ready for review when time permits. Maybe over-engineered, but it seems to do the job. Feedback is welcome.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What this PR does / why we need it:
Wait and requeue if LB is in PENDING_DELETE
When the LB is deleted as part of a cluster deletion, it goes through the PENDING_DELETE state, and at that stage there is nothing we can do but wait for the LB to actually be deleted. If the LB is in that state during deletion, return no error but request a reconcile after some time.
Wait for Octavia-managed ports to be removed
In a best-effort mode, when cleaning up a load balancer, wait for the ports with a device ID mapped to the LB ID and a certain name prefix (created by Octavia itself, not managed by CAPO) to be deleted before claiming the LB is really gone. This avoids the reconcile failing later when it tries to remove the network while some ports are still attached.
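For illustration only, the two waits could compose in the deletion flow roughly as below, reusing the hypothetical helpers from the earlier sketches; the real provider surfaces the requeue through its own error and result types:

```go
// Rough composition sketch: requeue while Octavia is still deleting the LB
// or while its ports are still attached, so the later network deletion
// does not fail.
func deleteLoadBalancerBestEffort(lbClient, netClient *gophercloud.ServiceClient, lbID string) error {
	if err := checkLBDeleted(lbClient, lbID); err != nil {
		return err // may be ErrWaitingForLBDelete, i.e. "requeue after some time"
	}
	portsLeft, err := octaviaPortsExist(netClient, lbID)
	if err != nil {
		return err
	}
	if portsLeft {
		// Best effort: don't claim the LB is gone while Octavia's ports remain.
		return ErrWaitingForLBDelete
	}
	return nil
}
```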
Which issue(s) this PR fixes:
Fixes #2124
Fixes #2121