🐛 Ensure that capd tests always clean up the kind cluster #2917
Conversation
I'm having to abuse variable scoping a little bit; otherwise I would have to change `framework.InitManagementCluster` and maybe add another method to the interface. This also aggregates the log errors so that if one step errors it doesn't prevent the rest from still attempting. Worst case they all fail, but we still get at least one error.
/milestone v0.3.4
/approve
/assign @fabriziopandini @sedefsavas
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: benmoss, vincepri

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
lgtm from my side.
/test pull-cluster-api-capd-e2e
/lgtm
What this PR does / why we need it:
Ensure that we always delete the kind cluster created by the e2e suite. The problem is that `InitManagementCluster` doesn't return errors; it uses Gomega assertions, so if an error occurs after the cluster is created it immediately returns from the `BeforeSuite` before `mgmt` is assigned. Because of the `NewManagementClusterFn` feature I was able to abuse variable scoping a little bit to work around this, but otherwise I would have to change `framework.InitManagementCluster` and maybe add another method to the interface. It definitely isn't the cleanest solution right now.

This also aggregates the log errors so that if one step errors it doesn't prevent the rest from still attempting. Worst case they all fail, but we still get at least one error.
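The variable-scoping workaround can be sketched as follows. This is a minimal illustration with hypothetical names (`Cluster`, `newCluster`, `setup`), not the suite's actual API: the key point is that the cluster-creation hook assigns to an outer-scope variable as soon as the cluster exists, so a later failure in setup no longer leaves the cluster unreachable for cleanup.

```go
package main

import "fmt"

// Cluster is a hypothetical stand-in for the e2e suite's management cluster.
type Cluster struct{ name string }

// Teardown deletes the underlying kind cluster.
func (c *Cluster) Teardown() { fmt.Println("deleting kind cluster", c.name) }

// mgmt lives at package scope so the teardown path can always reach it,
// mirroring how an AfterSuite can see a variable the BeforeSuite sets.
var mgmt *Cluster

// newCluster mimics a NewManagementClusterFn-style hook: it captures the
// cluster via scope *before* anything else in setup can fail.
func newCluster(name string) (*Cluster, error) {
	c := &Cluster{name: name}
	mgmt = c // the crucial assignment: happens as soon as the cluster exists
	return c, nil
}

// setup simulates InitManagementCluster failing after cluster creation.
func setup() error {
	if _, err := newCluster("capd-e2e"); err != nil {
		return err
	}
	return fmt.Errorf("component install failed") // simulated late failure
}

func main() {
	_ = setup() // fails after the cluster was created
	if mgmt != nil {
		mgmt.Teardown() // cleanup still runs, so the kind cluster is deleted
	}
}
```

Without the early assignment inside `newCluster`, the simulated failure would return before `mgmt` was set and the cleanup branch would be skipped, which is exactly the leak this PR fixes.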
Which issue(s) this PR fixes:
Fixes #2881
/assign @fabriziopandini