
TestReconcileMachinePoolScaleToFromZero is flaky #8993

Closed
killianmuldoon opened this issue Jul 12, 2023 · 10 comments · Fixed by #9745
Labels: area/machinepool · help wanted · kind/flake · triage/accepted

Comments

@killianmuldoon (Contributor)

This unit test has been flaky for some time now and the flake needs investigation.

Link to the flakes:

/kind flake

@k8s-ci-robot k8s-ci-robot added kind/flake Categorizes issue or PR as related to a flaky test. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jul 12, 2023
@killianmuldoon (Contributor, Author)

@Jont828 given your recent work on MachinePools maybe you have some time to look into this one?

@killianmuldoon killianmuldoon added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jul 12, 2023
@a-hilaly

I tried to run the test locally multiple times, but I was not able to reproduce the errors seen in https://storage.googleapis.com/k8s-triage/index.html?job=.*-cluster-api-.*&xjob=.*-provider-.*
Is there a better way to chase this flaky test, @killianmuldoon?

@killianmuldoon (Contributor, Author)

@a-hilaly - you might have to run these tests thousands of times locally to catch the flake, I'm afraid. The easiest way to do this is to add a for loop inside the test and run it locally as many times as you need to reliably catch the flake.
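A minimal sketch of that approach, assuming the body of TestReconcileMachinePoolScaleToFromZero is either pasted into the loop or factored into a helper; the wrapper name, iteration count, and package name below are hypothetical:

```go
package controllers_test

import (
	"fmt"
	"testing"
)

// Hypothetical stress wrapper: repeat the flaky test many times in one run so
// the failure shows up locally. In practice you would paste or call the existing
// body of TestReconcileMachinePoolScaleToFromZero inside the subtest.
func TestReconcileMachinePoolScaleToFromZeroRepeated(t *testing.T) {
	for i := 0; i < 2000; i++ {
		t.Run(fmt.Sprintf("attempt-%d", i), func(t *testing.T) {
			// Original test logic goes here, e.g. a hypothetical helper:
			// runReconcileMachinePoolScaleToFromZero(t)
		})
	}
}
```

Alternatively, go test's -count flag repeats the test without touching the source, e.g. `go test -run TestReconcileMachinePoolScaleToFromZero -count=2000 ./...` run from the package that contains the test; adding -race can also help surface timing-related flakes.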

@Jont828 (Contributor) commented Jul 21, 2023

@killianmuldoon I'm trying to wrap up some PRs in CAPI and CAPZ right now, but I can take a look when I have some time. FYI, I believe those links to the flakes aren't working anymore, as the failures are no longer within the last week of data.

@killianmuldoon killianmuldoon added the area/machinepool Issues or PRs related to machinepools label Nov 1, 2023
@killianmuldoon (Contributor, Author)

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Nov 8, 2023
@loktev-d (Contributor)

@killianmuldoon hi! I'm new to the project and looking for ways to contribute. Can I take a look at this issue?

@chrischdi (Member)

@loktev-d, feel free to investigate. However, finding the root cause and a resolution may not be easy or straightforward, as both are currently unknown and this is a very complex part of CAPI. This issue might not be a perfect fit for getting started in the project.

@Jont828 (Contributor) commented Nov 29, 2023

@loktev-d I was busy trying to push to get DockerMachinePools #8842 in and couldn't get to this. It looks like you've already opened a PR; is it safe to say that you're picking this issue up?

@loktev-d (Contributor)

@Jont828 Yes, I forgot to assign myself.

/assign
