[Flaky Test] HA Multi zones tests timeout frequently #9585
Comments
The Gardener project currently lacks enough active contributors to adequately respond to all issues.

/lifecycle stale
The Gardener project currently lacks enough active contributors to adequately respond to all issues.

/lifecycle rotten
The Gardener project currently lacks enough active contributors to adequately respond to all issues.

/close
@gardener-ci-robot: Closing this issue. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
How to categorize this issue?
/area testing
/kind flake
Which test(s)/suite(s) are flaking:
The tests run by the ProwJob pull-gardener-e2e-kind-ha-multi-zone. More specifically: [It] Shoot Tests Shoot with workers Create, Update, Delete [Shoot, default, basic, simple]

CI link:
https://prow.gardener.cloud/view/gs/gardener-prow/pr-logs/pull/gardener_gardener/9449/pull-gardener-e2e-kind-ha-multi-zone/1779763882739372032
https://testgrid.k8s.io/gardener-gardener#ci-gardener-e2e-kind-ha-multi-zone, for example https://prow.gardener.cloud/view/gs/gardener-prow/logs/ci-gardener-e2e-kind-ha-multi-zone/1779501481980858368
Reason for failure:
Apparently too many machines are registered while scaling down:
{"level":"info","ts":"2024-04-14T18:03:00.661Z","logger":"shoot-test.test","msg":"Shoot is not yet reconciled","shoot":{"name":"e2e-default","namespace":"garden-local"},"reason":"condition type EveryNodeReady is not true yet, had message too many worker nodes are registered. Exceeding maximum desired machine count (4/3) with reason NodesScalingDown"}
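For anyone triaging similar failures, the actual/desired machine counts can be pulled out of the condition message with a quick sketch like this (`parse_machine_counts` is a hypothetical helper for illustration, not part of the Gardener codebase):

```python
import re

def parse_machine_counts(message: str):
    """Extract (actual, desired) machine counts from an EveryNodeReady
    condition message such as the one in the log line above.
    Returns None if the message does not match the expected pattern."""
    match = re.search(
        r"Exceeding maximum desired machine count \((\d+)/(\d+)\)", message
    )
    if match is None:
        return None
    actual, desired = (int(g) for g in match.groups())
    return actual, desired

msg = (
    "too many worker nodes are registered. Exceeding maximum desired "
    "machine count (4/3) with reason NodesScalingDown"
)
print(parse_machine_counts(msg))  # → (4, 3)
```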
I noticed the flaky test as part of #9449, but the test runs on testgrid also contain the exact same error message.