🏃 🛠 fix prow presubmits #1201
Conversation
/hold
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: alexeldeib, mengqiy. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing …
/test pull-kubebuilder-e2e
Hmm, seems like they moved the runner in test-infra? Looking at kind presubmits, they use …
oh, the actual runner is …
fixing in kubernetes/test-infra#15291
👍
/retest
/retest
@alexeldeib Any update?
@mengqiy sorry, I've been at KubeCon since the test-infra PR merged, and it seems the test runner requires more adjustments on the invocation side. Happy to merge this as-is to fix the vanilla test and finish e2e separately. If that sounds ok to you, feel free to cancel the hold.
Oh, actually I think it's because the files changed to …
If this PR still doesn't pass the e2e after that, I would go ahead and remove the hold and merge anyway to get the other fix done.
@mengqiy tested. Apologies again for all the trial and error; the new test-infra tooling should help avoid this kind of mess in the future.
/retest
Force-pushed from e164926 to 3412515.
@alexeldeib It seems we have some problem with kubeadm.
A few additional references for kind using DinD: kubernetes-sigs/kind#303
I can't repro this locally, but it seems even after fixing the immediate issue the cluster still fails to come up. Hoping kubernetes/test-infra#15348 will get the last bits fixed, or else I need to get some additional debug help from Ben w.r.t. kind inside a pod using DinD. I don't want to make the CI more and more red with each attempt to fix it... I keep thinking I have it, but it's been a few rounds of the same. Open to direction on how to proceed without making a bigger mess 🙁
/retest
/retest
@alexeldeib: The following tests failed, say …
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
It is great that this is finally passing!
Will do, I’m also relieved this finally works :) I’m traveling for personal reasons over the weekend post-KubeCon, but will clean this up for merge Monday if that is okay?
Sure. Let's merge this on Monday.
Force-pushed from d5a3a9b to 3f961fd, then from 3f961fd to d2c8166.
/hold cancel
It shows fine for me 👍
/lgtm /approved
/test pull-kubebuilder-e2e-k8s-1-15-3
@camilamacedo86 I think the bots didn't like the two commands in one line.
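Assuming standard prow-style command parsing, each slash command has to start on its own line of the comment (and, if I recall the plugin's matching correctly, the recognized command is `/approve` rather than `/approved`), so the intended commands would be written as:

```
/lgtm
/approve
```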
```diff
 # You can use --image flag to specify the cluster version you want, e.g --image=kindest/node:v1.13.6, the supported version are listed at https://hub.docker.com/r/kindest/node/tags
-kind create cluster --config test/kind-config.yaml --image=kindest/node:$K8S_VERSION
+kind create cluster -v 4 --retain --wait=1m --config test/kind-config.yaml --image=kindest/node:$K8S_VERSION
```
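For reference, the invocation after this change can be sketched as a small script. The `K8S_VERSION` default below is an assumed example (not from this PR), and the script only assembles and prints the command rather than creating a real cluster:

```shell
#!/bin/sh
# Sketch of the kind invocation after this change. This only builds and
# echoes the command; the K8S_VERSION default is an illustrative assumption.
K8S_VERSION="${K8S_VERSION:-v1.16.2}"
cmd="kind create cluster -v 4 --retain --wait=1m --config test/kind-config.yaml --image=kindest/node:${K8S_VERSION}"
echo "$cmd"
```

`--retain` keeps the cluster around after a failure so logs can be exported, and `-v 4` raises log verbosity for debugging.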
`-v 4 --retain --wait=1m` — I wonder if these are related to debugging?
Re `--wait`, do we have to wait for 1 minute? Is it causing any flakes? It seems the default wait time is 0s.
A kind cluster is known to be fast to bring up. Adding a 1 minute wait for bringing up the cluster would slow it down by ~10%. Look at https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_kubebuilder/1201/pull-kubebuilder-e2e-k8s-1-16-2/1198845676235001858, where the e2e test only takes <10 minutes.
I can remove this if desired, but I think it should stay. I matched it to what kind itself does in e2e: https://github.com/kubernetes-sigs/kind/blob/36eae75e067681658d59a31ed679e67e39d9cfc5/hack/ci/e2e-k8s.sh#L88-L93
Without this, kind won't wait at all for the nodes to come up: https://github.com/kubernetes-sigs/kind/blob/6a0de6124de068cd55b24025cf05e8e4ea172641/pkg/internal/cluster/create/actions/waitforready/waitforready.go#L45-L48
If anything, I can reduce `-v` from 4 to 3, I think.
It's not adding a static 1 minute wait. It's only checking that the nodes actually come up.
Note that it's also a fairly tight loop, so there is not really any additional wait time.
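The behavior discussed above can be sketched as a simple readiness poll. This is an illustrative stand-in, not kind's actual implementation: the timeout is an upper bound, and a node that is already Ready returns immediately.

```shell
#!/bin/sh
# Illustrative sketch of a --wait style readiness poll. Returns as soon as
# the check passes, so the timeout is an upper bound, not a fixed delay.
wait_for_ready() {
  timeout_s=$1
  check=$2
  start=$(date +%s)
  while [ $(( $(date +%s) - start )) -lt "$timeout_s" ]; do
    if "$check"; then
      echo ready
      return 0
    fi
    sleep 1   # kind polls more tightly; coarser here for simplicity
  done
  echo timed-out
  return 1
}

node_is_ready() { true; }   # stand-in for a real node Ready check
wait_for_ready 60 node_is_ready   # prints "ready" immediately
```

Because the check passes on the first iteration, the full 60-second budget is never consumed, which is why `--wait=1m` adds essentially no time to a healthy cluster bring-up.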
/lgtm
Fixes prow e2e optional presubmit (maybe?)