ci-kubernetes-e2e-kubeadm-gce pulling ci/latest not bazel build results #6979
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: leblancd
Assign the PR to them by writing. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing

/ok-to-test
Force-pushed from 16aa06c to 9b12cd3
I looked at ci-kubernetes-e2e-kubeadm-gce and it is fully passing today.
https://k8s-testgrid.appspot.com/sig-cluster-lifecycle-all#kubeadm-gce
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kubeadm-gce/9623
Are we sure the problem this PR is attempting to fix is still an issue?
jobs/config.json (Outdated)

@@ -8775,8 +8775,6 @@
     "--gcp-zone=us-central1-f",
     "--kubeadm=ci",
     "--kubernetes-anywhere-dump-cluster-logs=true",
-    "--kubernetes-anywhere-kubelet-ci-version=latest",
-    "--kubernetes-anywhere-kubernetes-version=ci/latest",
You need a kubernetes version specified. Even the pull job has it specified.
Thanks for catching that! I'll add that back in.
…acts

The ci-kubernetes-e2e-kubeadm-gce test jobs are consistently failing. If you look at test results and logs for a given failing test job, and then compare it to the corresponding prior (prerequisite) bazel build job, you can see that the build job is pushing the build artifacts in the proper gs://kubernetes-release/bazel/... storage bucket, but the test job is extracting (or attempting to extract) build results from ci/latest. The test job also seems to be using inconsistent versions of kubeadm/kubelet/kubernetes.

This is very likely one of the causes of the CI test outages described in kubernetes/kubernetes#59762.

This failure mode is also seen in other ci-kubernetes-e2e-XXXX test cases, but I'd like to try a fix first on one representative test job, and then replicate the fix to other test jobs if it works.

Fixes kubernetes#6978
Force-pushed from 9b12cd3 to daa4ba2
@jessicaochen - Yes, I believe that this is still an issue, although now it's reverted to being an intermittent issue. In the last 10 runs, it's failed 2 times. I believe that there's a race condition involved here, e.g. when this periodic CI build + test job is run vs. the version that ci/latest points to. If you review the test results in the recent past, there have been a lot of intermittent failures with the same signature. For example, if you look at the test result page for the most recent failure:

and

Here, the "Version" field matches the GCS storage bucket that the prior build job used to store its build results (e.g. Debian packages that we need to download, unpack, and test with). The "job-version" field matches whatever ci/latest happens to be at the moment. And since this job is configured with "--kubernetes-anywhere-kubelet-ci-version=latest", that's going to direct kubernetes-anywhere to where it looks for the build results GCS bucket.

Every time that this mismatch happens, kubernetes-anywhere is looking for build results in the wrong location, and it ends up attempting to rsync Debian packages from a non-existent GCS bucket, so kubeadm and kubelet are never installed. If we delete the "--kubernetes-anywhere-kubelet-ci-version=latest" flag from the job config, and make sure that the kubernetes-anywhere kubelet version points to the same version that we're using for kubeadm (i.e. the version that was just built), then we'll eliminate the possibility of this mismatch.
Hi @jessicaochen, @BenTheElder - Maybe we should hold off on this change and just keep it in our back pockets?
Please note that the 2 most recent CI failures appear to be unrelated to this proposed change (i.e. this change wouldn't help):

Seems fair to hold off given your provided reasons. :)

@leblancd: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

I think this isn't quite valid now after some other changes I've made.

/lifecycle stale
Seems like this one is irrelevant now per Ben's comments? Ok to close?

/close

@BenTheElder - Yes, this was superseded by your changes. Thanks for closing this.
The ci-kubernetes-e2e-kubeadm-gce test jobs are intermittently failing. If you look at test results and logs for a given failing test job, and then compare it to the corresponding prior (prerequisite) bazel build job, you can see that the build job is pushing the build artifacts in the proper gs://kubernetes-release/bazel/... storage bucket, but the test job is extracting (or attempting to extract) build results from ci/latest. The test job also seems to be using inconsistent versions of kubeadm/kubelet/kubernetes.
This is very likely one of the causes of the CI test outages described in kubernetes/kubernetes#59762.
This failure mode is also seen in other ci-kubernetes-e2e-XXXX test cases, but I'd like to try a fix first on one representative test job, and then replicate the fix to other test jobs if it works.
Fixes #6978
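The intent of the fix, as described in this PR, can be sketched abstractly: pin every component to the version the prerequisite build job produced instead of letting the test job resolve ci/latest on its own. The function and version names below are hypothetical illustrations, not real test-infra code.

```python
# Sketch of the failure mode vs. the intended fix (hypothetical helpers).

def job_versions_before_fix(built_version, resolve_ci_latest):
    # Before: kubelet follows ci/latest (racy), kubeadm follows the fresh build.
    return {"kubeadm": built_version, "kubelet": resolve_ci_latest()}

def job_versions_after_fix(built_version, resolve_ci_latest):
    # After: everything is pinned to the version that was just built, so a
    # concurrent bump of ci/latest can no longer cause a version mismatch.
    return {"kubeadm": built_version, "kubelet": built_version}

def moved_latest():
    # Simulates ci/latest having advanced past the version we just built.
    return "v1.10.0-alpha.3.1010"

before = job_versions_before_fix("v1.10.0-alpha.3.1000", moved_latest)
after = job_versions_after_fix("v1.10.0-alpha.3.1000", moved_latest)

assert before["kubeadm"] != before["kubelet"]  # mismatch: the failure mode
assert after["kubeadm"] == after["kubelet"]    # pinned: consistent versions
```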