Retry clone kill which is flaky #1950
Conversation
/hold
/test all
line 1319:
err = utils.WaitTimeoutForPodReady(f.K8sClient, utils.UploadPodName(targetPvc), targetNs.Name, utils.PodWaitForTime)
This wait has its polling interval set to 2 seconds, so in the worst case it can miss any action that finished within 2 seconds. Under good conditions our upload server can handle tinyCoreIso in milliseconds.
The test `[test_id:4000] Create a data volume and then clone it while killing the container and verify retry count` has been pretty flaky lately. It was failing when attempting to connect to the upload server pod to kill it. This PR adds a retry. Signed-off-by: Alexander Wels <awels@redhat.com>
Retry connecting to the upload server pod to kill the process. Signed-off-by: Alexander Wels <awels@redhat.com>
awels force-pushed from 38a3edf to 6ce07d7
Signed-off-by: Alexander Wels <awels@redhat.com>
awels force-pushed from 6ce07d7 to 2d604b6
/retest
/test all
The hpp destructive lane has a problem in BeforeEach; it looks like it cannot find CDI.
/test all
/retest
/test all
Yeah, the destructive lane has a flake where it sometimes messes up the CDI object, which is one of the main reasons we put it in a separate lane to start with. I am mainly interested in seeing several runs where the clone test that is failing often right now doesn't fail.
/test pull-containerized-data-importer-e2e-k8s-1.19-ceph
/hold cancel
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: awels. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment
/lgtm
/retest
Signed-off-by: Alexander Wels <awels@redhat.com>
What this PR does / why we need it:
The test `[test_id:4000] Create a data volume and then clone it while killing the container and verify retry count` was pretty flaky lately. The likely culprit was the small amount of data being cloned, which causes the pods to be removed before we have a chance to connect and issue the kill command.
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format; will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
Release note: