This repository has been archived by the owner on Mar 13, 2022. It is now read-only.

Fix bug with Watch and 410 retries #227

Merged: 1 commit merged into kubernetes-client:master on Feb 25, 2021

Conversation

chrisayoub
Contributor

There is a bug that was introduced with the following PR:
#133

When there is a 410 error that needs to be retried and the user specifies any timeout values (timeouts), the code currently returns, when it really should proceed to the next iteration of the for loop and actually do the retry.
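
For context, the control flow under discussion is roughly the following. This is a simplified, standalone sketch rather than the actual Watch.stream() source; the names timeouts and retry_after_410 follow the discussion in this thread, and fetch() is a stand-in for one watch request.

HTTP_STATUS_GONE = 410

def watch_sketch(fetch, timeouts=False):
    # fetch() returns the list of event dicts from one watch request.
    retry_after_410 = False
    while True:
        saw_410 = False
        for event in fetch():
            code = event.get('raw_object', {}).get('code')
            if event['type'] == 'ERROR' and code == HTTP_STATUS_GONE:
                if not retry_after_410:
                    retry_after_410 = True
                    saw_410 = True
                    break  # intended: retry via the outer loop, exactly once
                # The real code raises client.rest.ApiException here.
                raise RuntimeError("410 Gone after retry")
            yield event
        if timeouts or not saw_410:
            # Bug: when the caller supplied a timeout, this returns even
            # though a 410 retry is pending, so the watch appears to end
            # cleanly instead of being retried.
            return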

@k8s-ci-robot
Contributor

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.

It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Feb 22, 2021
@chrisayoub chrisayoub changed the title Fix bug with Watch and 410 retry Fix bug with Watch and 410 retries Feb 22, 2021
@k8s-ci-robot
Contributor

Welcome @chrisayoub!

It looks like this is your first PR to kubernetes-client/python-base 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-client/python-base has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Feb 22, 2021
@chrisayoub
Contributor Author

/assign @mbohlool @roycaihw @yliaog

@k8s-ci-robot
Contributor

@chrisayoub: GitHub didn't allow me to assign the following users: mbohlool.

Note that only kubernetes-client members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

/assign @mbohlool @roycaihw @yliaog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@chrisayoub
Contributor Author

/assign @micw523

@yliaog
Contributor

yliaog commented Feb 24, 2021

If timeout or stop is set, retry should not be attempted? I think the current behavior is expected.

@PaulFurtado

@yliaog If you look at the timeouts variable in the code, it is an argument to this function for a user-specified watch timeout on the request; it does not indicate that a timeout has actually occurred.

So for clients that specify a timeout as an argument to the function, the result is that when a 410 error occurs, the caller gets no exception at all and it appears that the watch has gracefully ended, which makes it impossible for the caller to recover from the 410.
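
To make the impact concrete, here is a hedged caller-side illustration. The watch and client calls are the usual kubernetes Python client API; handle() and last_seen_version are hypothetical.

from kubernetes import client, watch

def handle(event):
    print(event['type'])  # hypothetical event handler

v1 = client.CoreV1Api()
w = watch.Watch()
last_seen_version = "12345"  # hypothetical resourceVersion from a prior list
try:
    for event in w.stream(v1.list_namespaced_pod, namespace="default",
                          resource_version=last_seen_version,
                          timeout_seconds=30):
        handle(event)
except client.rest.ApiException as e:
    if e.status == 410:
        # This is how a caller recovers from "Gone": drop the stale
        # resourceVersion, re-list, and start a new watch. With the silent
        # return described above, the loop simply ends and this handler is
        # never reached.
        last_seen_version = None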

@yliaog
Contributor

yliaog commented Feb 24, 2021

I see. In that case, the timeouts condition is not useful, but it should still break without retry when stop is true. So what about the following:

if self._stop:
    break

@chrisayoub
Contributor Author

This is the previous PR and issue where this timeouts functionality was introduced:
#36
kubernetes-client/python#124

Do you think that change would conflict with this?

@PaulFurtado

Yeah, removing the timeouts condition would break functionality: if a timeout were set, a user might end up with a watch that lasts forever despite having explicitly set a timeout on it.

Thinking about this more, I actually think the best fix would be to never do any type of retry when timeouts is set, because otherwise it is impossible to guarantee that the function exits within the specified timeout. That would remain true to the way this code worked prior to the 410 error handling. To achieve that, line 169 could be altered to include timeouts is not None so that we don't retry a 410 when the timeouts parameter is set.

Additionally, I think maybe we should make that intention more clear so that similar bugs don't get introduced in the future. Before the while loop, we could set a variable like:

disable_retries = timeouts is not None

and then alter this condition to be:

if self._stop or disable_retries:
    break

and change the condition on line 169 to:

if not disable_retries and not retry_after_410 and obj['code'] == HTTP_STATUS_GONE:
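
Put together, the surrounding method would look roughly like the sketch below. This is a simplified illustration rather than the exact diff in this PR; the timeout_seconds detection and the helper names mirror the existing code as discussed above.

def stream(self, func, *args, **kwargs):
    return_type = self.get_return_type(func)
    kwargs['watch'] = True
    kwargs['_preload_content'] = False

    # Retrying would make it impossible to honor a caller-supplied timeout,
    # so all retries (including the single 410 retry) are disabled then.
    disable_retries = ('timeout_seconds' in kwargs)
    retry_after_410 = False
    while True:
        resp = func(*args, **kwargs)
        try:
            for line in iter_resp_lines(resp):
                event = self.unmarshal_event(line, return_type)
                if isinstance(event, dict) and event['type'] == 'ERROR':
                    obj = event['raw_object']
                    # Current request expired: retry once, but never when
                    # the caller asked for a timeout.
                    if not disable_retries and not retry_after_410 \
                            and obj['code'] == HTTP_STATUS_GONE:
                        retry_after_410 = True
                        break
                    reason = "%s: %s" % (obj['reason'], obj['message'])
                    raise client.rest.ApiException(status=obj['code'],
                                                   reason=reason)
                yield event
                if self._stop:
                    break
        finally:
            resp.close()
            resp.release_conn()
        if self._stop or disable_retries:
            break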

@chrisayoub
Contributor Author

@PaulFurtado I updated the PR with your suggested changes, and in my testing they seem to work correctly.

@yliaog can you take another look? Thanks!

@yliaog
Contributor

yliaog commented Feb 24, 2021

I'm not sure how the PR solves the original problem as given in the PR description below: when there is a 410 error and a timeout is given, there will be no retry at all; it returns.

"When there is a 410 error that needs to be retried and the user specifics any timeout values (timeouts), the code currently returns, when it really should proceed to the next iteration of the for loop and actually do the retry."

@chrisayoub
Contributor Author

I think this is just a question of the desired behavior ultimately. When the user specifies a timeout value and we encounter an event of type ERROR, should we yield that event as normal, or should we raise an ApiException? I can easily modify the PR to do whichever seems more correct here.

Ultimately, the desired fix is that under these circumstances, we do not simply return from the function, and we either yield the event or raise an Exception.
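
For illustration, the two behaviors being weighed for the error branch look roughly like this. YIELD_ERRORS is a hypothetical switch used only to contrast the options; the merged change ends up taking the raise path.

if event['type'] == 'ERROR':
    obj = event['raw_object']
    if YIELD_ERRORS:
        # Option A: pass the ERROR event through and keep the generator
        # alive, leaving it to the caller to inspect obj['code'].
        yield event
    else:
        # Option B: fail loudly so the caller can catch ApiException,
        # re-list, and restart the watch.
        reason = "%s: %s" % (obj['reason'], obj['message'])
        raise client.rest.ApiException(status=obj['code'], reason=reason)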

@PaulFurtado

@yliaog With the not disable_retries condition added to the if statement on line 169, a 410 that occurs while a timeout is set hits the else block of that if statement, which immediately raises the ApiException indicating that the 410 occurred, so the caller can deal with the 410 error.

@@ -166,7 +166,7 @@ def stream(self, func, *args, **kwargs):
obj = event['raw_object']
# Current request expired, let's retry,
# but only if we have not already retried.
if not retry_after_410 and \
Contributor

please add a comment about the added condition "not disable_retries and not"

Contributor Author

Done, see lines 154-155 as well

@yliaog
Contributor

yliaog commented Feb 24, 2021

Looks good to disable retry for the 410 case. Could you please add a test case for it?

@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Feb 25, 2021
Comment on lines +291 to +293
# No events are generated when no initial resourceVersion is passed
# No retry is attempted either, preventing an ApiException
assert not list(w.stream(fake_api.get_thing))
Contributor Author

I updated the existing test case, which was not actually correct for the current version of the code with retries. Additionally, I added 2 more test cases to test both code paths that my PR is related to.


w = Watch()
try:
    for _ in w.stream(fake_api.get_thing, timeout_seconds=10):
Contributor

no resource_version?

Contributor Author

@chrisayoub commented Feb 25, 2021

For testing this code path and condition, you do not actually need to supply resource_version here, as an exception is raised and only the code in the watch's finally block will execute. resource_version is needed in the other test because the code path goes beyond the finally block.
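
A rough sketch of what such a test looks like, assuming the Mock-based style used in watch_test.py; the import paths, the V1ThingList doc string, and the exact 410 payload are assumptions, and the real fixtures in the repo differ in detail.

from unittest.mock import Mock

from kubernetes import client
from kubernetes.watch import Watch

fake_resp = Mock()
fake_resp.close = Mock()
fake_resp.release_conn = Mock()
fake_resp.read_chunked = Mock(return_value=[
    '{"type": "ERROR", "object": {"code": 410, '
    '"reason": "Gone", "message": "too old resource version"}}\n'])

fake_api = Mock()
fake_api.get_thing = Mock(return_value=fake_resp)
fake_api.get_thing.__doc__ = ':return: V1ThingList'

w = Watch()
try:
    # timeout_seconds disables the 410 retry, so the ERROR event should
    # surface as an ApiException instead of being silently swallowed.
    for _ in w.stream(fake_api.get_thing, timeout_seconds=10):
        pass
    raise AssertionError("expected ApiException for the 410 ERROR event")
except client.rest.ApiException as e:
    assert e.status == 410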

Contributor

ok

@yliaog
Contributor

yliaog commented Feb 25, 2021

Thanks for the PR.

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Feb 25, 2021
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: chrisayoub, yliaog

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 25, 2021
@chrisayoub
Contributor Author

@yliaog it looks like the failed CI run is preventing this from automatically merging:
https://travis-ci.org/github/kubernetes-client/python-base/jobs/760405517

However, this test case isn't even in this repo? Am I supposed to do anything else to proceed further in getting this merged?

Thank you!

@yliaog
Contributor

yliaog commented Feb 25, 2021

Closing the PR and reopening to trigger another CI run.

@yliaog
Contributor

yliaog commented Feb 25, 2021

/close

@k8s-ci-robot
Contributor

@yliaog: Closed this PR.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@yliaog
Contributor

yliaog commented Feb 25, 2021

/open

@yliaog
Contributor

yliaog commented Feb 25, 2021

/reopen

@k8s-ci-robot
Contributor

@yliaog: Reopened this PR.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Feb 25, 2021
@k8s-ci-robot k8s-ci-robot merged commit 060cac1 into kubernetes-client:master Feb 25, 2021
@chrisayoub chrisayoub deleted the fix_watch_bug branch May 2, 2021 02:01