[multikueue] Cluster connection monitoring and reconnect. #1806
Conversation
I tested that this PR fixes #1787; please note that in the PR description. The watches are closed after 30 min (as indicated by the API server).
I'm a little bit curious about this: is this desired / can we do something about it? EDIT: if we can avoid logging this as an error I think it is a win, but I'm happy to leave it for a follow-up if there are any complications.
Overall LGTM. Big plus for the e2e test.
My main ask would be to try to delegate the backoff to the built-in mechanisms, if feasible. If not feasible, please describe why.
// retryAfter returns an exponentially increasing interval between
// retryIncrement and 2^retryMaxSteps * retryIncrement
func retryAfter(failedAttempts uint) time.Duration {
Ideally, I would leave the backoff calculations to the built-in mechanisms, if feasible.
Not being able to connect should not be seen as a reconcile error, in my opinion, as it is not related to k8s state.
Also, with this we maintain control over the retry timing.
Not being able to connect should not be seen as a reconcile error, in my opinion, as it is not related to k8s state.

In most cases, when Kueue sends a request from a node to the kube API server and the API server drops the request, we handle the failure as a reconcile error. However, that is an "internal" (within-cluster) connect error; for external connect errors a longer baseDelay may indeed be preferred.

Also, with this we maintain control over the retry timing.

I see; I just have a preference for the KISS principle, and we could introduce our own timing mechanism later, once proven to be needed. However, I'm on the fence here, because for communication with an external cluster a higher baseDelay may indeed be preferred. WDYT @alculquicondor ?
In case we want to control the timings, is it much of a complication to use the standard rate-limiting queue, like for example here? Then we could pass the baseDelay and maxDelay. However, if this is a big complication, I'm fine as is.
You could also use the Backoff struct from k8s.io/apimachinery/pkg/util/wait.
But this is on the nit side.
@trasc if you prefer to keep the custom timings, I'm fine; just do a quick review of whether we can simplify the code by using the rate limiter or the package suggested by Aldo, so that we avoid reinventing the wheel. If you find this is the simplest approach, I'm OK, but please review the options.
I did look at Backoff in k8s.io/apimachinery/pkg/util/wait, but it's a bit overkill for what we are doing here.
Another thing I was thinking of was to simply double the time since the cluster was declared inactive: if it failed 5 min ago, we try now, and if it fails again we retry in 5 min. The plus side is that we don't need to keep internal state, but the behavior is harder to predict.
pkg/controller/admissionchecks/multikueue/multikueuecluster_test.go
/approve
Leaving the LGTM to @mimowo
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: alculquicondor, trasc. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/lgtm
LGTM label has been added. Git tree hash: 6d592aaa04fcc7d278bc685363066bb8e7677935
/cherry-pick release-0.6
@alculquicondor: new pull request created: #1809. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/release-note-edit
…-sigs#1806) * [multikueue] Cluster connection monitoring and reconnect. * Review Remarks
/release-note-edit
What type of PR is this?
/kind feature
What this PR does / why we need it:
[multikueue] Cluster connection monitoring and reconnect. Try to reconnect to the worker cluster when any of its watch loops ends.
Which issue(s) this PR fixes:
Fix #1787
Relates to #693
Special notes for your reviewer:
Does this PR introduce a user-facing change?