bugfix: get capacity grpc request should have timeout #688
Conversation
Hi @bai3shuo4. Thanks for your PR. I'm waiting for a kubernetes-csi member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
```diff
@@ -475,6 +475,7 @@ func main() {
 		factoryForNamespace.Storage().V1beta1().CSIStorageCapacities(),
 		*capacityPollInterval,
 		*capacityImmediateBinding,
+		*operationTimeout,
```
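The new argument presumably threads the existing operation timeout into the capacity controller so that each GetCapacity gRPC call runs under a context deadline. A minimal sketch of that idea; the names (`Controller`, `getCapacity`, `c.timeout`) are illustrative and not the actual external-provisioner code:

```go
package capacity

import (
	"context"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// Controller is a simplified stand-in for the capacity controller; the real
// type in external-provisioner has many more fields and dependencies.
type Controller struct {
	client  csi.ControllerClient
	timeout time.Duration // value of the -timeout flag (*operationTimeout)
}

// getCapacity issues the GetCapacity gRPC call under a deadline so that an
// unresponsive CSI driver cannot block the controller's worker indefinitely.
func (c *Controller) getCapacity(ctx context.Context, req *csi.GetCapacityRequest) (*csi.GetCapacityResponse, error) {
	ctx, cancel := context.WithTimeout(ctx, c.timeout)
	defer cancel()
	return c.client.GetCapacity(ctx, req)
}
```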
Setting some timeout makes sense, I just wonder whether we should use operationTimeout
for it. It's currently defined as
operationTimeout = flag.Duration("timeout", 10*time.Second, "Timeout for waiting for creation or deletion of a volume")
Perhaps change that into "Timeout for volume operations (creation, deletion, capacity queries)"?
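If the shared flag is reused, its definition might end up looking roughly like this; a sketch of the suggested wording, not necessarily the exact text that was merged:

```go
package main

import (
	"flag"
	"fmt"
	"time"
)

// Same flag as before, with the description broadened to cover capacity
// queries in addition to volume creation and deletion.
var operationTimeout = flag.Duration("timeout", 10*time.Second,
	"Timeout for waiting for volume operations (creation, deletion, capacity queries)")

func main() {
	flag.Parse()
	fmt.Println("operation timeout:", *operationTimeout)
}
```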
Creation and deletion both use the same operationTimeout. I think we should keep this pattern; we don't need to define another timeout, which would be unnecessary.
I agree, but then the description should be changed to include the new usage. Querying capacity is not "creation or deletion of a volume" (current description).
Oh! I missed it, I will add it.
@pohly done~
Force-pushed from 332fa4b to e0bdab2.
/lgtm
Force-pushed from e0bdab2 to 83eb8c7.
/lgtm
@bai3shuo4: you cannot LGTM your own PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from 83eb8c7 to 95d34c0.
@pohly Sorry, I fixed the unit test problem. Why are all these checks stale here? How do I recover them?
/retest
@bai3shuo4: Cannot trigger testing until a trusted user reviews the PR and leaves an `/ok-to-test` message. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
Can you add a release note?
Force-pushed from 95d34c0 to dcae33f.
/retest
@xing-yang Done
@pohly Hi, I think it's ok to merge :)
/lgtm
For approval.
/lgtm |
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: bai3shuo4, pohly, xing-yang
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/kind bug
What this PR does / why we need it:
If the CSI driver on a node fails and no timeout is set for the gRPC call, the capacity sync gets stuck there. Since only one thread processes CSIStorageCapacities, it would be stuck forever. Provisioning and deleting PVs already have a timeout; GetCapacity should have one as well.
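To illustrate the failure mode (a hypothetical snippet, not code from this PR): without a deadline, a GetCapacity call against a stuck driver blocks the single capacity worker forever, whereas a bounded context fails fast with DeadlineExceeded so the work queue can retry later.

```go
package example

import (
	"context"
	"log"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// queryCapacity shows how a per-call deadline keeps a single worker from
// hanging on an unresponsive driver.
func queryCapacity(client csi.ControllerClient, timeout time.Duration) {
	// With a plain context.Background() and no deadline, this call could block forever.
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	if _, err := client.GetCapacity(ctx, &csi.GetCapacityRequest{}); status.Code(err) == codes.DeadlineExceeded {
		// The driver did not answer in time; log it and let the controller's
		// work queue retry later instead of blocking all capacity updates.
		log.Printf("GetCapacity timed out after %v", timeout)
	}
}
```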
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?: