
bugfix: get capacity grpc request should have timeout #688

Merged
3 changes: 2 additions & 1 deletion cmd/csi-provisioner/csi-provisioner.go
@@ -74,7 +74,7 @@ var (
workerThreads = flag.Uint("worker-threads", 100, "Number of provisioner worker threads, in other words nr. of simultaneous CSI calls.")
finalizerThreads = flag.Uint("cloning-protection-threads", 1, "Number of simultaneously running threads, handling cloning finalizer removal")
capacityThreads = flag.Uint("capacity-threads", 1, "Number of simultaneously running threads, handling CSIStorageCapacity objects")
-	operationTimeout = flag.Duration("timeout", 10*time.Second, "Timeout for waiting for creation or deletion of a volume")
+	operationTimeout = flag.Duration("timeout", 10*time.Second, "Timeout for waiting for volume operation (creation, deletion, capacity queries)")

enableLeaderElection = flag.Bool("leader-election", false, "Enables leader election. If leader election is enabled, additional RBAC rules are required. Please refer to the Kubernetes CSI documentation for instructions on setting up these RBAC rules.")

@@ -475,6 +475,7 @@ func main() {
factoryForNamespace.Storage().V1beta1().CSIStorageCapacities(),
*capacityPollInterval,
*capacityImmediateBinding,
+*operationTimeout,
Contributor
Setting some timeout makes sense, I just wonder whether we should use operationTimeout for it. It's currently defined as

        operationTimeout     = flag.Duration("timeout", 10*time.Second, "Timeout for waiting for creation or deletion of a volume")

Perhaps change that into "Timeout for volume operations (creation, deletion, capacity queries)"?

Contributor Author

Creation and deletion both use the same operationTimeout. I think we should keep this pattern; we don't need to define extra timeouts unnecessarily.

Contributor

I agree, but then the description should be changed to include the new usage. Querying capacity is not "creation or deletion of a volume" (current description).

Contributor Author

Oh! I missed it, I will add it.

Contributor Author

@pohly done~

)
legacyregistry.CustomMustRegister(capacityController)

7 changes: 6 additions & 1 deletion pkg/capacity/capacity.go
@@ -91,6 +91,7 @@ type Controller struct {
cInformer storageinformersv1beta1.CSIStorageCapacityInformer
pollPeriod time.Duration
immediateBinding bool
+timeout time.Duration

// capacities contains one entry for each object that is
// supposed to exist. Entries that exist on the API server
@@ -164,6 +165,7 @@ func NewCentralCapacityController(
cInformer storageinformersv1beta1.CSIStorageCapacityInformer,
pollPeriod time.Duration,
immediateBinding bool,
+timeout time.Duration,
) *Controller {
c := &Controller{
csiController: csiController,
@@ -178,6 +180,7 @@
cInformer: cInformer,
pollPeriod: pollPeriod,
immediateBinding: immediateBinding,
+timeout: timeout,
capacities: map[workItem]*storagev1beta1.CSIStorageCapacity{},
}

@@ -596,7 +599,9 @@ func (c *Controller) syncCapacity(ctx context.Context, item workItem) error {
Segments: item.segment.GetLabelMap(),
}
}
-resp, err := c.csiController.GetCapacity(ctx, req)
+syncCtx, cancel := context.WithTimeout(ctx, c.timeout)
+defer cancel()
+resp, err := c.csiController.GetCapacity(syncCtx, req)
if err != nil {
return fmt.Errorf("CSI GetCapacity for %+v: %v", item, err)
}
2 changes: 2 additions & 0 deletions pkg/capacity/capacity_test.go
@@ -58,6 +58,7 @@ func init() {
}

const (
+timeout = 10 * time.Second
driverName = "test-driver"
ownerNamespace = "testns"
csiscRev = "CSISC-REV-"
@@ -1361,6 +1362,7 @@ func fakeController(ctx context.Context, client *fakeclientset.Clientset, owner
cInformer,
1000*time.Hour, // Not used, but even if it was, we wouldn't want automatic capacity polling while the test runs...
immediateBinding,
+timeout,
)

// This ensures that the informers are running and up-to-date.