
Feature Request: Add storage class parameter for min iops/throughput scaling #1816

Open
Champ-Goblem opened this issue Aug 30, 2024 · 3 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@Champ-Goblem

When creating a hyperdisk-balanced volume smaller than 6Gi, the automatically configured IOPS value is less than 3000. This causes a problem when resizing the volume, because the minimum allowed IOPS scales with size until 6Gi, at which point the minimum becomes 3000.

It is also not possible to configure a starting IOPS value of 3000 for sizes below 6Gi, because the maximum IOPS for these disks is also less than 3000.

The current error returned from the PVC events is:

Warning  VolumeResizeFailed  18s (x6 over 24s)  external-resizer pd.csi.storage.gke.io  resize volume "pvc-5e695099-2852-4e2f-8895-2b3456fcd7cd" by resizer "pd.csi.storage.gke.io" failed: rpc error: code = Unknown desc = ControllerExpandVolume failed to resize disk: failed to resize zonal volume Key{"pvc-5e695099-2852-4e2f-8895-2b3456fcd7cd", zone: "us-east4-b"}: googleapi: Error 400: Requested provisioned IOPS cannot be smaller than 3000., badRequest
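
To make the failure mode concrete, here is a minimal sketch of the flow that hits this error, assuming the stock pd.csi.storage.gke.io provisioner and the hyperdisk-balanced type (resource names are illustrative):

```yaml
# StorageClass with no explicit IOPS: the driver auto-derives IOPS from size.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperdisk-balanced-small   # illustrative name
provisioner: pd.csi.storage.gke.io
parameters:
  type: hyperdisk-balanced
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
# A 4Gi PVC is provisioned with IOPS below 3000; expanding it later
# (e.g. to 10Gi) trips the "cannot be smaller than 3000" error above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data                       # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: hyperdisk-balanced-small
  resources:
    requests:
      storage: 4Gi
```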

AWS has implemented a storage class parameter that auto-adjusts the minimum based on the supported size:

allowAutoIOPSPerGBIncrease -> When "true", the CSI driver increases IOPS for a volume when iopsPerGB * <volume size> is too low to fit into IOPS range supported by AWS. This allows dynamic provisioning to always succeed, even when user specifies too small PVC capacity or iopsPerGB value. On the other hand, it may introduce additional costs, as such volumes have higher IOPS than requested in iopsPerGB.
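
For reference, a sketch of how this looks with the AWS EBS CSI driver (values are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-io1-auto-iops          # illustrative name
provisioner: ebs.csi.aws.com
parameters:
  type: io1
  iopsPerGB: "50"
  # If iopsPerGB * <volume size> falls below the IOPS range supported
  # by the volume type, the driver raises the request to the minimum
  # instead of failing provisioning.
  allowAutoIOPSPerGBIncrease: "true"
```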

It would be useful if the GCP driver supported something similar, so that automatic resizing from 4Gi to larger sizes succeeds.
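
One possible shape for a GCP analog, using a hypothetical parameter name (nothing like this exists in the driver today):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperdisk-balanced-auto    # illustrative name
provisioner: pd.csi.storage.gke.io
parameters:
  type: hyperdisk-balanced
  # Hypothetical flag: on create and on expansion, clamp the provisioned
  # IOPS up to the minimum allowed for the requested size (3000 at >= 6Gi),
  # so that resizing a small disk past 6Gi no longer fails.
  auto-increase-provisioned-iops: "true"
allowVolumeExpansion: true
```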

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 28, 2024
@dfajmon

dfajmon commented Dec 10, 2024

@mattcary @tyuchn does this have some priority?
We have also seen this behavior when resizing from 4Gi/5Gi to 6Gi, ending up with the wrong IOPS. I believe defaulting to the minimum allowed value would be enough.

@mattcary
Contributor

Thanks for the ping. This is an interesting use case that we hadn't seen before.

We'll be looking into IO provisioning early next year as we roll out volume attribute classes.
