
Dynamically create file systems #310

Closed
wochanda opened this issue Jan 22, 2021 · 18 comments

Comments

@wochanda

With the newly introduced dynamic provisioner, the driver lets users associate an EFS File System (FS) with a Storage Class (SC); Persistent Volume Claims (PVCs) then result in EFS Access Points (APs) being provisioned underneath that file system with unique directories/UIDs/GIDs for privacy. EFS currently has a hard limit of 120 APs per FS, so environments that need more than 120 volumes must create multiple FSs and associate them with multiple SCs.
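
For reference, dynamic provisioning today ties each storage class to a single pre-created file system, roughly like the following (a minimal sketch of the existing efs-ap mode; the fileSystemId is a placeholder):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap            # access points under one existing file system
  fileSystemId: fs-0123456789abcdef0  # placeholder; the FS must be created out of band
  directoryPerms: "700"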

We can enhance this by allowing the provisioner both to create file systems and to manage multiple file systems per storage class, so that users can provision up to thousands of PVCs per SC without manual intervention.

The SC definition could look something like this (with new fields added to control file system creation):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-fs-ap
  maxFileSystems: 50
  maxApsPerFileSystem: 100        # 50 FS * 100 APs = 5k potential volumes
  fileSystemMode: generalPurpose  # generalPurpose or maxIo
  mountTargetSubnets: []          # subnet IDs (one per AZ) for creating mount targets; if not specified, use default subnets
  securityGroup: sg-123456        # SG to apply to mount targets; defaults to the default security group
  throughputMode: bursting        # bursting (default) or an amount of provisioned throughput, e.g. 100 for 100 MB/s
  lifecyclePolicy: 7              # default 30d; 0 to turn off lifecycle management
  autobackup: true                # default true; false to turn off AWS Backup
  tags: []                        # tags added to both FS and AP
  gidRangeStart: "1000"           # optional
  gidRangeEnd: "2000"             # optional
  directoryPerms: "777"           # optional
  basePath: "/data"               # optional
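
From the user's side, consuming such a class would look the same as today. A PVC sketch like the following (name and size are illustrative) would let the driver place the volume on a file system with spare access-point capacity, creating a new one if needed:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi  # required by the API; EFS is elastic and does not enforce the size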

@wongma7 mentioned this issue Feb 4, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Apr 22, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label May 22, 2021
@gabegorelick

This would be useful for more isolation than what access points provide.

@wongma7
Contributor

wongma7 commented Jun 4, 2021

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label Jun 4, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Sep 2, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Oct 2, 2021
@iusergii

Am I correct that this would allow having the same user experience as with EBS?

@kbasv removed the lifecycle/rotten label Oct 27, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 25, 2022
@mimmus

mimmus commented Feb 8, 2022

The hard limit of 120 access points is extremely limiting in Kubernetes, since it essentially means you're limited to 120 PVCs; to scale further you have to create another EFS file system with another storage class. I think this defeats the scalability that EFS is supposed to offer and requires manual intervention.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Mar 10, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@itsvshreyas

Any idea why the hard limit is 120 APs? Was this limit chosen based on performance statistics?

@mimmus

mimmus commented Apr 26, 2022

It's an AWS hard limit based on use cases.
I had a call with EFS engineers; they asked for details about this specific use case but didn't give me any timeframe.

@itsvshreyas

We have a use case that requires spinning up more than 1,000 Persistent Volumes. AWS does not support nfs-subdir-external-provisioner, and switching to the AWS EFS CSI driver, which supports dynamic provisioning, runs into this hard limit. We have opened a case with the EFS engineers to see whether this can change in future versions. We will have to wait and see.

@elgalu

elgalu commented Jul 11, 2023

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label Jul 11, 2023
@Almenon

Almenon commented Oct 20, 2023

@leakingtapan Can we re-open this and freeze the lifecycle, please? Based on the number of 👍 it's clear this is desirable; it would rank as the second most requested feature by 👍s if it were open.

To give some additional context, cost tracking is important to us, so I'd like to be able to create different EFS file systems for different cost categories instead of creating a single one and reusing access points (a rough sketch of what that could look like is at the end of this comment).

Admittedly, you could create multiple file systems with Terraform, but I have three reasons why I'd prefer the driver handle it:

  1. That way everything (filesystem, access point, PV, PVC, PV mount) is grouped together
  2. It matches the behavior of EBS CSI driver
  3. Personally, I prefer using Kubernetes over Terraform
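
For instance, with the parameters proposed above, each cost category could get its own storage class that tags the file systems it creates (hypothetical sketch; the efs-fs-ap mode and tags field are only proposed, and the exact tag format is TBD):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-analytics
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-fs-ap     # proposed mode from this issue
  maxFileSystems: 10
  maxApsPerFileSystem: 100
  tags: "cost-category=analytics" # hypothetical; tag format not yet defined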

@ns-mkusper

ns-mkusper commented Mar 21, 2024

@leakingtapan I'd also like to request that this be re-opened and the lifecycle frozen, for the same reasons @Almenon mentioned.
