Feature Request: Pooled PersistentVolumeClaims #3417
Worth noting that this feature request was also recently opened against Argo: argoproj/argo-workflows#4130. I've tried searching for "kubernetes pvc pool" and "kubernetes storage pool" but haven't found anything. I wonder if this would also be worth looking at as a platform feature and raising with the k8s team.
That makes sense. Perhaps the ideal solution would look something like Kubernetes having a new Resource representing a pooled PVC, and a corresponding workspace binding in the PipelineRun. Something like the following:

```yaml
---
apiVersion: v1
kind: PooledPersistentVolumeClaim
metadata:
  name: my-pvc-pool
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
spec:
  pipelineRef:
    name: my-pipeline-run
  workspaces:
    - name: my-workspace
      pooledPersistentVolumeClaim:
        claimName: my-pvc-pool
```

Without native k8s support for this, perhaps the other approaches here are:
Curious about @skaegi's thoughts per our Slack conversation in https://tektoncd.slack.com/archives/CLCCEBUMU/p1603199756169700?thread_ts=1603139560.165600&cid=CLCCEBUMU
This would be desirable for other use cases as well. We've had requests to support more Workspace types, for example, and this could be one way to do that. It could also open the door to Workspace types that aren't Volume-backed, such as using GCS / S3 buckets instead.
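Purely for illustration (this is not an existing Tekton API): a sketch of what a non-Volume-backed workspace binding might look like if Workspace types were extended along those lines. The `bucket` workspace type, its field names, and the bucket URL are all assumptions.

```yaml
# Hypothetical sketch only -- "bucket" is not a real Tekton workspace type.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: bucket-workspace-example
spec:
  pipelineRef:
    name: my-pipeline
  workspaces:
    - name: my-workspace
      bucket:                            # assumed field; does not exist today
        location: gs://my-ci-artifacts   # assumed bucket URL
        secretRef:
          name: gcs-credentials          # assumed Secret holding credentials
```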
I agree with this (from the Slack thread). Storage pooling for PVCs is what cloud providers already do, in the layer underneath PVCs/PVs.
I'm definitely no expert on PVs. By "the layer under PVC/PV", are you referring to storage classes? If so, how would this work? Are they able to provision stateful volumes? E.g., can I request a new PV that retains its file system from the last time I used it?
I guess you could do some really clever re-use of PVCs, but that is essentially re-doing what a storage provisioner does. It might even be possible to write a storage provisioner that re-uses another storage provisioner's PVs (or underlying storage); I don't know of active work in that area, but that could be cool ;) In our world we solved the problem a little differently. We found that in our provider-managed clusters, the PVs allocated when using the "default" storage class (and all the other storage classes) were ridiculously slow and expensive. They're generally designed for 500G+ of storage and double-digit IOPS, and can take minutes to allocate. Being cheap and wanting good performance, we wrote a "local" provisioner that does pseudo-dynamic provisioning. Our integration to use it as the backing storage for workspaces is a bit messy, but some of the work that @jlpettersson did really helps. Maybe this would too -- #2595 (comment) A few weeks back I wondered aloud whether Tekton could optionally package a basic storage provisioner like ours (e.g. as a new experimental project), since otherwise using "workspaces" is painful/expensive, but I never took it further.
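For context, a custom provisioner like the one described above is typically surfaced to workloads through a StorageClass. A minimal sketch follows; the provisioner name `example.com/pseudo-local` is an assumption, while the StorageClass fields themselves are standard Kubernetes API.

```yaml
# Minimal StorageClass sketch for a custom "local" provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-local
provisioner: example.com/pseudo-local    # hypothetical provisioner name
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer  # delay binding until a pod is scheduled
```

PVCs backing workspaces would then reference `storageClassName: fast-local` instead of the provider default.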
Hey friends, we've had a need for this for quite some time and I've finally decided to take a stab at implementing it here: https://github.com/puppetlabs/pvpool. It doesn't use the approach of layering storage provisioners because some of those APIs just didn't seem to fit the model well (for example, it can't support static binding). I haven't taken a look at what we're going to need to do to integrate it with Tekton (we've been using mutating admission webhooks to, ehm, make this process "easy," because I'm behind on learning the new APIs), but it will be on my plate in the next few weeks. Hopefully there are minimal (or no) changes to Tekton needed to get this working -- either way I'll follow up with some additional thoughts as I add this into our product. Feel free to reach out to me on Slack too and I'd be happy to discuss further!
We rolled the implementation of this out in our product in the last week or so, so I thought I'd close the loop on this. Because we have a "supervisor" controller that creates PipelineRuns for us, the implementation here was actually quite straightforward using the workspaces feature. Basically, we end up with something like this:
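The original snippet isn't preserved in this thread. As a rough, illustrative sketch only (the claim name, pipeline name, and workspace name below are assumptions), binding a pool-provisioned PVC through a regular workspace might look like:

```yaml
# Illustrative only: assumes the pool controller (e.g. pvpool) has already
# produced a bound PVC named "my-pool-checkout" for this run.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: my-run-
spec:
  pipelineRef:
    name: my-pipeline
  workspaces:
    - name: shared-data
      persistentVolumeClaim:
        claimName: my-pool-checkout   # assumed name of the checked-out PVC
        readOnly: true                # matches the ROX usage described below
```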
And that's it! Bind it through the pipeline to tasks as needed. Also, our PVCs are ROX, so we turn off the affinity assistant. But otherwise it "just works," which is really nice.
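For reference, turning off the affinity assistant is done through Tekton's `feature-flags` ConfigMap; a sketch, using the flag name as it existed around the time of this thread:

```yaml
# Disable the affinity assistant so ROX PVCs can be shared across TaskRuns
# without Tekton trying to co-schedule their pods.
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  disable-affinity-assistant: "true"
```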
Feature request
There should be a way to select a `PersistentVolumeClaim` from a pool as the workspace binding when creating `PipelineRun`s and `TaskRun`s. Ideally, there should be a way to dynamically grow the pool size; if there are no PVCs available in the pool, a new one gets created dynamically and added to the pool. This implies that there should be some way to expire these PVCs as well.
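None of the following exists in Kubernetes or Tekton; it is only a sketch of how dynamic growth and expiry might be expressed on top of the pooled-PVC resource sketched earlier in the thread (all field names are assumptions).

```yaml
# Hypothetical pooled-PVC resource with growth and expiry knobs.
apiVersion: v1
kind: PooledPersistentVolumeClaim
metadata:
  name: my-pvc-pool
spec:
  minAvailable: 2        # keep at least two unbound PVCs ready to hand out
  maxSize: 10            # never grow the pool beyond ten PVCs
  expireAfter: 72h       # reclaim PVCs that have been unused this long
  template:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 8Gi
```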
Use case
There are a couple of use cases that I can think of: