This repository has been archived by the owner on Oct 21, 2020. It is now read-only.
I've been struggling for some time to figure out how to properly do dynamic provisioning of local volumes. I already use local-volume for static provisioning, and thought of using LVM volumes to implement dynamic provisioning somehow; I now noticed this is on the TODO list of said module.
While thinking about an implementation, I ran into a situation that I don't see how to handle. Since this feels like a blocker, I'd like to reach out to the maintainers and the community before launching any development effort.
The scenario is as follows: other dynamic PV provisioners (built on the external-storage library) can be deployed as multiple instances in a cluster, using a leader-election mechanism per PVC to make sure exactly one instance creates the remote volume.
With local-volume, this could work in the basic case: one provisioner instance grabs a PVC, and assuming it can create an LV matching the requested properties, creates one, then posts a PV with the proper spec and node affinity, just as the static provisioner does.
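For reference, such a posted PV could look roughly like the sketch below. All names and paths are hypothetical, and depending on the Kubernetes version the node constraint is expressed either via the `volume.alpha.kubernetes.io/node-affinity` annotation or, as shown here, the `spec.nodeAffinity` field:

```yaml
# Sketch of a PV a local-volume provisioner might post after
# creating an LV on node-1 (names/paths are illustrative only).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example          # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-lvm     # hypothetical StorageClass
  local:
    path: /dev/vg0/lv-example     # hypothetical LV device path
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1          # the node where the LV was created
```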
Things become tricky, though, when a Pod requires access to two volumes. Two PVCs would be posted, and they could be grabbed by provisioner instances running on different nodes, i.e. the LVs would be created on different servers. The PVs would then be posted, but binding of the PVCs and scheduling of the Pod would fail because no single node has both required PVs available.
Running a single provisioner instance that creates LVs on nodes remotely would not work either, because all incoming PVCs are independent at provisioning time, even if they turn out to be related (e.g. requested by the same Pod) at some later stage.
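To make the race concrete, here is a hypothetical Pod of this shape; nothing ties `pvc-data` and `pvc-logs` together when the provisioners pick them up independently:

```yaml
# Hypothetical Pod whose two PVCs could be grabbed by provisioner
# instances on different nodes, stranding the LVs apart.
apiVersion: v1
kind: Pod
metadata:
  name: two-volume-pod
spec:
  containers:
    - name: app
      image: busybox
      volumeMounts:
        - name: data
          mountPath: /data
        - name: logs
          mountPath: /logs
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-data   # LV might get created on node-1...
    - name: logs
      persistentVolumeClaim:
        claimName: pvc-logs   # ...while this LV lands on node-2
```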
I currently fail to see how to work around this. Input more than welcome.
@lichuqiang is working on a design for an LVM dynamic provisioner. It has to leverage the changes from kubernetes/community#1857 to get the node that the scheduler has picked for the pod.
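A minimal sketch of the direction that design takes: binding and provisioning are delayed until a Pod using the PVCs is actually scheduled, so both LVs can be created on the one node the scheduler picked (the chosen node is surfaced to provisioners via the `volume.kubernetes.io/selected-node` annotation on the PVC). The class name and provisioner name below are hypothetical:

```yaml
# Delayed binding: provisioning waits for the scheduler's node choice
# instead of racing per-PVC across provisioner instances.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-lvm               # hypothetical class name
provisioner: example.com/lvm    # hypothetical provisioner name
volumeBindingMode: WaitForFirstConsumer
```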
/CC @msau42 @vishh @saad-ali @wongma7 @jsafrane @ianchakeres @dhirajh