
[WIP] Add PersistentVolume support to Habitat type #195

Closed
wants to merge 7 commits

Conversation

asymmetric
Contributor

@asymmetric asymmetric commented Feb 21, 2018

This PR adds a Persistence key to the Habitat type, which allows users to set up persistent storage.

TODO

  • Mounting a PersistentVolume in each Pod.
  • Validate the Size field, so that Habitat object creation fails if the size is invalid. Right now, the Habitat gets created while the Deployment fails. Either here or as part of Use Schema for validation instead of ad-hoc code #185.
  • React to PV/PVC creations/updates/deletions
    • is this an aspect where a StatefulSet would require less work?
      • No, we would still have to tie the Habitat to the PV.
  • Figure out if we want to support changing the size of the PV
  • Figure out if we want to add support for custom StorageClasses
    • The downside is that the user can get overwhelmed
    • The upside is that it's the precondition for dynamic provisioning
  • Figure out if Spec.Persistence.Enabled is a good idea. Normally, the field presence is enough to signal intent: the presence of Spec.Persistence would mean that the user wants it.
  • Figure out which storage plugin we can/should use
    • HostPath provides no data replication on multi-node clusters
  • Add README

Closes #181.
Ref https://github.com/kinvolk/habitat-operator/pull/195.
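For context, here is a sketch of what a Habitat manifest using the proposed key might look like. All field names below (`persistence`, `enabled`, `size`) and the image name are illustrative only; as the TODO list notes, several of them (e.g. `Enabled`, custom StorageClasses) are still under discussion:

```yaml
apiVersion: habitat.sh/v1beta1
kind: Habitat
metadata:
  name: example-habitat
spec:
  image: kinvolk/example-service   # illustrative image name
  count: 1
  service:
    name: example-service
    topology: standalone
  # Proposed key added by this PR (shape not final):
  persistence:
    enabled: true   # possibly redundant; presence of the key may be enough
    size: 10Gi      # needs validation so invalid sizes fail at Habitat creation
```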

Lorenzo Manacorda added 4 commits February 21, 2018 18:09
Signed-off-by: Lorenzo Manacorda <lorenzo@kinvolk.io>
When the Spec.Persistence.Enabled field is "true".

Signed-off-by: Lorenzo Manacorda <lorenzo@kinvolk.io>
Signed-off-by: Lorenzo Manacorda <lorenzo@kinvolk.io>
So that we can use it to mount a PersistentVolume in the Deployment.

Signed-off-by: Lorenzo Manacorda <lorenzo@kinvolk.io>
@asymmetric
Contributor Author

asymmetric commented Feb 21, 2018

Adding reviewers for early feedback; feel free to ignore if you're only interested in the finished product :)

@jeremymv2

@asymmetric here's an interesting article that describes how, even though PersistentVolumes are certainly available for use in Deployments, they make the most sense in StatefulSets.

https://blog.thecodeteam.com/2017/08/16/technical-dive-statefulsets-deployments-kubernetes/

Thoughts?

Contributor

@lilic lilic left a comment


As I mentioned before I think going the way of StatefulSets would be better IMHO.

One of the problems I see with this is ReadWriteOnce, the access mode you are using: it means the volume can be mounted read-write by only a single node. And because most volume plugin types support only the ReadWriteOnce access mode, we could only support a few volume plugins.
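For reference, the access mode being discussed lives in the PersistentVolumeClaim spec; a minimal sketch (names and size illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce   # volume can be mounted read-write by a single node only
  resources:
    requests:
      storage: 10Gi
```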

@@ -701,6 +744,26 @@ func (hc *HabitatController) newDeployment(h *habv1beta1.Habitat) (*appsv1beta1.
base.Spec.Template.Spec.Volumes = append(base.Spec.Template.Spec.Volumes, *secretVolume)
}

// Mount Persistent Volume, if requested.
if h.Spec.Persistence.Enabled {
pv := &apiv1.Volume{
Contributor


I don't think you want a Volume; judging by this PR's description and the comment above, you want the type PersistentVolume.

Contributor Author


No, this is a Pod's Volume. In a manifest it would be the volumes key:

kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: dockerfile/nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim

Contributor Author


But I can rename the variable from pv to v to make this less confusing.

@asymmetric
Contributor Author

@lilic Isn't the AccessModes issue orthogonal to Deployment vs StatefulSet? Meaning, StatefulSets would have the same issue, because under the hood they also use PersistentVolumes, with their limitations.

If the flag is enabled.

Signed-off-by: Lorenzo Manacorda <lorenzo@kinvolk.io>
Lorenzo Manacorda added 2 commits February 23, 2018 15:57
So that we can make it optional, and remove the `Enabled` sub-key.

Signed-off-by: Lorenzo Manacorda <lorenzo@kinvolk.io>
Signed-off-by: Lorenzo Manacorda <lorenzo@kinvolk.io>
@lilic
Contributor

lilic commented Feb 26, 2018

@asymmetric In a StatefulSet, each Pod gets its own PersistentVolumeClaim, whereas in a Deployment every Pod shares the same PersistentVolumeClaim. This is why a StatefulSet has a PersistentVolumeClaimTemplate: it creates a PersistentVolumeClaim for each Pod from that template. It is also why the AccessMode is relevant when using a Deployment: multiple Pods would be using the same Volume.
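The mechanism described above can be sketched as follows (all names illustrative): the `volumeClaimTemplates` entry makes the StatefulSet controller create one PersistentVolumeClaim per Pod, so ReadWriteOnce stops being a limitation.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  serviceName: example
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: example/image
          volumeMounts:
            - name: data
              mountPath: /data
  # One PVC is created per Pod from this template.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]   # fine here: each claim is used by one Pod
        resources:
          requests:
            storage: 10Gi
```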

@asymmetric
Contributor Author

@lilic Ah I see, thanks! This is a good reason why StatefulSets could make more sense, I agree. Nice to have finally found one! :)

I'll see if it's doable to implement this behaviour with Deployments first, and otherwise I think we have settled the argument.

@lilic
Contributor

lilic commented Feb 26, 2018

@asymmetric

I'll see if it's doable to implement this behaviour with Deployments first, and otherwise I think we have settled the argument.

Hmm...can you explain that a bit? Wouldn't that be just trying to reimplement StatefulSets?

@asymmetric
Contributor Author

@lilic yes, but I was trying to figure out whether it's possible to do that without the other changes in semantics (mainly ordered creation/removal). It seems pointless though; I guess we have to accept those as a tradeoff.

@lilic
Contributor

lilic commented Feb 26, 2018

@asymmetric According to the docs, from 1.7 onwards that works fine: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
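The relevant setting from those docs is `podManagementPolicy`; setting it to `Parallel` (available since Kubernetes 1.7) opts out of the ordered, one-at-a-time Pod creation and removal:

```yaml
apiVersion: apps/v1
kind: StatefulSet
spec:
  podManagementPolicy: Parallel  # default is OrderedReady
```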

@asymmetric
Contributor Author

Closing in favor of StatefulSets.

@asymmetric asymmetric closed this Mar 5, 2018
@asymmetric asymmetric deleted the asymmetric/pv branch March 5, 2018 15:54