puppetlabs/pvpool


PVPool is a Kubernetes operator that preallocates a collection of persistent volumes of a requested size. Because the volumes are provisioned ahead of time, applications can acquire storage much more rapidly than the underlying provisioner could satisfy a fresh request.

Additionally, PVPool can run an init job against each volume, allowing it to be prepopulated with application-specific data.

Terminology

PVPool exposes two new Kubernetes resources:

  • Pool: A collection of PVs. The PVPool controller tries to guarantee that exactly the number of PVs specified by the replicas field of a pool spec is available in the pool at any given time.
  • Checkout: A request to take a single PV from a referenced pool as a PVC. Once the PV is checked out from the pool, the pool will automatically create a new PV to take its place.
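
For instance, a minimal checkout referencing a pool in its own namespace might look like the following sketch (my-pool and my-checkout are placeholders; the full cross-namespace form appears in the RBAC section below):

apiVersion: pvpool.puppet.com/v1alpha1
kind: Checkout
metadata:
  namespace: default
  name: my-checkout
spec:
  poolRef:
    namespace: default
    name: my-pool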

Installation

PVPool is distributed as a standalone manifest and as a Kustomize base that can be modified to suit your needs.

To install the latest release on your cluster:

$ kubectl apply -f https://github.com/puppetlabs/pvpool/releases/latest/download/pvpool-release.yaml
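
Once the manifest is applied, you can check that the new resource types are registered. The CRD names here are inferred from the pvpool.puppet.com/v1alpha1 API group used throughout this README, so verify them against the manifest if they differ:

$ kubectl get crd pools.pvpool.puppet.com checkouts.pvpool.puppet.com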

To use PVPool as a Kustomize base, you should reference the ZIP archive instead of a particular manifest. Then add it as a resource to your kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://github.com/puppetlabs/pvpool/releases/latest/download/pvpool-release.zip
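
With the resource in place, standard Kustomize tooling applies the overlay as usual, for example:

$ kubectl apply -k .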

We also distribute debug manifests and Kustomize bases. The installations are functionally identical, but in debug mode the container logging is more verbose.

Usage

If you're using Rancher's Local Path Provisioner (or have a storage class named local-path), you can create the pools and checkouts in the examples directory without any modifications. You should end up with a set of PVCs with names starting with test-pool-, their corresponding PVs, and a checked-out PVC whose name starts with test-checkout-a.
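
For example, assuming you've cloned this repository and the example manifests apply cleanly from the examples directory, the whole flow looks something like:

$ kubectl apply -f examples/
$ kubectl get pvc,pv

The pooled PVCs remain bound and idle until a checkout claims one of them.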

Storage class requirements and limitations

PVPool doesn't honor storage classes that have volumeBindingMode: "WaitForFirstConsumer" in the way the Kubernetes documentation describes them. Rather, we always ensure the PVC is bound before putting it into the pool. Because we do the binding using a job, though, any special requirements around how pods are created (e.g., node taints) will still be respected.

Be careful when using storage classes that have a reclaimPolicy other than "Delete". There are no restrictions on churning through many checkouts, so you may find yourself accumulating lots of stale persistent volumes.
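
Both settings live on the StorageClass object itself. As a point of reference, Rancher's Local Path Provisioner (used in the examples above) ships a class along these lines; treat this as an illustrative sketch rather than its exact definition:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

If you do opt for a "Retain" policy, stale volumes will linger in the Released phase, so periodically checking the output of kubectl get pv is a cheap way to spot them.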

Prepopulating volumes

Here's a pool with an init job that writes some data to the PV before making it available to be checked out:

apiVersion: pvpool.puppet.com/v1alpha1
kind: Pool
metadata:
  name: test-pool-with-init-job
spec:
  replicas: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: pvpool-test-with-init-job
  template:
    metadata:
      labels:
        app.kubernetes.io/name: pvpool-test-with-init-job
    spec:
      storageClassName: local-path
      resources:
        requests:
          storage: 50Mi
  initJob:
    template:
      spec:
        backoffLimit: 2
        activeDeadlineSeconds: 60
        template:
          spec:
            containers:
            - name: init
              image: busybox:stable-musl
              command:
              - /bin/sh
              - -c
              - |
                echo 'Wow, such prepopulated!' >/workspace/data.txt
              volumeMounts:
              - name: my-volume
                mountPath: /workspace
    volumeName: my-volume

When you use init jobs with PVPool, note that the pod restartPolicy will always be Never and that the job backoffLimit and activeDeadlineSeconds are limited to 10 and 600, respectively. If you don't specify a volumeName in the initJob, it will default to "workspace". Volumes are always automatically added to the pod spec, but you must provide the relevant mount path for each container you want to use the volume with.
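
For instance, a pared-down initJob relying on those defaults might look like this (illustrative only, assuming the defaulting behavior described above):

  initJob:
    template:
      spec:
        template:
          spec:
            containers:
            - name: init
              image: busybox:stable-musl
              command:
              - /bin/sh
              - -c
              - touch /workspace/.initialized
              volumeMounts:
              - name: workspace
                mountPath: /workspace

Because volumeName is omitted, the volume mount references the default name, workspace.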

RBAC

PVPool takes advantage of a lesser-known Kubernetes RBAC verb, "use", to ensure the creator of a checkout has access to the pool they've requested. This allows the pool to exist opaquely, perhaps even in another namespace, while still letting a minimally trusted user provision the storage they need.

For example, given the following checkout object:

apiVersion: pvpool.puppet.com/v1alpha1
kind: Checkout
metadata:
  namespace: restricted
  name: my-checkout
spec:
  poolRef:
    namespace: storage
    name: restricted-pool

The user creating the checkout will need the following roles bound:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: restricted
  name: checkout-owner
rules:
- apiGroups: [pvpool.puppet.com]
  resources: [checkouts]
  verbs: [get, list, watch, create, update, patch, delete]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: storage
  name: restricted-pool-user
rules:
- apiGroups: [pvpool.puppet.com]
  resources: [pools]
  resourceNames: [restricted-pool]
  verbs: [use]
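
The bindings themselves are ordinary RoleBindings. For a hypothetical user jane, they might look like this (the subject is a placeholder):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: restricted
  name: checkout-owner
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: jane
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: checkout-owner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: storage
  name: restricted-pool-user
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: jane
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: restricted-pool-user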

Contributing

See CONTRIBUTING.md for more information on how to contribute to this project.
