These instructions reflect the latest version of the codebase. For instructions on older versions, please see version links under Version Compatibility.
- Step 1: Bringing up a cluster with local disks
- Step 2: Creating a StorageClass (1.9+)
- Step 3: Creating local persistent volumes
- Step 4: Creating a local persistent volume claim
If the raw local block feature is needed, enable the corresponding feature gate as well:
$ export KUBE_FEATURE_GATES="BlockVolume=true"
Note: Kubernetes versions prior to 1.10 require several additional feature gates to be enabled on all Kubernetes components, because persistent local volumes and other related features were still alpha.
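As an illustration, on a 1.9 cluster the alpha gates for this feature set looked like the following; verify the exact list against your release's documentation before use:
$ export KUBE_FEATURE_GATES="PersistentLocalVolumes=true,VolumeScheduling=true,MountPropagation=true"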
GCE clusters brought up with the cluster/kube-up.sh script in the Kubernetes repository will automatically format and mount the requested Local SSDs, so you can deploy the provisioner with the pre-generated deployment spec and skip to step 4, unless you want to customize the provisioner spec or storage classes.
$ git clone --depth=1 https://github.com/kubernetes/kubernetes
$ cd kubernetes
$ NODE_LOCAL_SSDS_EXT=<n>,<scsi|nvme>,fs cluster/kube-up.sh
$ cd ../
$ kubectl create -f helm/generated_examples/gce.yaml
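As an illustration of the NODE_LOCAL_SSDS_EXT format used above, the following hypothetical invocation requests one SCSI-attached Local SSD formatted with a filesystem:
$ NODE_LOCAL_SSDS_EXT=1,scsi,fs cluster/kube-up.sh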
GKE clusters will automatically format and mount the requested Local SSDs. Please see the GKE documentation for instructions on how to create a cluster with Local SSDs.
Then skip to step 4.
Note: The raw block feature is only supported on GKE Kubernetes alpha clusters.
- Partition and format the disks on each node according to your application's requirements.
- Mount all the filesystems under one directory per StorageClass. The directories are specified in a ConfigMap; see the sketch after this list.
- Configure the Kubernetes API server, controller-manager, scheduler, and all kubelets with KUBE_FEATURE_GATES as described above.
- If not using the default Kubernetes scheduler policy, the following predicates must be enabled:
  - Pre-1.9: NoVolumeBindConflict
  - 1.9+: VolumeBindingChecker
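As a sketch of the ConfigMap mentioned above: recent provisioner versions map each StorageClass to a discovery directory via a storageClassMap entry. The class name and paths below are illustrative; see the provisioner documentation for the authoritative schema of your version:
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: default
data:
  storageClassMap: |
    local-storage:
      hostDir: /mnt/disks   # directory on the host under which volumes are discovered
      mountDir: /mnt/disks  # path at which hostDir is mounted inside the provisioner pod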
Kubernetes provides a script to build and start a lightweight local cluster on Linux. You can try deploying the local-volume-provisioner in this cluster and discovering local volumes on your local machine.
- Create the /mnt/disks directory and mount several volumes into its subdirectories. The example below uses three ram disks to simulate real local volumes:
$ mkdir /mnt/disks
$ for vol in vol1 vol2 vol3; do
mkdir /mnt/disks/$vol
mount -t tmpfs $vol /mnt/disks/$vol
done
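To sanity-check the simulated disks, you can list the resulting tmpfs mounts:
$ mount | grep /mnt/disks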
- Run the local cluster.
$ git clone --depth=1 https://github.com/kubernetes/kubernetes
$ cd kubernetes
$ ALLOW_PRIVILEGED=true LOG_LEVEL=5 FEATURE_GATES=$KUBE_FEATURE_GATES hack/local-up-cluster.sh
See running Kubernetes locally for more information.
To delay volume binding until pod scheduling and to handle multiple local PVs in a single pod, a StorageClass must be created with volumeBindingMode set to WaitForFirstConsumer.
$ kubectl create -f deployment/kubernetes/example/default_example_storageclass.yaml
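For reference, the example file defines a class along these lines; local-storage is the storageClassName used throughout the rest of this walkthrough:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner  # static local PVs have no dynamic provisioner
volumeBindingMode: WaitForFirstConsumer    # delay binding until a pod is scheduled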
- Generate the provisioner's ServiceAccount, Roles, DaemonSet, and ConfigMap spec, and customize it.
This step uses helm templates to generate the specs. See the helm README for setup instructions. To generate the provisioner's specs using the default values, run:
$ helm template ./helm/provisioner > deployment/kubernetes/provisioner_generated.yaml
You can also provide a custom values file instead:
$ helm template ./helm/provisioner --values custom-values.yaml > deployment/kubernetes/provisioner_generated.yaml
- Deploy the provisioner
Once you are satisfied with the content of the provisioner's yaml file, kubectl can be used to create the provisioner's DaemonSet and ConfigMap.
$ kubectl create -f deployment/kubernetes/provisioner_generated.yaml
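To confirm that the provisioner DaemonSet pods are running on each eligible node, a quick check is to list pods; exact pod names depend on the generated spec:
$ kubectl get pods -o wide | grep local-volume-provisioner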
- Check discovered local volumes
Once launched, the external static provisioner will discover and create local-volume PVs.
For example, if the directory /mnt/disks/ contained one directory /mnt/disks/vol1, then the following local-volume PV would be created by the static provisioner:

$ kubectl get pv
NAME                CAPACITY    ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS    REASON    AGE
local-pv-ce05be60   1024220Ki   RWO           Delete          Available             local-storage             26s

$ kubectl describe pv local-pv-ce05be60
Name:           local-pv-ce05be60
Labels:         <none>
Annotations:    pv.kubernetes.io/provisioned-by=local-volume-provisioner-minikube-18f57fb2-a186-11e7-b543-080027d51893
StorageClass:   local-storage
Status:         Available
Claim:
Reclaim Policy: Delete
Access Modes:   RWO
Capacity:       1024220Ki
NodeAffinity:
  Required Terms:
    Term 0:     kubernetes.io/hostname in [my-node]
Message:
Source:
    Type:       LocalVolume (a persistent volume backed by local storage on a node)
    Path:       /mnt/disks/vol1
Events:         <none>
The PV described above can be claimed and bound to a PVC by referencing the local-storage storageClassName. See the Kubernetes documentation for an example PersistentVolume spec.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
Please replace the following elements to reflect your configuration:
- "5Gi" with the required size of the storage volume
- "local-storage" with the name of the storage class associated with the local PVs that should satisfy this PVC
For "Block" volumeMode PVC, which tries to claim a "Block" PV, the following example can be used:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeMode: Block
  storageClassName: local-storage
Note that the only additional field of interest here is volumeMode, which has been set to "Block".
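A pod consumes a Block-mode claim through volumeDevices rather than volumeMounts. A minimal sketch; the pod name and device path are illustrative:
kind: Pod
apiVersion: v1
metadata:
  name: example-block-consumer   # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeDevices:                # raw block devices are attached here, not under volumeMounts
    - name: local-block
      devicePath: /dev/xvda       # device node exposed inside the container (illustrative)
  volumes:
  - name: local-block
    persistentVolumeClaim:
      claimName: example-local-claim   # the Block-mode PVC defined above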