Commit 2728af6: Update pv workflow example
msau42 committed Jan 30, 2017 (1 parent: 71f2016)
Changed file: contributors/design-proposals/local-storage-overview.md (23 additions, 23 deletions)
@@ -184,12 +184,12 @@ spec:
```yaml
$ kubectl get pv
NAME        CAPACITY  ACCESSMODES  RECLAIMPOLICY  STATUS     CLAIM  …  NODE
local-pv-1  100Gi     RWO          Delete         Available             node-1
local-pv-2  10Gi      RWO          Delete         Available             node-1
local-pv-1  100Gi     RWO          Delete         Available             node-2
local-pv-2  10Gi      RWO          Delete         Available             node-2
local-pv-1  100Gi     RWO          Delete         Available             node-3
local-pv-2  10Gi      RWO          Delete         Available             node-3
```
3. The addon will monitor the health of secondary partitions and taint PVs whenever the backing local storage devices become unhealthy (an illustrative sketch of such a tainted PV follows step 7).
4. Alice creates a StatefulSet that uses local PVCs.
@@ -222,19 +222,19 @@ spec:
  - metadata:
      name: www
      labels:
        storage.kubernetes.io/medium: ssd
    spec:
      volume-type: local
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
  - metadata:
      name: log
      labels:
        storage.kubernetes.io/medium: hdd
    spec:
      volume-type: local
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
@@ -244,22 +244,22 @@
```yaml
$ kubectl get pvc
NAME             STATUS  VOLUME      CAPACITY  ACCESSMODES  …  NODE
www-local-pvc-1  Bound   local-pv-1  100Gi     RWO             node-1
www-local-pvc-2  Bound   local-pv-1  100Gi     RWO             node-2
www-local-pvc-3  Bound   local-pv-1  100Gi     RWO             node-3
log-local-pvc-1  Bound   local-pv-2  10Gi      RWO             node-1
log-local-pvc-2  Bound   local-pv-2  10Gi      RWO             node-2
log-local-pvc-3  Bound   local-pv-2  10Gi      RWO             node-3
```
```yaml
$ kubectl get pv
NAME        CAPACITY  …  STATUS  CLAIM            NODE
local-pv-1  100Gi        Bound   www-local-pvc-1  node-1
local-pv-2  10Gi         Bound   log-local-pvc-1  node-1
local-pv-1  100Gi        Bound   www-local-pvc-2  node-2
local-pv-2  10Gi         Bound   log-local-pvc-2  node-2
local-pv-1  100Gi        Bound   www-local-pvc-3  node-3
local-pv-2  10Gi         Bound   log-local-pvc-3  node-3
```
6. If a pod dies and is replaced by a new one that reuses existing PVCs, the pod will be placed on the same node where the corresponding PVs exist. Stateful pods are expected to have a high enough priority that they can preempt lower-priority pods if necessary in order to run on a specific node (see the pod sketch after this list).
7. If a new pod fails to get scheduled while attempting to reuse an old PVC, the StatefulSet controller is expected to give up on the old PVC (delete & recycle) and instead create a new PVC based on some policy. This is to guarantee scheduling of stateful pods.
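Step 3 describes the addon tainting PVs whose backing devices go bad, but this excerpt does not show what a tainted PV looks like. The sketch below is purely illustrative: PV taints are part of this proposal rather than existing Kubernetes API, the structure simply mirrors node taints, and every field value is an assumption.
```yaml
# Hypothetical only: a local PV tainted by the addon after its backing
# device becomes unhealthy. The taints field and key name are assumptions
# modeled on node taints; the proposal may define a different schema.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: local-pv-2
  annotations:
    storage.kubernetes.io/node: node-1
spec:
  volume-type: local
  taints:
  - key: storage.kubernetes.io/unhealthy   # assumed taint key
    effect: NoSchedule                      # intent: keep new claims from binding
```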
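Steps 6 and 7 hinge on a replacement pod referencing the same PVC by name, which forces the scheduler to place it on the node holding the bound local PV. Here is a minimal sketch of that reference, reusing the PVC names from the tables above; the pod name and image are assumptions, and with a StatefulSet the controller creates this pod automatically.
```yaml
# Minimal sketch: a replacement pod that reuses an existing local PVC.
# Because www-local-pvc-1 is bound to a local PV on node-1, the scheduler
# must place this pod on node-1.
kind: Pod
apiVersion: v1
metadata:
  name: web-0              # assumed name; normally created by the StatefulSet controller
spec:
  containers:
  - name: web
    image: nginx           # assumed image
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html
  volumes:
  - name: www
    persistentVolumeClaim:
      claimName: www-local-pvc-1   # existing claim from the tables above
```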
@@ -290,7 +290,7 @@ metadata:
  annotations:
    storage.kubernetes.io/node: k8s-node
  labels:
    storage.kubernetes.io/medium: ssd
spec:
  volume-type: local
  storage-type: block
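The hunk above shows only a fragment of the PV object. For orientation, here is a minimal sketch of how a complete manifest using these fields might look; `volume-type` and `storage-type` are fields proposed in this document rather than existing Kubernetes API, and the name, capacity, and access mode values are assumptions added for illustration.
```yaml
# Hypothetical complete manifest built around the fragment above.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: local-pv-block-1     # assumed name
  annotations:
    storage.kubernetes.io/node: k8s-node
  labels:
    storage.kubernetes.io/medium: ssd
spec:
  volume-type: local         # proposed field: local (non-network) volume
  storage-type: block        # proposed field: raw block device, no filesystem
  capacity:
    storage: 100Gi           # assumed value
  accessModes: [ "ReadWriteOnce" ]
```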