
feat(ZFSPV): scheduler for ZFSPV #8

Merged: 4 commits merged into openebs:master on Nov 6, 2019
Conversation

@pawanpraka1 (Contributor) commented Nov 4, 2019

The scheduler goes through all the nodes as per the topology information and picks the node that has the fewest volumes provisioned in the given pool.

Let's say there are two nodes, node1 and node2, with the pool configuration below:

```
node1
|
|-----> pool1
|         |
|         |------> pvc1
|         |------> pvc2
|-----> pool2
          |------> pvc3

node2
|
|-----> pool1
|         |
|         |------> pvc4
|-----> pool2
          |------> pvc5
          |------> pvc6
```

So if an application uses pool1, as shown in the storage class below, the ZFS driver will schedule the volume on node2, since node2 has one volume in pool1 while node1 has two.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  scheduler: "VolumeWeighted"
  poolname: "pool1"
```
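
For illustration, a PVC that consumes this class (and thereby triggers the volume-weighted placement for pool1) might look like the following; the claim name and size are hypothetical, not taken from the PR:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-zfspv          # hypothetical claim name
spec:
  storageClassName: openebs-zfspv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi         # hypothetical size
```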

Similarly, if an application uses pool2, as shown in the storage class below, the ZFS driver will schedule the volume on node1, since node1 has only one volume in pool2 while node2 has two.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  scheduler: "VolumeWeighted"
  poolname: "pool2"
```

If all nodes have the same number of volumes for the given pool, the scheduler can pick any of them and schedule the PV there.

Signed-off-by: Pawan <pawan@mayadata.io>
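
A minimal sketch of this volume-weighted selection, assuming the driver has already counted the volumes provisioned per candidate node in the requested pool; the function and variable names are illustrative, not the PR's actual code:

```go
package scheduler

// pickVolumeWeightedNode returns the candidate node hosting the fewest
// volumes in the requested pool. topoNodes comes from the CSI topology
// requirement; volumesPerNode would be built by listing the volumes already
// provisioned in that pool (nodes absent from the map count as zero).
// Ties go to whichever node appears first in topoNodes.
func pickVolumeWeightedNode(topoNodes []string, volumesPerNode map[string]int) string {
	if len(topoNodes) == 0 {
		return ""
	}
	selected := topoNodes[0]
	for _, node := range topoNodes[1:] {
		if volumesPerNode[node] < volumesPerNode[selected] {
			selected = node
		}
	}
	return selected
}
```

With the layout above and `poolname: "pool1"`, volumesPerNode would be {node1: 2, node2: 1}, so node2 is returned; for pool2 it would be {node1: 1, node2: 2}, so node1 wins.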

@codecov-io commented Nov 4, 2019

Codecov Report

Merging #8 into master will not change coverage.
The diff coverage is n/a.


```
@@           Coverage Diff           @@
##           master       #8   +/-   ##
=======================================
  Coverage   89.55%   89.55%
=======================================
  Files           1        1
  Lines          67       67
=======================================
  Hits           60       60
  Misses          7        7
```

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d0e97cd...c23484b.

Follow-up commit: "…so that we can extend it further to have other logic, like provisioning based on available pool capacity."

Signed-off-by: Pawan <pawan@mayadata.io>
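
The extensibility that commit aims for might look roughly like this: a strategy interface plus a registry keyed by the storage class's scheduler parameter. This is a hypothetical sketch; none of these names come from the PR:

```go
package scheduler

import "fmt"

// nodeScheduler abstracts a node-selection strategy so that new algorithms
// can be plugged in behind the same call site.
type nodeScheduler interface {
	schedule(topoNodes []string, volumesPerNode map[string]int) (string, error)
}

// schedulers maps the storage-class "scheduler" parameter value to an
// implementation. "VolumeWeighted" would map to the logic sketched earlier;
// a capacity-based entry could be registered here later.
var schedulers = map[string]nodeScheduler{}

// pickNode resolves the strategy named in the storage class and runs it,
// so callers never need to know which algorithm is in use.
func pickNode(strategy string, topoNodes []string, volumesPerNode map[string]int) (string, error) {
	s, ok := schedulers[strategy]
	if !ok {
		return "", fmt.Errorf("unknown scheduler %q", strategy)
	}
	return s.schedule(topoNodes, volumesPerNode)
}
```

Under this structure, adding a capacity-based algorithm would be a one-line registration rather than a change to the provisioning path.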
cmd/main.go — review comment resolved
@kmova kmova merged commit a10dedb into openebs:master Nov 6, 2019
@kmova kmova added this to the v0.1.0 milestone Nov 8, 2019
@pawanpraka1 pawanpraka1 deleted the schedular branch November 8, 2019 10:56