feat(topology): adding support for custom topology keys #94
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##           master      #94      +/-   ##
==========================================
- Coverage   23.57%   22.90%    -0.68%
==========================================
  Files          14       14
  Lines         475      489       +14
==========================================
  Hits          112      112
- Misses        362      376       +14
  Partials        1        1
```
There are setups where the nodename differs from the hostname. The driver uses the nodename and tries to set the "kubernetes.io/hostname" node label to that nodename, which fails whenever the two differ. This change switches to a unique key so that the driver can set it as a node label without modifying or touching the existing node labels. From now on, the driver will use the "openebs.io/nodename" key to set the PV node affinity. Old volumes carry the "kubernetes.io/hostname" affinity and will keep working: after PR openebs#94 the driver supports all node labels as topology keys, and every node has the "kubernetes.io/hostname" label set. For the same reason, old storage classes that use "kubernetes.io/hostname" as the topology key will keep working, since that key is still supported. This also fixes the issue where the driver tried to create the PV on the master node: the master node has the "kubernetes.io/hostname" label, so it was also a valid candidate for provisioning the PV. With the unique key, the driver does not run on the master node and therefore never sets the "openebs.io/nodename" label there, so the master node can never become a candidate for provisioning the volume. Signed-off-by: Pawan <pawan@mayadata.io>
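As a sketch of what the resulting node affinity would look like after this change (the PV name and node name below are placeholders, not taken from this PR), a freshly provisioned PV would carry the new key instead of "kubernetes.io/hostname":

```yaml
# Hypothetical PV fragment; "pvc-example" and "node-1" are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: openebs.io/nodename
          operator: In
          values:
          - node-1
```

Since the driver itself sets the "openebs.io/nodename" label only on nodes where it runs, only those nodes can ever satisfy this affinity.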
Changes look good. A couple of doc items: can the release notes be added as per the new guidelines, and can the readme/examples be updated to show how to use this feature?
docs/faq.md (Outdated)
Once we have labeled the node, we can install the ZFS driver. The driver will pick up the node labels and add them as supported topology keys. If the driver is already installed and you want to add new topology information, label the node with that information and then restart the node daemonset so that the driver can pick up the labels and add them as supported topology keys. Restart the pod in the kube-system namespace named openebs-zfs-node-[xxxxx], which is the node agent pod for the ZFS-LocalPV driver.
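For example (the node name and the "openebs.io/rack=rack1" label are hypothetical placeholders, and the pod selector is an assumption — match it against your install's labels), labeling a node and restarting the node agent could look like:

```shell
# Label the node with custom topology information
# ("openebs.io/rack=rack1" is a placeholder key/value).
kubectl label node node-1 openebs.io/rack=rack1

# Restart the ZFS-LocalPV node agent pods so the driver picks up the
# new label as a supported topology key (selector is an assumption).
kubectl delete pod -n kube-system -l app=openebs-zfs-node
```

The daemonset recreates the deleted pods, and on startup the driver re-reads the node labels.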
n_: nodes daemonset -> zfs pv csi driver daemon set (openebs-zfs-node)
Now the user can label the nodes with the required topology; the ZFSPV driver will support all the node labels as topology keys. Label the nodes first and then deploy the driver, so that it is aware of all the labels a node has. If labels are added after the ZFS-LocalPV driver has been deployed, a restart of all the node agents is required so that the driver can pick up the labels and add them as supported topology keys. Note that if a storageclass uses Immediate binding mode and no topology key is mentioned, then all the nodes should be labeled with the same key; that is, the same key should be present on every node, though nodes may carry different values for it. If nodes are labeled with different keys, ZFSPV's default scheduler cannot effectively do volume-count-based scheduling: the CSI provisioner will pick keys from a random node, prepare the preferred topology list from the nodes that have those keys defined, and the ZFSPV scheduler will schedule the PV among those nodes only. Signed-off-by: Pawan <pawan@mayadata.io>
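As an illustration of using a custom node label as a topology key (the pool name "zfspv-pool" and the "openebs.io/rack" key and values are placeholders for your own pool and labels), a StorageClass restricting provisioning to matching nodes could look like:

```yaml
# Hypothetical StorageClass; pool name and topology key/values are
# placeholders to be replaced with your own.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/rack
    values:
    - rack1
    - rack2
```

With this in place, the scheduler considers only nodes labeled openebs.io/rack=rack1 or rack2 as candidates for the PV.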
Fixes #84
Changed the csi-provisioner image to v1.6.0, as it contains the upstream fix kubernetes-csi/external-provisioner#421.
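In the driver's controller manifest the bump is just the sidecar image tag; the registry path below reflects where csi-provisioner was commonly published at the time and should be verified against your manifest:

```yaml
# csi-provisioner sidecar container; only the image tag changes.
- name: csi-provisioner
  image: quay.io/k8scsi/csi-provisioner:v1.6.0
```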
Signed-off-by: Pawan <pawan@mayadata.io>