error generating accessibility requirements #528
Restarting the zfs-node daemonset is a must, so that the driver can pick up the required topology keys if they are added after the driver is installed. Alternatively, the topologies can be set at install time, so that node agents don't need to be restarted later for those keys; see https://github.com/openebs/zfs-localpv/blob/develop/docs/faq.md#6-how-to-add-custom-topology-key. Rebooting the node should not cause this behaviour; I have come across node-reboot scenarios myself and volume provisioning worked fine for me. Can you please share your storage class yaml and …
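For context, a StorageClass that restricts provisioning via allowedTopologies typically looks something like the minimal sketch below; the pool name, topology key and node value are illustrative assumptions, not taken from this report:

```sh
# Minimal sketch of a zfs-localpv StorageClass with allowedTopologies,
# applied via a heredoc. Pool name, key and node values are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"   # assumed ZFS pool name
  fstype: "zfs"
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname   # any key used here must be known to the node driver
    values:
    - node-1
EOF
```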
See gist here
Do you mean compare …? I restarted, and the diff of the node labels shows:
< - scheduling.node.kubevirt.io/tsc-frequency-2200000000
---
> - scheduling.node.kubevirt.io/tsc-frequency-2199997000
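One way to produce such a comparison (node name assumed) is to dump the node labels before and after the reboot and diff the two files:

```sh
# Sketch: capture node labels before and after the reboot, then diff them.
# "node-1" is an assumed node name.
kubectl get node node-1 -o jsonpath='{.metadata.labels}' | tr ',' '\n' | sort > labels-before.txt
# ... reboot the node ...
kubectl get node node-1 -o jsonpath='{.metadata.labels}' | tr ',' '\n' | sort > labels-after.txt
diff labels-before.txt labels-after.txt
```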
I understand the issue is caused by the topology keys in …. It seems odd that KubeVirt uses a node label with a dynamic key value, but I guess there must be a good reason. I appreciate the need for the topology keys to match the node etc., but I wonder if there is a better default approach for …
hi @ianb-mp, as I understand it, in your case KubeVirt applies a dynamic label which changes upon node restart. The new label needs to be updated in the daemonSet directly by editing it and then restarting.
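A sketch of that manual workaround, assuming the default openebs namespace and the object names used elsewhere in this thread, could be:

```sh
# Edit the node daemonset and add the new label key to the ALLOWED_TOPOLOGIES env
# of the openebs-zfs-plugin container, then restart the daemonset so the driver
# re-registers its topology. Namespace and names assume a default OpenEBS install.
kubectl -n openebs edit daemonset openebs-zfs-localpv-node
kubectl -n openebs rollout restart daemonset openebs-zfs-localpv-node
```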
Scoping this for investigation as part of v4.3 to figure out ways to specify an inclusive or exclusive list of labels in ALLOWED_TOPOLOGIES.
What steps did you take and what happened:
After a k8s node reboot, when I create a new PVC using the zfs-localpv storageclass, the PVC creation fails with this error:
(Full error message here)
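For reference, the claim involved is an ordinary PVC against the zfs-localpv StorageClass; a minimal sketch (names and size assumed, not taken from the report) is:

```sh
# Minimal sketch of a PVC against an assumed zfs-localpv StorageClass name.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-zfspv-claim
spec:
  storageClassName: openebs-zfspv   # assumed StorageClass name
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```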
A temporary fix is to restart the openebs-zfs-localpv-node daemonset; however, when I reboot the k8s node the error returns.

What did you expect to happen:
I assume this isn't expected behaviour, so it would be good if this could be resolved without requiring manual intervention.
The output of the following commands will help us better understand what's going on:
- kubectl logs -f openebs-zfs-controller-[xxxx] -n openebs -c openebs-zfs-plugin: see gist
- kubectl logs -f openebs-zfs-node-[xxxx] -n openebs -c openebs-zfs-plugin: see gist
- kubectl get pods -n openebs
- kubectl get zv -A -o yaml: see gist

Anything else you would like to add:
zfs-localpv was installed via Openebs helm chart v4.0.0:
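The exact install command is not included in the report; a typical install of the OpenEBS 4.x umbrella chart (repo URL, release name and namespace assumed from the standard OpenEBS docs) looks roughly like:

```sh
# Sketch of installing the OpenEBS umbrella chart, which bundles zfs-localpv.
# Repo URL, release name and namespace are assumptions, not from the report.
helm repo add openebs https://openebs.github.io/openebs
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace --version 4.0.0
```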
[Miscellaneous information that will assist in solving the issue.]
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release): Rocky Linux 9.3