
node-role.kubernetes.io/worker label is problematic #130

Closed
mythi opened this issue Nov 14, 2022 · 5 comments

Comments

mythi (Contributor) commented Nov 14, 2022

Describe the bug
The operator expects the node-role.kubernetes.io/worker label, but this may not always work with some cluster setups.

To Reproduce
Try to build a kind cluster with node-role.kubernetes.io/worker predefined, e.g., using the enclave-cc config:

diff --git a/tests/e2e/enclave-cc-kind-config.yaml b/tests/e2e/enclave-cc-kind-config.yaml
index 5792b16..b650ab4 100644
--- a/tests/e2e/enclave-cc-kind-config.yaml
+++ b/tests/e2e/enclave-cc-kind-config.yaml
@@ -3,6 +3,8 @@ apiVersion: kind.x-k8s.io/v1alpha4
 nodes:
 - role: control-plane
 - role: worker
+  labels:
+    node-role.kubernetes.io/worker:
   extraMounts:
   - hostPath: /tmp/coco
     containerPath: /opt/confidential-containers

Describe the results you expected
The cluster is up.

Describe the results you received
kubelet on the worker node fails to start due to:

--node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/os, node.kubernetes.io/instance-type, topology.kubernetes.io/region, topology.kubernetes.io/zone)
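Per the error above, the kubelet may only self-label under the kubelet.kubernetes.io or node.kubernetes.io prefixes, or with keys from the specifically allowed set; any other key works, too, as long as it is outside the kubernetes.io namespace. A minimal sketch of a kind config that avoids the restriction, using a hypothetical custom label key (coco.example.com/worker is a placeholder, not something the operator requires):

```yaml
# Sketch: kind cluster config with a worker label outside the restricted
# kubernetes.io namespace. The key "coco.example.com/worker" is a
# hypothetical placeholder.
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
- role: worker
  labels:
    coco.example.com/worker: "true"
  extraMounts:
  - hostPath: /tmp/coco
    containerPath: /opt/confidential-containers
```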

Additional context
N/A

bpradipt (Member) commented
You should be able to use custom node labels via the ccNodeSelector field of ccRuntime.
node-role.kubernetes.io/worker is used as the default if ccNodeSelector is empty.
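The suggestion above might look like the following CcRuntime excerpt. The apiVersion and the matchLabels shape of ccNodeSelector are assumptions based on the operator's CRD, and the label key is a placeholder, so check the operator's sample manifests before using this:

```yaml
# Sketch of a CcRuntime spec selecting nodes by a custom label.
# Assumed field shapes; "coco.example.com/worker" is a placeholder key.
apiVersion: confidentialcontainers.org/v1beta1
kind: CcRuntime
metadata:
  name: ccruntime-sample
spec:
  ccNodeSelector:
    matchLabels:
      coco.example.com/worker: "true"
```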

mythi (Contributor, Author) commented Nov 17, 2022

> node-role.kubernetes.io/worker is used as default if ccNodeSelector is empty.

The reason I opened this ticket was to point out that the default uses the kubernetes.io namespace with a prefix that is not on the kubelet's allowed list.

bpradipt (Member) commented

We can add instructions to the install guide showing how to use a custom node label. The same is supported via the operator, which might not be very clear since I don't see it documented.
Hence I mentioned it, so that you can give it a try by creating a kind cluster with a custom node label and using the same label in the ccruntime yaml.

The issue with setting restricted node labels (*.kubernetes.io) at cluster creation time is described here:
kubernetes/kubeadm#2509
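As that kubeadm issue discusses, the restriction applies to the kubelet labeling itself via --node-labels; a cluster admin can still apply a node-role label after the node has joined. A sketch of that workaround (the node name is a placeholder):

```shell
# Applied by an admin after the node joins the cluster; kubectl is not
# subject to the kubelet self-labeling restriction.
# "kind-worker" is a placeholder node name.
kubectl label node kind-worker node-role.kubernetes.io/worker=
```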

katexochen commented

@mythi Can this be closed as we switched the label in #195?

mythi closed this as completed Aug 3, 2023
mythi (Contributor, Author) commented Aug 3, 2023

@katexochen thanks!
