scripts (or k8s) presume to use first ethernet controller #12
To note: I'm running the setup scripts on a bare metal Clear Linux cluster btw, not in the vagrant VM setup.
From https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#initializing-your-master

> Unless otherwise specified, kubeadm uses the network interface associated with the default gateway to advertise the master's IP.

so by default it picks whichever interface holds the default route, not necessarily the one the node pool is on.
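That same docs page describes a CLI override; for reference it would look something like this (the address is a placeholder for the IP on the second NIC):

```bash
# Advertise the apiserver on the NIC the node pool is attached to,
# instead of the default-route interface.
sudo kubeadm init --apiserver-advertise-address=192.168.xxx.yyy
```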
I'm having trouble getting this to work yet. I've tried adding in the option like:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
#localAPIEndpoint:
#aPIEndpoint:
APIEndpoint:
  advertiseAddress: "192.168.xxx.yyy"
  bindPort: 6443
---
```

to the kubeadm config. I'm going to see if I can edit my way to a working config. Any input/help appreciated here ;-)
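In case it helps anyone hitting the same thing: if I'm reading the v1alpha3 types right, the field is spelled `apiEndpoint` and lives at the top level of `InitConfiguration`, not under `nodeRegistration` (it becomes `localAPIEndpoint` in v1beta1). A sketch, assuming that schema, with a placeholder address:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
  advertiseAddress: "192.168.xxx.yyy"  # IP on the NIC the node pool hangs off
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
```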
@grahamwhaley yes, that is a thing that @mcastelino and I have observed and were discussing as well, as it affects my vagrant setup too. We thought of setting it as an env variable that a user can set, which the script can read and then use to add the necessary bits to the kubeadm config.
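A minimal sketch of that env-variable idea (`MASTER_IP` is a hypothetical name, not something the scripts define yet):

```bash
# Default to the IP on the default-route interface if the user doesn't override.
MASTER_IP="${MASTER_IP:-$(ip -4 route get 1.1.1.1 \
    | awk '{for (i = 1; i < NF; i++) if ($i == "src") {print $(i+1); exit}}')}"
sudo kubeadm init --apiserver-advertise-address="${MASTER_IP}"
```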
grr, I tried to hack the IP into the kube config, but the cert is encoded to the IP address, so it NACKed it...
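In case it's useful, you can check which addresses the cert was actually minted for with a stock openssl command (the default kubeadm cert path is assumed here):

```bash
# List the Subject Alternative Names in the apiserver certificate;
# an address missing from this list is why a hand-edited kubeconfig gets rejected.
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text \
    | grep -A1 'Subject Alternative Name'
```

So the advertise address has to be right at `kubeadm init` time (or the extra IP added via `--apiserver-cert-extra-sans`); patching the kubeconfig afterwards won't get past the TLS check.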
Setting the advertise address results in a kubeconfig pointing at that IP, but the node join still fails.
@grahamwhaley I am testing in vagrant. Can you test the above in your setup? Just to make sure the join failing is not specific to the vagrant setup.
@krsna1729 @mcastelino @ganeshmaharaj ... and then once the join had run:

```console
$ kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
sn-nuc-ctl   Ready      master   7m48s   v1.12.3
sn-nuc001    NotReady   <none>   5m39s   v1.12.3
```

OK, so, we should discuss if those missing kernel modules on the node are an issue, or if the node status of 'NotReady' is an issue, but things look connected across my second ether port!
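One generic way to check whether the `NotReady` is just the network daemonset still rolling out onto the new node (plain kubectl, nothing specific to these scripts):

```bash
# Show kube-system pods and the node each one landed on; the network/CNI
# pod for sn-nuc001 should show up here once it has been scheduled.
kubectl get pods -n kube-system -o wide
```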
@grahamwhaley /cc @mcastelino @ganeshmaharaj
Hi @krsna1729 - I think the node just needed the network pod to come up on it; it has gone Ready now.
@grahamwhaley ah yes, the network pod runs as a daemonset pod, which might take time to spin up on the newly added node. I suspected it was transient; I asked because of the 5+ min in the get nodes output. Can I close this?
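For anyone else watching a node come up, that rollout can be followed with plain kubectl, e.g.:

```bash
# Watch the kube-system daemonsets until DESIRED == READY across all nodes.
kubectl get daemonsets -n kube-system --watch
```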
Thanks @krsna1729. Yeah, I think the time anomaly is because sn-nuc001 was my first test node, and I have a feeling it did not shut down/clean/delete cleanly before I ran up the 'full cluster' - so the master thought it had still been online.
I think, by default, the scripts will set up the k8s network hung off the first ether controller on the master node.
In my case, that is not correct - I have two ether cards, and my node pool is hung off the second one.
I'll see if I can figure out what needs to be told to use the second (or rather, a specific) network controller to talk to the nodes. If anybody wants to chip in with some input, please do :-)
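For anyone reproducing this: the interface kubeadm picks by default is the one holding the default route, so a quick way to compare that against the NICs actually available is:

```bash
# The default-route interface - what kubeadm will advertise on by default.
ip route show default
# All configured IPv4 addresses, one line per interface, to pick the right NIC.
ip -o -4 addr show
```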