WIP: add worksInKubernetesCluster flag #114

Closed
wants to merge 4 commits into from

Conversation

sergeimonakhov

Hey! I added the worksInKubernetesCluster flag so that the Pod environment can be used to access the Kubernetes API when Kilo is running inside the Kubernetes cluster.

Also, I think the manifests could be corrected as well, since with this change there would no longer be any difference between the kubeadm/k3s/etc. variants.
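For illustration, here is a rough sketch of the idea behind such a flag, assuming client-go's rest and clientcmd packages. This is not the PR's actual diff; the flag names and helper structure are placeholders.

```go
// Hypothetical sketch of a worksInKubernetesCluster-style flag (not the actual
// diff in this PR): explicitly choose rest.InClusterConfig when the flag is
// set, otherwise build the config from --master/--kubeconfig.
package main

import (
	"flag"
	"log"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	inCluster := flag.Bool("works-in-kubernetes-cluster", false, "use the Pod environment to reach the Kubernetes API")
	master := flag.String("master", "", "URL of the Kubernetes API server")
	kubeconfig := flag.String("kubeconfig", "", "path to a kubeconfig file")
	flag.Parse()

	var (
		config *rest.Config
		err    error
	)
	if *inCluster {
		// Use the service account token and CA bundle mounted into the Pod.
		config, err = rest.InClusterConfig()
	} else {
		config, err = clientcmd.BuildConfigFromFlags(*master, *kubeconfig)
	}
	if err != nil {
		log.Fatalf("failed to build Kubernetes config: %v", err)
	}
	_ = config
}
```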

@squat
Owner

squat commented Feb 18, 2021

Hi @D1abloRUS, thanks for taking a look at this issue :)
I think that the solution needs to be a little bit more sophisticated. The problem, IMO, is not that we were not yet calling rest.InClusterConfig. In fact, calling clientcmd.BuildConfigFromFlags(*master, *kubeconfig) implicitly calls rest.InClusterConfig whenever the arguments are empty. This means that adding this new if doesn't actually get us any new functionality since the in-cluster Pod configuration would already be automatically discovered.
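To illustrate this point, here is a minimal sketch (not Kilo's actual code), assuming client-go: with an empty master URL and kubeconfig path, BuildConfigFromFlags already falls back to the in-cluster Pod configuration.

```go
// Minimal sketch: with both arguments empty, clientcmd.BuildConfigFromFlags
// logs a warning and falls back to rest.InClusterConfig(), so the in-cluster
// Pod configuration is discovered automatically.
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "")
	if err != nil {
		log.Fatalf("failed to build config: %v", err)
	}

	// The resulting client uses the service account token and CA bundle
	// mounted into the Pod, plus the Service-IP-based API URL.
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	_ = client
}
```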

The real problem is that the in-cluster configuration provides an API URL that uses a Service IP, but there is no guarantee that the Service IP will work before Kilo has run. In simple cluster topologies, i.e. a cluster where Flannel would suffice, the Service IP is fine, but if you have workers in different clouds and regions, then this won't work. For this reason, Kilo needs a kubeconfig with an API URL that is routable before WireGuard is up and running. To solve this, we require a special kubeconfig. The solution would be to allow users to provide the API's URL manually to override the URL that is discovered from the Pod environment, e.g. using the --master flag. However, this normally causes certificate issues because the in-cluster kubeconfig certificates don't include this URL in the SAN list. Maybe this can be fixed by adding documentation telling users to add the URL explicitly to the control plane configuration so that certificates are generated with this value, but it seems tricky.
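A minimal sketch of the override idea described above, assuming client-go; the buildConfig helper and the API URL are hypothetical, not Kilo's implementation, and the SAN caveat applies.

```go
// Sketch of the override approach: start from the in-cluster configuration but
// replace the Service-IP-based API URL with one supplied by the user, e.g. via
// --master. TLS verification will fail unless the API server's certificate
// lists the overridden URL in its SANs.
package main

import (
	"log"

	"k8s.io/client-go/rest"
)

// buildConfig is a hypothetical helper: apiURL is a user-supplied address that
// must be routable before WireGuard is up.
func buildConfig(apiURL string) (*rest.Config, error) {
	config, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	if apiURL != "" {
		config.Host = apiURL
	}
	return config, nil
}

func main() {
	// The address below is a placeholder for a routable control-plane endpoint.
	config, err := buildConfig("https://203.0.113.10:6443")
	if err != nil {
		log.Fatalf("failed to build config: %v", err)
	}
	_ = config
}
```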

Any input is welcome!

@sergeimonakhov
Author

@squat Oh, that was careless of me, sorry; I've now looked at the BuildConfigFromFlags function. In that case it will probably be enough to correct the description of the --kubeconfig flag, and perhaps add some kind of example, since --kubeconfig is currently used everywhere even though in general it is not necessary. When using an external --kubeconfig, there is no point in creating a ServiceAccount and roles for it. Maybe Helm should be used for a more flexible way of describing the configuration. I can write up my own example, but it's pretty scary: I use a separate k3s control plane + Cilium (in kube-proxy replacement mode), connect nodes in different locations to this control plane, and use Kilo for connectivity between the nodes.

@SerialVelocity
Contributor

@D1abloRUS Unrelated to the PR, but could you post how you have set up Cilium and Kilo to work together in #81? I was about to try doing this over the next couple of days.
