Why does kilo get cluster config using kubeconfig (or API server URL flag) when it has a service account? #49
Comments
Hi Adam, let's continue the conversation from #27 here. Thanks for investing time digging into the code.
It's a little bit tricky. In order to establish a connection to the Kubernetes API via a service IP, Kubernetes requires four things:
In Kilo's case, because we want to be able to build clusters without a shared private network (e.g. multi-cloud), these four requirements are not always guaranteed. Requirements 2 and 3 are generally OK; this is because most Kubernetes installers are smart and provision kubeconfigs with a DNS name that resolves to the private IP when inside the cloud's VPC and to a public IP when outside, e.g. from another cloud. Requirements 0 and 1, on the other hand, we cannot know for sure in multi-cloud environments. The problem is that most of the time, the IP address backing a service IP is the master node's private IP address, which will not be routable from nodes in other data centers. This means that even if kube-proxy is installed, we cannot guarantee that service IPs will work until Kilo is running and has made the private IPs routable. So the two ways forward are:
Each has its up- and downsides. What do you think is the best way forward?
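To make the endpoint problem concrete (illustrative kubectl commands, not from the thread): the default `kubernetes` service fronts the API server, and the endpoint behind its service IP is typically the master's private address.

```sh
# The stable in-cluster service IP handed to in-cluster clients:
kubectl get service kubernetes --namespace default

# The real address behind that service IP, usually the API server's
# private IP, which may not be routable from nodes in another cloud:
kubectl get endpoints kubernetes --namespace default
```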
Can you provide option 1 as an alternative, even if it requires extra user intervention?
Hi @fire, option 1 should already be doable without any new code. Kilo accepts a --master flag to set the URL of the Kubernetes API, and users can simply add that flag to any Kilo manifest and take out the --kubeconfig flag and the volume mount for the in-cluster kubeconfig in order to use only the service account. Do you think this should be an additional manifest in the manifests directory? If so, I'm very happy to merge that PR. The nice thing about it is that it is not installer-specific, so we only need one rather than one each for kubeadm, bootkube, etc.
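For illustration, the relevant container spec would look roughly like this (an editorial sketch, not an official manifest; the server URL is a placeholder):

```yaml
containers:
  - name: kilo
    image: squat/kilo
    args:
      # Point Kilo at the API server directly; with no --kubeconfig flag
      # (and no kubeconfig volume mount), the pod authenticates with its
      # service account instead.
      - --master=https://your.kube.api:6443  # placeholder URL
      - --hostname=$(NODE_NAME)
```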
I would like that, but this codebase is unfamiliar to me.
Would love to see a more permanent fix for this. k3s doesn't seem to work great with Kilo at the moment.
Hi @squat
I gave this a try but had to face the rather obvious point that my cluster runs on self-signed certificates, and therefore the Kilo pods refuse to communicate with the endpoint given with the --master flag. I worked around it by extending the image:

```dockerfile
FROM squat/kilo
RUN apk update && apk add ca-certificates
ENTRYPOINT ["/bin/sh", "-c", "update-ca-certificates && /opt/bin/kg"]
```

As I think that this is a use case interesting to others, too, would you accept a PR for this?
I just realized that since applying the changes outlined above, the kilo pods do not respect the
Also interested in seeing this fixed! :)
Meanwhile, I developed an init container that inserts a kubeconfig for Kilo. Here is the DaemonSet YAML:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kilo
namespace: kube-system
labels:
app.kubernetes.io/name: kilo
spec:
selector:
matchLabels:
app.kubernetes.io/name: kilo
template:
metadata:
labels:
app.kubernetes.io/name: kilo
spec:
serviceAccountName: kilo
hostNetwork: true
containers:
- name: kilo
image: squat/kilo
args:
- --kubeconfig=/etc/kubernetes/kubeconfig
- --hostname=$(NODE_NAME)
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
privileged: true
volumeMounts:
- name: cni-conf-dir
mountPath: /etc/cni/net.d
- name: kilo-dir
mountPath: /var/lib/kilo
- name: kubeconfig
mountPath: /etc/kubernetes
readOnly: true
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: xtables-lock
mountPath: /run/xtables.lock
readOnly: false
initContainers:
- name: generate-kubeconfig
image: unixfox/kilo-kubeconfig
imagePullPolicy: Always
volumeMounts:
- name: kubeconfig
mountPath: /etc/kubernetes
env:
- name: MASTER_URL
value: "your.kube.api:6443"
- name: install-cni
image: squat/kilo
command:
- /bin/sh
- -c
- set -e -x;
cp /opt/cni/bin/* /host/opt/cni/bin/;
TMP_CONF="$CNI_CONF_NAME".tmp;
echo "$CNI_NETWORK_CONFIG" > $TMP_CONF;
rm -f /host/etc/cni/net.d/*;
mv $TMP_CONF /host/etc/cni/net.d/$CNI_CONF_NAME
env:
- name: CNI_CONF_NAME
value: 10-kilo.conflist
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: kilo
key: cni-conf.json
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
volumes:
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
path: /etc/cni/net.d
- name: kilo-dir
hostPath:
path: /var/lib/kilo
- name: kubeconfig
hostPath:
path: /etc/kilo-kubeconfig
- name: lib-modules
hostPath:
path: /lib/modules
- name: xtables-lock
hostPath:
path: /run/xtables.lock
      type: FileOrCreate
```

As for the service account, I don't know what kind of RBAC permissions Kilo requires, so the current ones may not be enough. If you have any idea about that, please let me know! The source code of the project is located here: https://bitbucket.org/unixfox/kilo-kubeconfig/src/master/
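On the RBAC question (an editorial note, not from the thread): the upstream Kilo manifests grant the kilo service account roughly the following cluster-wide permissions. Treat this as a sketch and double-check it against the manifest version you actually deploy:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kilo
rules:
  # Kilo reads and annotates Node objects to exchange WireGuard metadata.
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "patch", "watch"]
  # Kilo watches its Peer custom resources.
  - apiGroups: ["kilo.squat.ai"]
    resources: ["peers"]
    verbs: ["list", "watch"]
  # Kilo creates the Peer CRD on startup.
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["create"]
```

A matching ClusterRoleBinding would tie this role to the kilo service account in the namespace where the DaemonSet runs.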
@unixfox This is a great solution to the issue and should probably be merged into kilo.
As you can see in the YAML, I don't specify any namespace, because you can deploy it to whatever namespace you want.
@unixfox
It's just the default namespace that the user will use when not providing any "namespace" in the
@unixfox Thanks for the clarification.
Well, I'm not sure if that will work; I haven't tested it, though.
Here is an iteration of @unixfox's approach, which reads the API server address and CA certificate from the kubelet's kubeconfig on the host and authenticates with the pod's service account token. First, create a ConfigMap containing the init script:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kilo-scripts
namespace: kube-system
data:
init.sh: |
#!/bin/sh
cat > /etc/kubernetes/kubeconfig <<EOF
apiVersion: v1
kind: Config
name: kilo
clusters:
- cluster:
server: $(sed -n 's/.*server: \(.*\)/\1/p' /var/lib/rancher/k3s/agent/kubelet.kubeconfig)
certificate-authority: /var/lib/rancher/k3s/agent/server-ca.crt
users:
- name: kilo
user:
token: $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
contexts:
- name: kilo
context:
cluster: kilo
namespace: ${NAMESPACE}
user: kilo
current-context: kilo
    EOF
```

Add the following initContainer to the Kilo DaemonSet:

```yaml
[...]
initContainers:
- name: generate-kubeconfig
image: busybox
command:
- /bin/sh
args:
- /scripts/init.sh
imagePullPolicy: Always
volumeMounts:
- name: kubeconfig
mountPath: /etc/kubernetes
- name: scripts
mountPath: /scripts/
readOnly: true
- name: k3s-agent
mountPath: /var/lib/rancher/k3s/agent/
readOnly: true
env:
- name: NAMESPACE
valueFrom:
fieldRef:
        fieldPath: metadata.namespace
```

And add the following volumes as well:

```yaml
[...]
volumes:
- name: scripts
configMap:
name: kilo-scripts
- name: kubeconfig
emptyDir: {}
- name: k3s-agent
hostPath:
      path: /var/lib/rancher/k3s/agent
```
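Not part of the original comment, but a quick way to sanity-check the result once the DaemonSet is running (assuming the container names used in the manifest above):

```sh
# Check the init container's output for errors:
kubectl --namespace kube-system logs daemonset/kilo -c generate-kubeconfig

# Inspect the kubeconfig the init container rendered:
kubectl --namespace kube-system exec daemonset/kilo -c kilo -- \
  cat /etc/kubernetes/kubeconfig
```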
While setting up kilo on a k3s cluster I noticed that it uses --kubeconfig or --master to get the config that is used when interfacing with the cluster. This code can be seen here. This seems like a security problem: why should kilo require access to my kubeconfig, which contains credentials that have the power to do anything to the cluster? Moreover, it seems redundant: I looked through kilo-k3s-flannel.yaml (which is what I used to get it working) and noticed that a service account is created for kilo with all of the permissions it should need. This example (see main.go) uses this function to get the config. Can kilo not use this function instead?
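For context (an editorial sketch, not code from the thread): the client-go example referred to above presumably uses client-go's rest.InClusterConfig, which authenticates with the pod's service account, roughly like this:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// InClusterConfig reads the service account token and CA bundle
	// mounted at /var/run/secrets/kubernetes.io/serviceaccount and the
	// API server address from KUBERNETES_SERVICE_HOST/_PORT.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List nodes, the main resource Kilo needs to watch.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d nodes\n", len(nodes.Items))
}
```

Note, though, that KUBERNETES_SERVICE_HOST is the in-cluster service IP, which is exactly the routability problem squat describes at the top of the thread.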
I'm new to interfacing applications with Kubernetes clusters, so if I'm missing something, my apologies. If it'd be welcome, I'd be happy to submit a pull request for this.