k8s: Make possible and simple to deploy a Scope probe in master node #1030

Closed
2opremio opened this issue Feb 24, 2016 · 13 comments

@2opremio (Contributor)

The current k8s instructions (https://github.com/weaveworks/scope#using-weave-scope-with-kubernetes) result in Scope probes being deployed only on the worker nodes, not on the master.

CC @errordeveloper

2opremio added the dogfood label Feb 24, 2016
@errordeveloper (Contributor)

Thanks for raising this, @2opremio. Right now I cannot think of a good solution, but let's think about it.

@2opremio (Contributor Author)

After asking in the Kubernetes Slack channel, I was pointed to https://github.com/kubernetes/kubernetes/blob/80ff0cbbda4418ab40a383e08ab6f55244219e52/docs/design/taint-toleration-dedicated.md

Apparently the master node is just another node marked as unschedulable, so we could use taints (WIP at this point) to make the Scope probe DaemonSet tolerate the taint of the master node (see the sketch below).

Full conversation transcript: https://kubernetes.slack.com/archives/kubernetes-users/p1456341662005360
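
For reference, here's a minimal sketch of what a tolerating DaemonSet could look like. Taints/tolerations were still WIP at the time of this discussion, so this uses the syntax that eventually stabilized; the taint key and image are assumptions for illustration, not taken from Scope's actual manifests:

```yaml
# Hypothetical sketch: let the probe DaemonSet tolerate the master's taint
# so its pods get scheduled there too. The taint key below is an assumption;
# the actual key depends on how the cluster tainted its master.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: weave-scope-probe
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: weave-scope-probe
  template:
    metadata:
      labels:
        app: weave-scope-probe
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      containers:
        - name: scope-probe
          image: weaveworks/scope  # placeholder; real manifests pin a version
```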

@2opremio (Contributor Author)

Also, it seems that after 1.2 DaemonSets will also be deployed to the master, so it may be a matter of waiting.

Another thing we could try is hardcoding the nodeName of a pod to the master node and seeing if that works.
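
A minimal sketch of that idea, assuming a placeholder node name (setting spec.nodeName bypasses the scheduler entirely, so the master's unschedulable flag should not be consulted):

```yaml
# Hypothetical sketch: pin a single probe pod to the master by hardcoding
# spec.nodeName. "master-node" is a placeholder for the real name from
# `kubectl get nodes`.
apiVersion: v1
kind: Pod
metadata:
  name: weave-scope-probe-master
  namespace: kube-system
spec:
  nodeName: master-node
  containers:
    - name: scope-probe
      image: weaveworks/scope  # placeholder; real manifests pin a version
```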

@errordeveloper (Contributor)

Sounds good, as long as we consider a master without a kubelet to be a rare case, which is probably true for those using a vanilla setup. I have considered changing Kubernetes Anywhere to adhere to the self-hosting model; however, in my opinion, at least in its current state, this model hardly makes things easy to understand. We certainly need to enable the majority of users with an existing cluster, but improving the Kubernetes probe itself should be the priority.

@errordeveloper (Contributor)

It would be great to double-check whether this implies that GKE will also allow users to schedule pods on the master, which I don't believe would be feasible, as users generally don't have access to the master on GKE.

@errordeveloper (Contributor)

Running the Kubernetes probe requires authentication, so I think the only recommended way should be to use DaemonSets, for the best user experience. Running probes on a kubelet-less master is just like running them on other external machines: authentication to the Kubernetes API is not required there, and only app discovery is a problem to consider.

Off the top of my head, examples of projects which don't use the self-hosting model (kubelet-less master) are:

There are many more; one way to check is by searching for kube-apiserver+ExecStart, which shows the kube-apiserver binary being invoked through systemd.

@2opremio (Contributor Author) commented Mar 2, 2016

I haven't found a way to fix this for Kubernetes 1.1.

On top of the options mentioned above, there's kubectl uncordon, which would make the master node schedulable (sketched below), but this doesn't seem to be available before 1.2.
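
For context, uncordoning just clears the Node's unschedulable flag, i.e. the same field the kubectl patch later in this thread sets to true; roughly:

```yaml
# What `kubectl uncordon <node>` effectively does to the Node object.
# "master-node" is a placeholder node name.
apiVersion: v1
kind: Node
metadata:
  name: master-node
spec:
  unschedulable: false
```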

@2opremio (Contributor Author)

We should confirm this, but it seems Kubernetes 1.2 will deploy the probes on the master node:

DaemonSet pods will be created on nodes with .spec.unschedulable=true and will not be evicted from nodes whose Ready condition is false.

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md

tomwilkie added the k8s label Apr 11, 2016
2opremio added this to the 0.15.0 milestone Apr 27, 2016
@tomwilkie (Contributor)

Can confirm, this works:

Toms-Mac-Pro:service twilkie$ kubectl patch node gke-twilkie-test-default-pool-b6168979-n7n4  -p '{"spec": {"unschedulable": true}}'
"gke-twilkie-test-default-pool-b6168979-n7n4" patched
Toms-Mac-Pro:service twilkie$ kubectl get nodes
NAME                                          STATUS                     AGE
gke-twilkie-test-default-pool-b6168979-extp   Ready                      13m
gke-twilkie-test-default-pool-b6168979-fquf   Ready                      13m
gke-twilkie-test-default-pool-b6168979-n7n4   Ready,SchedulingDisabled   13m
Toms-Mac-Pro:service twilkie$ kubectl create -f k8s/local/scope/scope-app-rc.yaml
replicationcontroller "weave-scope-app" created
Toms-Mac-Pro:service twilkie$ kubectl create -f k8s/local/scope/scope-app-svc.yaml
service "weave-scope-app" created
Toms-Mac-Pro:service twilkie$ kubectl create -f k8s/local/scope/scope-probe-ds.yaml
daemonset "weave-scope-probe" created
Toms-Mac-Pro:service twilkie$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                                READY     STATUS    RESTARTS   AGE
kube-system   fluentd-cloud-logging-gke-twilkie-test-default-pool-b6168979-extp   1/1       Running   0          14m
kube-system   fluentd-cloud-logging-gke-twilkie-test-default-pool-b6168979-fquf   1/1       Running   0          13m
kube-system   fluentd-cloud-logging-gke-twilkie-test-default-pool-b6168979-n7n4   1/1       Running   0          14m
kube-system   heapster-v1.0.2-574809716-5nobl                                     2/2       Running   0          13m
kube-system   kube-dns-v11-oqyvt                                                  4/4       Running   0          14m
kube-system   kube-proxy-gke-twilkie-test-default-pool-b6168979-extp              1/1       Running   0          14m
kube-system   kube-proxy-gke-twilkie-test-default-pool-b6168979-fquf              1/1       Running   0          13m
kube-system   kube-proxy-gke-twilkie-test-default-pool-b6168979-n7n4              1/1       Running   0          14m
kube-system   kubernetes-dashboard-v1.0.1-d5r6d                                   1/1       Running   0          14m
kube-system   l7-lb-controller-v0.6.0-kqimx                                       2/2       Running   0          14m
kube-system   weave-scope-app-jw67w                                               1/1       Running   0          24s
kube-system   weave-scope-probe-8kjnl                                             1/1       Running   0          12s
kube-system   weave-scope-probe-gne8k                                             1/1       Running   0          12s
kube-system   weave-scope-probe-uuupj                                             1/1       Running   0          12s

2opremio reopened this May 5, 2016
@2opremio (Contributor Author) commented May 5, 2016

Reopening, since we should document that this will only work for Kubernetes versions >= 1.2.

@errordeveloper (Contributor) commented May 5, 2016

So am I getting this right that it's all about putting Scope in the kube-system namespace, so that on Kubernetes v1.2 (or greater) the probe will end up running on "unschedulable" nodes?

@tomwilkie (Contributor)

So am I getting this right that it's all about putting Scope in the kube-system namespace?

No, the namespace is completely unrelated.

The problem was that probes were not being scheduled on unschedulable hosts. As of kube 1.2, they are. End of story.

@errordeveloper (Contributor)

@tomwilkie ok, thanks.
