This repository has been archived by the owner on Jan 9, 2023. It is now read-only.

Implement kube addon manager #688

Merged
merged 7 commits into from
Feb 1, 2019

Conversation

JoshVanL
Contributor

@JoshVanL JoshVanL commented Jan 18, 2019

Uses kube addon-manager to manage the life cycle of resources created by tarmak.

fixes #665

Deploy Kubernetes resources using kube-addon-manager 

@jetstack-bot jetstack-bot added release-note Denotes a PR that will be considered when it comes time to generate release notes. dco-signoff: no Indicates that at least one commit in this pull request is missing the DCO sign-off message. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. area/puppet Indicates a PR is affecting Puppet manifests kind/documentation Categorizes issue or PR as related to documentation. labels Jan 18, 2019
@jetstack-bot jetstack-bot added dco-signoff: yes Indicates that all commits in the pull request have the valid DCO sign-off message. and removed dco-signoff: no Indicates that at least one commit in this pull request is missing the DCO sign-off message. labels Jan 18, 2019
@JoshVanL
Contributor Author

/assign @simonswine

@simonswine
Contributor

There seems to be a race between the masters (so somehow the election of kube-addon-managers and/or cluster-manager is not stable).

tarmak cluster kubectl get pods --all-namespaces -w
kube-system   heapster-687467cd4c-wmkbb   0/2   Pending   0     0s
kube-system   heapster-687467cd4c-wmkbb   0/2   Pending   0     0s
kube-system   heapster-687467cd4c-wmkbb   0/2   ContainerCreating   0     0s
kube-system   heapster-687467cd4c-wmkbb   2/2   Running   0     3s
kube-system   heapster-789c695b5-ztw8b   2/2   Terminating   0     52s
kube-system   heapster-789c695b5-ztw8b   0/2   Terminating   0     53s
kube-system   heapster-789c695b5-ztw8b   0/2   Terminating   0     54s
kube-system   heapster-789c695b5-ztw8b   0/2   Terminating   0     54s
kube-system   heapster-789c695b5-kqh5w   0/2   Pending   0     0s
kube-system   heapster-789c695b5-kqh5w   0/2   Pending   0     0s
kube-system   heapster-789c695b5-kqh5w   0/2   ContainerCreating   0     0s
kube-system   heapster-789c695b5-kqh5w   2/2   Running   0     3s
kube-system   heapster-687467cd4c-wmkbb   2/2   Terminating   0     13s
kube-system   heapster-687467cd4c-wmkbb   0/2   Terminating   0     15s
kube-system   heapster-687467cd4c-wmkbb   0/2   Terminating   0     16s
kube-system   heapster-687467cd4c-wmkbb   0/2   Terminating   0     16s
kube-system   heapster-687467cd4c-qphn2   0/2   Pending   0     0s
kube-system   heapster-687467cd4c-qphn2   0/2   Pending   0     0s
kube-system   heapster-687467cd4c-qphn2   0/2   ContainerCreating   0     0s
kube-system   heapster-687467cd4c-qphn2   2/2   Running   0     4s
kube-system   heapster-789c695b5-kqh5w   2/2   Terminating   0     53s
kube-system   heapster-789c695b5-kqh5w   0/2   Terminating   0     54s
kube-system   heapster-789c695b5-vdf9p   0/2   Pending   0     0s
kube-system   heapster-789c695b5-vdf9p   0/2   Pending   0     0s
kube-system   heapster-789c695b5-vdf9p   0/2   ContainerCreating   0     0s
kube-system   heapster-789c695b5-kqh5w   0/2   Terminating   0     61s
kube-system   heapster-789c695b5-kqh5w   0/2   Terminating   0     61s
kube-system   heapster-789c695b5-vdf9p   2/2   Running   0     4s
kube-system   heapster-687467cd4c-qphn2   2/2   Terminating   0     15s
kube-system   heapster-687467cd4c-qphn2   0/2   Terminating   0     16s
kube-system   heapster-687467cd4c-qphn2   0/2   Terminating   0     28s
kube-system   heapster-687467cd4c-qphn2   0/2   Terminating   0     28s
kube-system   heapster-687467cd4c-jhsw7   0/2   Pending   0     1s
kube-system   heapster-687467cd4c-jhsw7   0/2   Pending   0     1s
kube-system   heapster-687467cd4c-jhsw7   0/2   ContainerCreating   0     1s
kube-system   heapster-687467cd4c-jhsw7   2/2   Running   0     4s
kube-system   heapster-789c695b5-vdf9p   2/2   Terminating   0     53s
kube-system   heapster-789c695b5-vdf9p   0/2   Terminating   0     55s
kube-system   heapster-789c695b5-vdf9p   0/2   Terminating   0     56s
kube-system   heapster-789c695b5-vdf9p   0/2   Terminating   0     56s
kube-system   heapster-789c695b5-tp48k   0/2   Pending   0     0s
kube-system   heapster-789c695b5-tp48k   0/2   Pending   0     0s
kube-system   heapster-789c695b5-tp48k   0/2   ContainerCreating   0     0s
kube-system   heapster-789c695b5-tp48k   2/2   Running   0     3s
kube-system   heapster-687467cd4c-jhsw7   2/2   Terminating   0     17s
kube-system   heapster-687467cd4c-jhsw7   0/2   Terminating   0     17s
kube-system   heapster-687467cd4c-jhsw7   0/2   Terminating   0     18s
kube-system   heapster-687467cd4c-jhsw7   0/2   Terminating   0     18s
kube-system   heapster-687467cd4c-744gp   0/2   Pending   0     0s
kube-system   heapster-687467cd4c-744gp   0/2   Pending   0     0s
kube-system   heapster-687467cd4c-744gp   0/2   ContainerCreating   0     0s
kube-system   heapster-687467cd4c-744gp   2/2   Running   0     4s
kube-system   heapster-789c695b5-tp48k   2/2   Terminating   0     50s

This is from a cluster with three master nodes:

NAME                                          STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME   ROLE

ip-10-99-108-76.eu-west-1.compute.internal    Ready    master   17m   v1.13.1   10.99.108.76    <none>        CentOS Linux 7 (Core)   4.20.0-1.el7.elrepo.x86_64   docker://1.13.1     master
ip-10-99-47-197.eu-west-1.compute.internal    Ready    master   18m   v1.13.1   10.99.47.197    <none>        CentOS Linux 7 (Core)   4.20.0-1.el7.elrepo.x86_64   docker://1.13.1     master
ip-10-99-64-87.eu-west-1.compute.internal     Ready    master   17m   v1.13.1   10.99.64.87     <none>        CentOS Linux 7 (Core)   4.20.0-1.el7.elrepo.x86_64   docker://1.13.1     master

The diff between the two ReplicaSets is:

61,62c61,62
<             cpu: 100m
<             memory: 150Mi
---
>             cpu: 108m
>             memory: 214Mi
64,65c64,65
<             cpu: 100m
<             memory: 150Mi
---
>             cpu: 108m
>             memory: 214Mi
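The flapping in the pod log above is consistent with the two masters rendering different resource requests for the same Deployment (100m/150Mi vs 108m/214Mi) and each addon-manager re-applying its own version. A hypothetical manifest fragment, not the actual tarmak template, illustrating why that causes the loop:

```yaml
# Illustrative fragment only. Under addonmanager.kubernetes.io/mode:
# Reconcile, kube-addon-manager keeps re-applying its local manifest,
# so any field that differs between masters (here the computed
# requests) makes the masters overwrite each other's ReplicaSets.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    spec:
      containers:
      - name: heapster
        resources:
          requests:
            cpu: 100m      # the other master renders 108m
            memory: 150Mi  # the other master renders 214Mi
```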

/assign @JoshVanL
/unassign

@jetstack-bot jetstack-bot assigned JoshVanL and unassigned simonswine Jan 22, 2019
@simonswine simonswine closed this Jan 22, 2019
@simonswine simonswine reopened this Jan 22, 2019
@JoshVanL
Contributor Author

/test puppet-tarmak-acceptance-centos v1.11
/test puppet-kubernetes-acceptance

simonswine and others added 4 commits January 24, 2019 16:27
…rmak

Signed-off-by: JoshVanL <vleeuwenjoshua@gmail.com>
Signed-off-by: JoshVanL <vleeuwenjoshua@gmail.com>
Signed-off-by: JoshVanL <vleeuwenjoshua@gmail.com>
Signed-off-by: JoshVanL <vleeuwenjoshua@gmail.com>
@jetstack-bot jetstack-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 25, 2019
@JoshVanL JoshVanL force-pushed the implement-kube-addon-manager branch 6 times, most recently from e84410b to 845caeb Compare January 28, 2019 15:12
Signed-off-by: JoshVanL <vleeuwenjoshua@gmail.com>
@JoshVanL
Contributor Author

/unassign
/assign @simonswine

@jetstack-bot jetstack-bot assigned simonswine and unassigned JoshVanL Jan 28, 2019

@simonswine simonswine left a comment


Just some minor bits now. The question is how sure we are that we did not forget the label anywhere.

/assign @JoshVanL
/unassign

Service[$service_apiserver],
]

$command = "/bin/bash -c \"while true; do if [[ \$(curl -k -w '%{http_code}' -s -o /dev/null ${protocol}://localhost:${server_port}/healthz) == 200 ]]; then break; fi; done; kubectl apply -f '${apply_file}' || rm -f '${apply_file}'\""


Maybe we add a sleep 2 or something, as this might be quite a few connections on bulk.

Contributor Author


Yep, good idea. I did at one point during testing but must have removed it...
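A sketch of what the command looks like with the suggested pause, assuming the same healthz probe; `wait_and_apply` is a hypothetical name (the real code builds this as a single puppet `$command` string):

```shell
#!/bin/bash
# Sketch, not the actual tarmak implementation: poll the apiserver
# healthz endpoint with a pause between probes, then apply the
# manifest, removing it on failure so puppet retries next run.
wait_and_apply() {
  local url=$1 file=$2
  # -k: the local cert may not be trusted; -s/-o discard the body;
  # -w prints only the HTTP status code.
  until [ "$(curl -k -s -o /dev/null -w '%{http_code}' "$url")" = "200" ]; do
    sleep 2  # avoid hammering the apiserver with back-to-back probes
  done
  kubectl apply -f "$file" || rm -f "$file"
}
```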

@@ -25,6 +27,8 @@ apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: calico-node
labels:
addonmanager.kubernetes.io/mode: EnsureExists

@simonswine simonswine Jan 29, 2019


Just a bit below: ServiceAccount is missing the label addonmanager.kubernetes.io/mode
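For reference, the kind of label the review is asking for; an illustrative ServiceAccount fragment (the name is taken from the surrounding calico diff context, not the actual file):

```yaml
# Illustrative fragment: every object managed by kube-addon-manager
# needs an addonmanager.kubernetes.io/mode label, including
# ServiceAccounts. EnsureExists creates the object if absent and
# leaves later edits alone; Reconcile would keep re-applying it.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
```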

@jetstack-bot jetstack-bot assigned JoshVanL and unassigned simonswine Jan 29, 2019
@JoshVanL JoshVanL force-pushed the implement-kube-addon-manager branch 2 times, most recently from d6ffd73 to 0cf167e Compare January 30, 2019 19:47
@JoshVanL
Contributor Author

/unassign
/assign @simonswine

@jetstack-bot jetstack-bot assigned simonswine and unassigned JoshVanL Jan 30, 2019
Signed-off-by: JoshVanL <vleeuwenjoshua@gmail.com>
@JoshVanL
Contributor Author

/test puppet-tarmak-acceptance-ubuntu v1.11

Signed-off-by: JoshVanL <vleeuwenjoshua@gmail.com>
@JoshVanL
Contributor Author

/test puppet-tarmak-acceptance-centos v1.11

@JoshVanL
Contributor Author

/test puppet-tarmak-acceptance-ubuntu v1.11

@simonswine
Contributor

Good work

/approve
/lgtm

@jetstack-bot jetstack-bot added the lgtm Indicates that a PR is ready to be merged. label Jan 31, 2019
@jetstack-bot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: JoshVanL, simonswine

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@simonswine
Contributor

/test puppet-fluent_bit-acceptance

@simonswine
Contributor

/test puppet-tarmak-acceptance-centos v1.13

@simonswine
Contributor

/test puppet-tarmak-acceptance-ubuntu v1.11

@jetstack-bot jetstack-bot merged commit 946c83e into jetstack:master Feb 1, 2019
Successfully merging this pull request may close these issues.

Ensure we handle deleting of no longer configured kubernetes resources