
Certs not being regenerated when supplementary_addresses changed #2164

Closed
danielm0hr opened this issue Jan 16, 2018 · 14 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@danielm0hr
Contributor

Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug report

Environment:

  • Cloud provider or hardware configuration:
    Bare metal

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    Linux 3.16.0-4-amd64 x86_64
    PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
    NAME="Debian GNU/Linux"
    VERSION_ID="8"
    VERSION="8 (jessie)"
    ID=debian
    HOME_URL="http://www.debian.org/"
    SUPPORT_URL="http://www.debian.org/support"
    BUG_REPORT_URL="https://bugs.debian.org/"

  • Version of Ansible (ansible --version):
    2.4.1.0

Kubespray version (commit) (git rev-parse --short HEAD):
2.3.0 ba0a03a8

Network plugin used:
calico

Command used to invoke ansible:
ansible-playbook -b -i inventory cluster.yml

Output of ansible run:
Succeeded.

Anything else we need to know:
I added the IPs of my external load balancer (HAProxy on three hosts) to the variable supplementary_addresses_in_ssl_keys and successfully re-ran the cluster.yml playbook. The config file /etc/kubernetes/openssl.conf has been updated correctly, but the apiserver certificates have not been regenerated.

Is this the intended behaviour? If yes, how can certificate regeneration be forced?
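For reference, a minimal sketch of the kind of change being described; the variable name is the real Kubespray one, but the inventory path and IPs below are placeholders rather than the reporter's actual values:

    # inventory/<my_inventory>/group_vars/k8s-cluster/k8s-cluster.yml
    supplementary_addresses_in_ssl_keys: ['192.0.2.10', '192.0.2.11', '192.0.2.12']

    # re-apply the cluster playbook afterwards
    ansible-playbook -b -i inventory cluster.yml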

@luther7

luther7 commented Jan 25, 2018

I'm having a similar issue with apiserver_loadbalancer_domain_name

@nwsparks

If you delete the apiserver certs from /etc/kubernetes/ssl on the master and re-run the cluster.yml playbook, it will regenerate them and then restart the apiserver containers.

@kmadnani
Contributor

We have a similar issue; we delete the certs first and let Kubespray regenerate them with the updated load balancer name in the SAN.

@sys0dm1n

This is not working for me. When I delete ca.pem and ca-key.pem from /etc/kubernetes/ssl, it throws an error:

TASK [kubernetes-apps/ansible : Kubernetes Apps | Delete old KubeDNS resources] **********************************************************************************************
Monday 27 August 2018  09:11:01 +0000 (0:00:00.050)       0:06:01.868 ********* 
failed: [master-1] (item=deploy) => {"changed": false, "item": "deploy", "msg": "error running kubectl (/usr/local/bin/kubectl --namespace=kube-system delete deploy kube-dns) command (rc=1), out='', err='error: timed out waiting for the condition\n'"}

Does anyone know how to fix this?

@marcstreeter
Contributor

marcstreeter commented Mar 8, 2019

@sys0dm1n you've probably found the solution already. However, I didn't have to delete any of the ca.* files in /etc/kubernetes/ssl/. The files I deleted were apiserver.crt and apiserver.key. After rerunning the cluster.yml playbook, it regenerated the certs, as nwsparks mentioned. In my case I had to do this so that changes to inventory/<NAME_OF_MY_INVENTORY>/group_vars/k8s-cluster/k8s-cluster.yml (adding my external IP to supplementary_addresses_in_ssl_keys) took effect.
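A minimal shell sketch of the workaround described in the comments above, for clusters deployed before Kubespray switched to kubeadm. The exact certificate file names can differ between Kubespray versions, and moving the files aside (instead of deleting them outright) is an assumption made here for safety:

    # on each master node: move aside only the apiserver cert/key, never the ca.* files
    sudo mkdir -p /root/cert-backup
    sudo mv /etc/kubernetes/ssl/apiserver.crt /etc/kubernetes/ssl/apiserver.key /root/cert-backup/

    # from the Ansible control host: re-run the cluster playbook so the certs are
    # regenerated with the updated SANs and the apiserver containers are restarted
    ansible-playbook -b -i inventory cluster.yml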

@ykfq

ykfq commented Apr 9, 2019

If you delete the apiserver certs from /etc/kubernetes/ssl on the master and re-run the cluster.yml playbook, it will regenerate them and then restart the apiserver containers.

This doesn't work anymore because Kubespray made kubeadm the default deployment mode in v2.8.0; the certificate regeneration tasks never run when kubeadm_enabled is true. Have a look at this comment: #2343 (comment)

@woopstar
Member

woopstar commented Apr 9, 2019

Has the kubeadm_enabled var been completely removed?

@ykfq

ykfq commented Apr 9, 2019

Has the kubeadm_enabled var been completely removed?

Not yet, but it will be in v2.9.0; from then on, kubeadm is the only deployment mode.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 8, 2019
@servo1x

servo1x commented Jul 28, 2019

How are we supposed to regenerate certs? Does anyone have a recipe?

Edit: running the steps outlined in kubernetes/kubeadm#1447 (comment) on the master and then running upgrade_cluster.yml resolved my issue.
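A sketch of the kind of steps that linked comment describes, for kubeadm-based deployments (Kubespray v2.8+). This assumes kubeadm v1.13+ syntax; the extra SAN values are placeholders, and you may additionally need to pass your cluster's kubeadm config so the default SANs are preserved:

    # on each master: back up the current apiserver cert and key
    sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.old
    sudo mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.old

    # regenerate the apiserver cert with the additional SANs (placeholder values)
    sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=lb.example.com,192.0.2.10

    # verify the new SANs before restarting the apiserver
    sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'

    # then, as noted above, re-running upgrade_cluster.yml reconciles the cluster
    ansible-playbook -b -i inventory upgrade_cluster.yml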

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 27, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sickboy93

Hello, I had a quite similar issue where, in general, the PKI certificates were not created. I had some configuration files left over from a previous cluster; after removing all the "dirty" ones, I managed to create a new cluster (bare metal installation).
