Certs not being regenerated when supplementary_addresses changed #2164
Comments
I'm having a similar issue with
If you delete the apiserver certs from /etc/kubernetes/ssl on the master and re-run the cluster.yml playbook, it will regenerate them and then restart the apiserver containers.
We have a similar issue: we delete the certs first and let Kubespray regenerate them with the updated load-balancer name in the SAN.
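To see whether a regenerated certificate actually picked up the new SAN, you can dump it with openssl. The sketch below is self-contained: it first generates a throwaway certificate with a SAN (so it can run anywhere), then inspects it. On a real master you would point the second command at /etc/kubernetes/ssl/apiserver.pem (or /etc/kubernetes/pki/apiserver.crt on kubeadm-based deploys) instead; the 10.0.0.100 address is a placeholder.

```shell
# Demo: create a throwaway cert with SANs, then list them.
# Requires OpenSSL 1.1.1+ for the -addext option.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:kubernetes,IP:10.0.0.100" 2>/dev/null

# The actual check: the supplementary load-balancer address should appear
# under "X509v3 Subject Alternative Name".
openssl x509 -in /tmp/demo-cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```

If the address is missing from the SAN list, the cert was not regenerated.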
This is not working for me. When I delete ca.pem and ca-key.pem from /etc/kubernetes/ssl, it throws an error:
Does anyone know how to fix this?
@sys0dm1n you've probably found the solution already - however, I didn't have to delete any of the
This doesn't work any more because Kubespray has used kubeadm as the default deployment mode since v2.8.0, so the regenerate-certs tasks would never run when
Has the kubeadm_enabled var been totally removed?
Not yet, but it will be in v2.9.0; from then on, kubeadm is the only deployment mode.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
How are we supposed to regenerate certs? Does anyone have a recipe? Edit: running the steps outlined in kubernetes/kubeadm#1447 (comment) on the master and then running upgrade_cluster.yml resolved my issue.
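The approach from that linked comment can be sketched roughly as follows. This is a hedged sketch, not Kubespray's own mechanism: the backup directory and the 10.0.0.100 load-balancer address are placeholders, and it assumes a kubeadm-managed control plane (kubeadm v1.13+, where `kubeadm init phase certs` exists). The guard makes it a no-op on a machine without kubeadm.

```shell
# Sketch: regenerate the apiserver cert with an extra SAN on a kubeadm master.
# Placeholder values: the backup dir and the load-balancer IP 10.0.0.100.
msg=""
if command -v kubeadm >/dev/null 2>&1; then
  # Move the old cert/key aside; kubeadm only creates certs that are missing.
  mkdir -p /root/pki-backup
  mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /root/pki-backup/
  # Recreate the apiserver cert with the load balancer as an additional SAN.
  kubeadm init phase certs apiserver --apiserver-cert-extra-sans=10.0.0.100
  msg="apiserver cert regenerated"
else
  msg="kubeadm not found; nothing to do on this machine"
fi
echo "$msg"
```

After regenerating, restart the kube-apiserver (or run upgrade_cluster.yml as above) so it picks up the new certificate.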
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Hello, I had a quite similar issue in which, in general, PKI certificates were not created. I had some configuration files from a previous cluster; after removing all the "dirty" ones, I managed to create a new cluster (bare-metal installation).
Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug report
Environment:

- Cloud provider or hardware configuration: bare metal
- OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`):
  Linux 3.16.0-4-amd64 x86_64
  PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
  NAME="Debian GNU/Linux"
  VERSION_ID="8"
  VERSION="8 (jessie)"
  ID=debian
  HOME_URL="http://www.debian.org/"
  SUPPORT_URL="http://www.debian.org/support"
  BUG_REPORT_URL="https://bugs.debian.org/"
- Version of Ansible (`ansible --version`): 2.4.1.0
- Kubespray version (commit) (`git rev-parse --short HEAD`): 2.3.0 (ba0a03a8)
- Network plugin used: calico
- Command used to invoke ansible: `ansible-playbook -b -i inventory cluster.yml`
- Output of ansible run: succeeded
Anything else do we need to know:
I added the IPs of my external load balancer (HAProxy on three hosts) to the variable `supplementary_addresses_in_ssl_keys` and successfully re-ran the cluster.yml playbook. The config file /etc/kubernetes/openssl.conf has been updated correctly, but the apiserver certificates have not been regenerated. Is this the intended behaviour? And if so, how can I force certificate regeneration?
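For reference, the variable is set in the inventory group vars. This is a minimal sketch, assuming a typical Kubespray inventory layout (the path and the IPs below are placeholders for the three HAProxy hosts):

```yaml
# e.g. inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# (the exact path varies between Kubespray versions)
# Extra addresses to add to the apiserver certificate SANs.
supplementary_addresses_in_ssl_keys:
  - 10.0.0.101
  - 10.0.0.102
  - 10.0.0.103
```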