cluster certificate not generated by default? #546
Comments
@joesonw you can pass any extra arguments to any service using extra_args, for example:
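A minimal cluster.yml sketch, using the controller-manager signing flags that come up later in this thread:

# cluster.yml sketch: extra_args lets you pass arbitrary flags to any
# RKE-managed service; here, the CSR signing flags for kube-controller.
services:
  kube-controller:
    extra_args:
      cluster-signing-cert-file: /etc/kubernetes/ssl/kube-ca.pem
      cluster-signing-key-file: /etc/kubernetes/ssl/kube-ca-key.pem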
Are there any plans to make it configurable in the UI as well?
@andrexus opened an issue in the rancher/rancher repo, thanks for the suggestion.
Where does this issue continue? I am trying to submit a CSR against my API, but certificates are not issued. How can I solve this, either via the UI or via kubectl? Thanks. Edit: How can I get my certificates into the rancher container and reference them in the configuration?
The signer is provided by the controller-manager; you need to add the cert and its key to the kube-controller extra_args:
--- cluster.yml 2019-07-22 06:00:29.188527946 +0000
+++ cluster.new.yml 2019-07-22 06:00:18.681650925 +0000
@@ -56,6 +56,9 @@
kube-controller:
cluster_cidr: 10.233.64.0/18
service_cluster_ip_range: 10.233.0.0/18
+ extra_args:
+ cluster-signing-cert-file: /etc/kubernetes/ssl/kube-ca.pem
+ cluster-signing-key-file: /etc/kubernetes/ssl/kube-ca-key.pem
scheduler:
kubelet:
cluster_domain: cluster.local
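After updating cluster.yml with the flags above, re-running rke up should apply them. A rough way to confirm CSRs are actually being issued afterwards (a sketch, assuming kubectl access; the CSR name below is a placeholder for whatever your workload creates):

rke up

# Approved CSRs should now also get a certificate issued
# (CONDITION shows "Approved,Issued" instead of just "Approved"):
kubectl get csr
kubectl certificate approve <name-of-a-pending-csr>
kubectl get csr <name-of-a-pending-csr> -o jsonpath='{.status.certificate}' | base64 -d | head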
Hi! I am trying to get Robin storage working on a cluster deployed with Rancher, and I have been told that I need to "enable the certificate controller manager on the k8s cluster". Is that pair of args what this is about? Can I do this on an existing cluster? And how do I generate the certificate? Thanks
Could the CSR signing feature be activated by default in a fresh RKE install? Thanks
We are running Rancher 2.4.5 with K8s 1.18.10 and encountered this issue today. This patch from @qrtt1 was the solution. Note that this patch has to go in the cluster YAML.
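For a Rancher-managed cluster, the equivalent place is presumably the RKE config section of the cluster's YAML ("Edit as YAML"); the rancher_kubernetes_engine_config wrapper below is an assumption about Rancher's layout, not something confirmed in this thread:

# Sketch: the same flags inside a Rancher-managed cluster's YAML (assumed layout)
rancher_kubernetes_engine_config:
  services:
    kube-controller:
      extra_args:
        cluster-signing-cert-file: /etc/kubernetes/ssl/kube-ca.pem
        cluster-signing-key-file: /etc/kubernetes/ssl/kube-ca-key.pem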
RKE version:
v0.1.6-rc2
Docker version: (docker version, docker info preferred)
Operating system and kernel: (cat /etc/os-release, uname -r preferred)
Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)
KVM
cluster.yml file:
Steps to Reproduce:
rke up
Results:
Success, but when I was installing Istio with the automatic sidecar injector, it appeared that the controller-manager signer was not enabled by default. https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/#a-note-to-cluster-administrators
I am far from a veteran of k8s, and these configurations confused me a bit.
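For anyone hitting the same symptom, a rough check (a sketch, assuming kubectl access and RKE's usual container name for the controller-manager; not taken from this thread):

# Without the signing flags, approved CSRs never reach "Approved,Issued":
kubectl get csr

# On an RKE controlplane node, check whether the controller-manager container
# was started with the cluster-signing flags at all:
docker inspect kube-controller-manager | grep cluster-signing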
How would I be able to provide such a cert/key?
I tried to generate my own cert/key, put them on the master node, tuned my cluster.yml, and passed these extra_args to kube-controller. It did start up, but it then gave me the following error when launching new pods.