cluster certificate not generated by default? #546

Closed
joesonw opened this issue Apr 25, 2018 · 8 comments

joesonw commented Apr 25, 2018

RKE version:
v0.1.6-rc2
Docker version: (docker version, docker info preferred)

 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: <unknown>
 Go version:      go1.8.3
 Git commit:      774336d/1.13.1
 Built:           Wed Mar  7 17:06:16 2018
 OS/Arch:         linux/amd64
 Experimental:    false

Operating system and kernel: (cat /etc/os-release, uname -r preferred)

NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"

Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)
KVM
cluster.yml file:

# If you intend to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 172.16.10.178
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: k8s-1
  user: docker
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: "" 
  labels: {}
- address: 172.16.10.179
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: k8s-2
  user: docker
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: "" 
  labels: {}
- address: 172.16.10.180
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: k8s-3
  user: docker 
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: "" 
  labels: {}
services:
  etcd:
    image: rancher/coreos-etcd:v3.1.12
    extra_args: {}
    extra_binds: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
  kube-api:
    image: rancher/hyperkube:v1.10.1
    #image: rancher/k8s:v1.10.0-rancher1-1
    extra_args:
      admission-control: NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority
    extra_binds: []
    service_cluster_ip_range: 10.43.0.0/16
    pod_security_policy: false
  kube-controller:
    image: rancher/hyperkube:v1.10.1
    #image: rancher/k8s:v1.10.0-rancher1-1
    extra_args: {}
    extra_binds: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: rancher/hyperkube:v1.10.1
    #image: rancher/k8s:v1.10.0-rancher1-1
    extra_args: {}
    extra_binds: []
  kubelet:
    image: rancher/hyperkube:v1.10.1
    #image: rancher/k8s:v1.10.0-rancher1-1
    extra_args: {}
    extra_binds: []
    cluster_domain: cluster.local
    infra_container_image: rancher/pause-amd64:3.1
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
  kubeproxy:
    image: rancher/hyperkube:v1.10.1
    #image: rancher/k8s:v1.10.0-rancher1-1
    extra_args: {}
    extra_binds: []
network:
  plugin: calico
  options: {}
authentication:
  strategy: x509
  options: {}
  sans: []
addons: ""
addons_include: []
system_images:
  etcd: ""
  alpine: ""
  nginx_proxy: ""
  cert_downloader: ""
  kubernetes_services_sidecar: ""
  kubedns: ""
  dnsmasq: ""
  kubedns_sidecar: ""
  kubedns_autoscaler: ""
  kubernetes: ""
  flannel: ""
  flannel_cni: ""
  calico_node: ""
  calico_cni: ""
  calico_controllers: ""
  calico_ctl: ""
  canal_node: ""
  canal_cni: ""
  canal_flannel: ""
  wave_node: ""
  weave_cni: ""
  pod_infra_container: ""
  ingress: ""
  ingress_backend: ""
  dashboard: ""
  heapster: ""
  grafana: ""
  influxdb: ""
  tiller: ""
ssh_key_path: /Users/Joesonw/funlearnworld/chinaworld/devops/k8s_cluster_private_key
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
cluster_name: ""
cloud_provider:
  name: ""
  cloud_config: {}
prefix_path: ""

Steps to Reproduce:
rke up
Results:
Success. But when I was installing Istio with the automatic sidecar injector, it appeared that the controller-manager signer was not enabled by default: https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/#a-note-to-cluster-administrators

I am far from a Kubernetes veteran, and these configurations did confuse me a bit.

How would I be able to provide such a cert/key?

I tried generating my own cert/key, putting them on the master node, tuning my cluster.yml, and passing these extra_args to kube-controller. It did start up, but it then gave me the following error when launching new pods:

Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"sleep-86f6b99f94", UID:"8be4a994-482c-11e8-b466-00163e10651f", APIVersion:"extensions", ResourceVersion:"3621", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling admission webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s: x509: certificate signed by unknown authority
@galal-hussein
Contributor

@joesonw you can pass any extra arguments to any service using extra_args, for example:

....
services:
  kube-api:
    image: rancher/hyperkube:v1.10.1
    extra_args:
      cluster-signing-cert-file: <path-of-signing-cert>
      cluster-signing-key-file: <path-of-signing-key>
.....    
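
A quick way to confirm that the flags actually reached the controller-manager after running rke up is to inspect the container on a controlplane node. This is just a sketch: it assumes RKE's usual container name kube-controller-manager and that you have shell access to the node.

# print the controller-manager's arguments and filter for the signing flags
docker inspect kube-controller-manager \
  --format '{{ range .Args }}{{ println . }}{{ end }}' | grep cluster-signing

If the change was applied, you should see the two --cluster-signing-* arguments with the paths you configured.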

@andrexus

Any plans to make this configurable in the UI as well?

@galal-hussein
Contributor

@andrexus I opened an issue in the rancher/rancher repo, thanks for the suggestion.

@cgebe

cgebe commented Jul 20, 2018

Where does this issue continue? I am trying to submit a CSR against my API server, but certificates are not issued. How can I solve this, either via the UI or via kubectl? Thanks.

Edit: How can I get my certificates into the Rancher container and refer to them in cluster.yml?

@qrtt1

qrtt1 commented Jul 22, 2019

The signer is provided by the controller-manager; you need to add the cert and its key to kube-controller:

--- cluster.yml	2019-07-22 06:00:29.188527946 +0000
+++ cluster.new.yml	2019-07-22 06:00:18.681650925 +0000
@@ -56,6 +56,9 @@
   kube-controller:
     cluster_cidr: 10.233.64.0/18
     service_cluster_ip_range: 10.233.0.0/18
+    extra_args:
+      cluster-signing-cert-file: /etc/kubernetes/ssl/kube-ca.pem
+      cluster-signing-key-file: /etc/kubernetes/ssl/kube-ca-key.pem
   scheduler:
   kubelet:
     cluster_domain: cluster.local
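
To check that the signer is actually working after re-running rke up, a minimal smoke test like the following can help. This is my own sketch, not part of the patch: it assumes kubectl access, openssl, and the certificates.k8s.io/v1beta1 CSR API used by the Kubernetes versions discussed in this thread; the names test.key, test.csr, and signer-smoke-test are arbitrary.

# generate a throwaway key and certificate signing request
openssl req -new -newkey rsa:2048 -nodes -keyout test.key -out test.csr -subj "/CN=signer-smoke-test"

# submit it to the cluster as a CertificateSigningRequest object
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: signer-smoke-test
spec:
  request: $(base64 < test.csr | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF

# approve the request; the controller-manager should then populate status.certificate
kubectl certificate approve signer-smoke-test
kubectl get csr signer-smoke-test -o jsonpath='{.status.certificate}' | base64 -d | openssl x509 -noout -issuer

# clean up
kubectl delete csr signer-smoke-test

If the signing flags are missing, the request stays approved but never receives a certificate.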

@vitobotta

Hi! I am trying to get Robin storage working on a cluster deployed with Rancher, and I have been told that I need to "enable certificate controller manager on the k8s cluster". Are those two args what this is about? Can I do this on an existing cluster? And how do I generate the certificate? Thanks.

@titou10titou10

Could the CSR signing feature be activated by default in a fresh RKE install? Thanks.

@themowski

The signer is provided by the controller-manager; you need to add the cert and its key to kube-controller:

--- cluster.yml	2019-07-22 06:00:29.188527946 +0000
+++ cluster.new.yml	2019-07-22 06:00:18.681650925 +0000
@@ -56,6 +56,9 @@
   kube-controller:
     cluster_cidr: 10.233.64.0/18
     service_cluster_ip_range: 10.233.0.0/18
+    extra_args:
+      cluster-signing-cert-file: /etc/kubernetes/ssl/kube-ca.pem
+      cluster-signing-key-file: /etc/kubernetes/ssl/kube-ca-key.pem
   scheduler:
   kubelet:
     cluster_domain: cluster.local

We are running Rancher 2.4.5 with K8s 1.18.10 and encountered this issue today. The patch from @qrtt1 above was the solution. Note that the patch has to go in the kube-controller section of the YAML; we accidentally put it in the wrong section (several sections accept extra_args options) and lost a lot of time because it didn't work.
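
For anyone else hitting the same placement mistake, the fully expanded form in cluster.yml looks roughly like this (a sketch based on the diff above; the two options are kube-controller-manager flags, so putting them under any other service's extra_args has no effect):

services:
  kube-controller:
    extra_args:
      cluster-signing-cert-file: /etc/kubernetes/ssl/kube-ca.pem
      cluster-signing-key-file: /etc/kubernetes/ssl/kube-ca-key.pem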
