Run kubeadm init on a host where etcd was already created with kubeadm #1107
@chuckha could you kindly take a look at this?
I think it makes sense: if the etcd certs were already created and the config is set up for a remote etcd, we probably shouldn't fail on that file check. Then again, the remote/local distinction is kind of misleading here, because the etcd is local to this VM yet external to that particular kubeadm init invocation. 🤔
I think this check may not be necessary, because when using external etcd, the VM does not create an etcd static pod.
@pytimer it sounds like you're trying to do stacked control plane & etcd. We've got special instructions for that: https://kubernetes.io/docs/setup/independent/high-availability/#stacked-control-plane-nodes
@chuckha Thanks for your reply. I have seen this doc for creating an HA cluster, but I want to
Local etcd was designed to do exactly what you're trying to do. I don't think we are going to support using a colocated etcd with the external configuration. Is there some use case we might be missing that would require using external with a colocated control plane / etcd?
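For context, the "external configuration" being discussed is the external etcd section of the kubeadm config. A minimal sketch of what such a config could look like for kubeadm v1.11 (v1alpha2 API) follows; the endpoint addresses and certificate paths are illustrative assumptions, not taken from this issue:

```yaml
# Illustrative sketch only -- not the reporter's actual config.
# Endpoint IPs and cert paths are assumptions for a kubeadm v1.11
# (v1alpha2 API) setup pointing at an externally managed etcd cluster.
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.2
etcd:
  external:
    endpoints:
    - https://10.0.0.11:2379
    - https://10.0.0.12:2379
    - https://10.0.0.13:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```

With this configuration kubeadm does not generate an etcd static pod manifest itself, which is why running it on a host where kubeadm previously created etcd leads to the ambiguity debated in this thread.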
With a small number of virtual machines, etcd and the control plane are installed on the same hosts. In the other case, with more VMs, etcd and the control plane are installed on different hosts. I would like the installation method to be consistent for both cases, connecting to etcd through a VIP. Is this idea correct, or is there another solution?
@pytimer you can install etcd on the same machines as the control plane, but in that case my personal recommendation is to install etcd without using kubeadm. The reason behind this is that you would be using kubeadm for two different scopes on the same machine (installing the external etcd cluster and installing Kubernetes using that external cluster), but this is basically opaque to kubeadm itself. As a consequence, kubeadm will mix all the certificates, manifests, etc. without distinction in the same folders. This can lead to unexpected behaviour like the preflight error (which you can eventually skip), or even more severe problems if you think about what will happen when you run later kubeadm commands against those shared files. Does this make sense to you?
FYI, we are working on a different solution that doesn't use the concept of external etcd but extends the current local etcd: #1123
I created an HA etcd cluster with kubeadm, and then initialized an HA Kubernetes cluster with kubeadm.
I ran kubeadm init --config kubeadm.yaml on those hosts.
But like you said, running it this way can cause problems. Does Kubernetes recommend using local etcd or external etcd?
I read that issue. If etcd and the control plane are installed together, the etcd instances will increase as control-plane nodes are added; is there a problem with this?
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Closing due to inactivity; the stacked HA documents have been cross-verified by multiple parties.
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:14:39Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
OS / kernel (use uname -a):
Linux master1 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
kubeadm configuration file:
What happened?
I created a highly available etcd cluster with kubeadm successfully.
Now I want to init a Kubernetes cluster on the three etcd hosts. When I run
kubeadm init --config kubeadm.yaml
it fails. The error log is below:
What you expected to happen?
I found that the kubeadm code at checks.go#L865 always checks for the etcd static pod file. Should this check be skipped when an external etcd cluster is used?
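As a stop-gap rather than a fix, kubeadm's --ignore-preflight-errors flag (available in v1.11) can skip a named preflight check. The exact check name below is an assumption based on kubeadm's usual FileAvailable-- error naming convention and should be verified against the actual error output:

```shell
# Workaround sketch, not a recommended fix: skip the etcd manifest
# preflight check on a host where kubeadm already created etcd.
# The check name is assumed from kubeadm's error naming convention;
# copy the real name from the preflight error message.
kubeadm init --config kubeadm.yaml \
  --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml
```

Note that, as discussed above, skipping the check does not resolve the underlying issue of kubeadm mixing etcd and control-plane files on the same host.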
How to reproduce it (as minimally and precisely as possible)?
Create a highly available etcd cluster with kubeadm on three hosts.
Run kubeadm init --config kubeadm.yaml on the etcd hosts.
Anything else we need to know?