
Using --profile with kubeadm causes kubeadm init: Process exited with status 1 #2574

Closed
kevindrosendahl opened this issue Feb 24, 2018 · 17 comments
Labels
  • area/profiles: issues related to profile handling
  • co/kubeadm: issues relating to kubeadm
  • co/virtualbox
  • kind/bug: categorizes issue or PR as related to a bug
  • lifecycle/frozen: indicates that an issue or PR should not be auto-closed due to staleness
  • triage/obsolete: bugs that no longer occur in the latest stable release

Comments

@kevindrosendahl

Is this a BUG REPORT or FEATURE REQUEST? (choose one): bug report

Please provide the following details:

Environment:

  • minikube version: v0.25.0
  • OS (e.g. from /etc/os-release): macOS 10.13.3 (cat: /etc/os-release: No such file or directory)
  • VM driver: "DriverName": "virtualbox"
  • ISO version: "Boot2DockerURL": "file:///Users/kevinrosendahl/.minikube/cache/iso/minikube-v0.25.1.iso"

What happened: minikube fails to start when passing in a profile

What you expected to happen: minikube successfully starts a cluster with a given profile

How to reproduce it (as minimally and precisely as possible):
minikube start -p repro --bootstrapper kubeadm

Output of minikube logs (if applicable):

$ minikube -p repro logs
-- Logs begin at Sat 2018-02-24 00:55:03 UTC, end at Sat 2018-02-24 00:59:07 UTC. --
-- No entries --

Anything else we need to know:
Here are the logs of the command:

$ minikube start -p repro --bootstrapper kubeadm
Starting local Kubernetes v1.9.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0223 17:13:39.910248   90965 start.go:276] Error starting cluster:  kubeadm init error running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --skip-preflight-checks: Process exited with status 1

Rerunning kubeadm init shows that it's failing to taint/label the node:

$ sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/var/lib/localkube/certs/"
[kubeconfig] Using existing up-to-date KubeConfig file: "admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 10.503617 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node repro as master by adding a label and a taint
timed out waiting for the condition

It appears the underlying issue is that the kubelet fails to register the node with the apiserver in the first place:

$ journalctl -u kubelet -f
-- Logs begin at Sat 2018-02-24 00:55:03 UTC. --
Feb 24 01:01:43 repro kubelet[3386]: I0224 01:01:43.604083    3386 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Feb 24 01:01:43 repro kubelet[3386]: I0224 01:01:43.620252    3386 kubelet_node_status.go:82] Attempting to register node minikube
Feb 24 01:01:43 repro kubelet[3386]: E0224 01:01:43.622745    3386 kubelet_node_status.go:106] Unable to register node "minikube" with API server: nodes "minikube" is forbidden: node "repro" cannot modify node "minikube"

It looks like the kubelet is being passed --hostname-override=minikube, which I believe causes it to try to register the node as minikube instead of repro, even though it is authenticating as the user system:node:repro.
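A sketch of how one could confirm the mismatch from the host; the exact paths and field names inside the VM are assumptions on my part, not verified output:

$ # Node name the kubelet will register under (look for --hostname-override=minikube)
$ minikube ssh -p repro "ps aux | grep [k]ubelet"
$ # Node name kubeadm was told to label/taint (field name assumed)
$ minikube ssh -p repro "sudo grep -i name /var/lib/kubeadm.yaml"

If the kubelet advertises minikube while kubeadm waits for a node named repro, the [markmaster] step never finds its node and times out, which matches the output above.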

Also, FWIW, this does not seem to affect the localkube bootstrapper.

Hope this helps; let me know if there's any more information I can provide.

@r2d4 r2d4 added kind/bug Categorizes issue or PR as related to a bug. co/kubeadm Issues relating to kubeadm labels Mar 5, 2018
@asbjornu

Seems related to #2493.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 12, 2018
@carolynvs

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 12, 2018
@carolynvs

This is still a problem on the latest release (0.27), and so far I haven't found any workaround, rendering profiles unusable.

On 0.25 I would at least see an error message (as in the OP); now on 0.27 it just hangs at the same spot, minus that last log line.

@carolynvs

After updating VirtualBox to the latest patch release, I see this in the logs:

[markmaster] Will mark node cats as master by adding a label and a taint
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
.: Process exited with status 1
Full logs:

$ minikube start --vm-driver=virtualbox     --kubernetes-version=v1.9.6 --profile cats
Starting local Kubernetes v1.9.6 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0612 09:19:27.935610   17915 start.go:276] Error starting cluster:  kubeadm init error sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  running command: : running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
 output: [init] Using Kubernetes version: v1.9.6
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[certificates] Using the existing ca certificate and key.
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: exit status 1
[certificates] Using the existing apiserver certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/var/lib/localkube/certs/"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 34.002212 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node cats as master by adding a label and a taint
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
.: Process exited with status 1

@btalbot commented Jun 12, 2018

... , and so far I haven't found any workaround, rendering profiles unusable.

@carolynvs I've been using --bootstrapper=localkube as a workaround, which enables profiles to work normally for me. Minikube does emit a deprecation warning, so hopefully they fix these profile issues with the new bootstrapper before removing support for localkube!
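For reference, the workaround command looks like this (the profile name is illustrative; localkube is deprecated and prints a warning):

$ minikube start -p myprofile --bootstrapper=localkube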

@carolynvs commented Jun 12, 2018

@btalbot Does that work well with RBAC? The reason I specify kubeadm is that, a few releases back, RBAC stopped working properly on localkube for me.

EDIT: See #2510 for more info on the RBAC issue.

@btalbot commented Jun 12, 2018

I don't enable RBAC on my local minikube cluster, so I can't really say. Sorry.

@javajon commented Jun 14, 2018

Getting the same error in version 0.28.0; see #2818.

Yes @btalbot, using the deprecated --bootstrapper localkube appears to avoid the problem.

@tstromberg (Contributor) commented Aug 17, 2018

I volunteer as tribute!

@tstromberg tstromberg self-assigned this Aug 20, 2018
@tstromberg (Contributor) commented Sep 6, 2018

I tried this with minikube v0.28.2 (July 23rd), Debian 9, and kvm2 today, and saw no errors:

minikube --logtostderr --loglevel 0 -v 8 start --vm-driver=kvm2 \
  -p xxxxxxxxxxxxxxxxxx --bootstrapper kubeadm

@balopat has seen a similar issue with Mac OS X & HyperKit, so I'll try next to reproduce this using VirtualBox on Linux & VirtualBox on Mac OS X.

@alisondy

Still an issue. I'm getting the same error in 0.28.0 and, as @javajon said, the localkube bootstrapper avoids the problem.

What is the current status of fixing this?

@btalbot commented Sep 10, 2018

@alisondy

What is the current status of fixing this?

It's fixed in 0.28.2, which was released in July, so use that version.

@javajon commented Sep 11, 2018

@btalbot I'm still reproducing profile problems with version 0.28.2. See also #2818.

$ minikube profile default
minikube profile was successfully set to minikube
/c/dev/core/kubernetes-tools (master)

$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
/c/dev/core/kubernetes-tools (master)

$ minikube profile experiment
minikube profile was successfully set to experiment

$ minikube config view

  • profile: experiment
  • registry: true
  • WantReportError: true
  • efk: true
  • ingress: true
  • metrics-server: true

$ minikube start --profile experiment
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
E0911 18:38:49.974638 84828 start.go:174] Error starting host: Error loading existing host. Please try running [minikube delete], then run [minikube start] again.: Error loading host from store: open C:\Users\User\.minikube\machines\experiment\config.json: The system cannot find the file specified..

Retrying.
E0911 18:38:49.977567 84828 start.go:180] Error starting host: Error loading existing host. Please try running [minikube delete], then run [minikube start] again.: Error loading host from store: open C:\Users\User\.minikube\machines\experiment\config.json: The system cannot find the file specified.

$ minikube version
minikube version: v0.28.2
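If anyone hits the same config.json error, here is a sketch of the cleanup the message itself suggests, scoped to the affected profile (scoping both commands with -p is my assumption about the intended usage):

$ minikube delete -p experiment
$ minikube start -p experiment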

@btalbot commented Sep 11, 2018

Looks like a different error to me, since the output is so different. The OP shows the error as:

E0223 17:13:39.910248   90965 start.go:276] Error starting cluster:  kubeadm init error running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --skip-preflight-checks: Process exited with status 1

which is what was fixed in 0.28.2. Maybe you've run into a different issue?

@tstromberg tstromberg changed the title using a profile and kubeadm bootstrapper with minikube start no longer works Using --profile with kubeadm causes kubeadm init: Process exited with status 1 Sep 19, 2018
@tstromberg tstromberg added co/virtualbox area/profiles issues related to profile handling and removed drivers/virtualbox/osx labels Sep 19, 2018
@tstromberg tstromberg removed their assignment Nov 6, 2018
@tstromberg (Contributor)

I believe we accidentally fixed this. I was able to start up two VMs on VirtualBox & Mac OS X with minikube v0.33.1:

$ minikube start -p repro --bootstrapper kubeadm
$ minikube start -p repro2 --bootstrapper kubeadm
$ minikube ssh -p repro hostname
$ minikube ssh -p repro2 hostname

I do note the following warning in kubeadm land, however:

I0122 16:25:30.297362    7442 utils.go:224] ! 	[WARNING Hostname]: hostname "minikube" could not be reached
I0122 16:25:30.297392    7442 utils.go:224] ! 	[WARNING Hostname]: hostname "minikube": lookup minikube on 10.0.2.3:53: no such host
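For anyone re-testing, a sketch of how to check that each profile registered under its own node name rather than "minikube"; it assumes the kubeconfig context names match the profile names:

$ kubectl --context repro get nodes
$ kubectl --context repro2 get nodes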

@tstromberg tstromberg added the triage/obsolete Bugs that no longer occur in the latest stable release label Jan 23, 2019
@tstromberg (Contributor)

I'm going to close this as obsolete. Please reopen with more details if you are still experiencing problems with profiles on VirtualBox.
