Minikube 0.26.0 fails to start with RBAC enabled #2712

Closed · wfhartford opened this issue Apr 11, 2018 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@wfhartford

Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report

Please provide the following details:

Environment:

minikube version: v0.26.0

OS:
NAME="Linux Mint"
VERSION="18.3 (Sylvia)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 18.3"
VERSION_ID="18.3"
HOME_URL="http://www.linuxmint.com/"
SUPPORT_URL="http://forums.linuxmint.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/linuxmint/"
VERSION_CODENAME=sylvia
UBUNTU_CODENAME=xenial

VM driver:
    "DriverName": "kvm2",

ISO version
        "Boot2DockerURL": "file:///home/wesley/.minikube/cache/iso/minikube-v0.26.0.iso",
        "ISO": "/home/wesley/.minikube/machines/minikube/boot2docker.iso",

What happened:
The minikube start command (minikube start --vm-driver kvm2 --extra-config=apiserver.Authorization.Mode=RBAC) hangs for a very long time at "Starting cluster components...", then outputs some error logs; see https://gist.github.com/wfhartford/aa3b701199ee307522b244f008da3b65

What you expected to happen:
Minikube should start with RBAC enabled. Version v0.25.2 behaves as expected.

How to reproduce it (as minimally and precisely as possible):
minikube start --vm-driver kvm2 --extra-config=apiserver.Authorization.Mode=RBAC
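
(For completeness: on v0.26.x it may also be possible to work around this by explicitly selecting the old localkube bootstrapper, which still understands the old key format. This is untested here, and localkube is deprecated in this release, so the corrected kubeadm-style flag discussed below is the better fix.)

$ minikube start --vm-driver kvm2 --bootstrapper localkube --extra-config=apiserver.Authorization.Mode=RBAC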

Output of minikube logs (if applicable):
minikube-logs.txt.gz

Anything else we need to know:

@the-redback

Since minikube v0.26.0, kubeadm is the default bootstrapper for minikube.

For kubeadm, the extra-config key:value format is slightly different; it is described in the minikube documentation.

In this case, the correct command is:

$ minikube start \
    --vm-driver kvm2 \
    --extra-config=apiserver.authorization-mode=RBAC

Furthermore, kubeadm's default authorization mode is already set to Node and RBAC, so you can skip the extra-config flag entirely for an RBAC-enabled minikube:

$ minikube start --vm-driver kvm2
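
A quick way to confirm which authorization mode the apiserver actually came up with (a sketch, assuming the standard kubeadm manifest path inside the VM; minikube ssh accepts a command to run):

$ minikube ssh -- grep authorization-mode /etc/kubernetes/manifests/kube-apiserver.yaml

A commonly used heuristic is also to check that the RBAC API group is being served (assuming kubectl is pointed at the minikube context):

$ kubectl api-versions | grep rbac.authorization.k8s.io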

@wfhartford (Author)

Thank you, that revised command works for me.

The documentation you linked to includes the older command in the examples section.

@byteSamurai

I ran into the same error. It's strange how an obsolete option can block the cluster setup. 🤔

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 29, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 28, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@chrissound (Contributor) commented Mar 9, 2020

TBH this should not be closed.

Ideally minikube should validate the parameter and report an error if it is not valid. And secondly, the logs surely should say something more descriptive than just "node "minikube" not found".

I'm on:

$ minikube version
minikube version: v1.2.0

And the logs:

Mar 09 08:43:18 minikube kubelet[2910]: E0309 08:43:18.301740    2910 kubelet.go:2248] node "minikube" not found
Mar 09 08:43:18 minikube kubelet[2910]: E0309 08:43:18.402011    2910 kubelet.go:2248] node "minikube" not found
Mar 09 08:43:18 minikube kubelet[2910]: E0309 08:43:18.431584    2910 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 09 08:43:18 minikube kubelet[2910]: E0309 08:43:18.502177    2910 kubelet.go:2248] node "minikube" not found
Mar 09 08:43:18 minikube kubelet[2910]: E0309 08:43:18.602342    2910 kubelet.go:2248] node "minikube" not found
Mar 09 08:43:18 minikube kubelet[2910]: I0309 08:43:18.630619    2910 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Mar 09 08:43:18 minikube kubelet[2910]: E0309 08:43:18.631544    2910 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 09 08:43:18 minikube kubelet[2910]: I0309 08:43:18.632478    2910 kubelet_node_status.go:72] Attempting to register node minikube
Mar 09 08:43:18 minikube kubelet[2910]: E0309 08:43:18.702483    2910 kubelet.go:2248] node "minikube" not found
Mar 09 08:43:18 minikube kubelet[2910]: E0309 08:43:18.802778    2910 kubelet.go:2248] node "minikube" not found
Mar 09 08:43:18 minikube kubelet[2910]: E0309 08:43:18.827334    2910 kubelet_node_status.go:94] Unable to register node "minikube" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
Mar 09 08:43:18 minikube kubelet[2910]: E0309 08:43:18.903000    2910 kubelet.go:2248] node "minikube" not found
Mar 09 08:43:19 minikube kubelet[2910]: E0309 08:43:19.003120    2910 kubelet.go:2248] node "minikube" not found
Mar 09 08:43:19 minikube kubelet[2910]: E0309 08:43:19.027549    2910 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
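
The kubelet lines above are downstream symptoms: the kubelet cannot reach the apiserver on 127.0.0.1:8443, so the root cause is usually in the apiserver container itself. A rough way to pull its logs (assuming the Docker runtime inside the VM; the container ID placeholder must be filled in from the ps output):

$ minikube ssh
$ docker ps -a --filter name=kube-apiserver
$ docker logs <kube-apiserver-container-id>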
