--extra-config=kubelet.authorization-mode=AlwaysAllow ignored #2342
Comments
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
This isn't restricted to …
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
I can confirm that this option seems to do nothing: …
I'm confused here. Why are you checking …? If you consider the start command …, and then check the kubelet's process parameters, you'll see the … Checking the running configuration, we get that it's still using …, even though the … This is one case, related to the kubelet component.

The … But we still get the … There is no overriding message displayed at start, as happened previously with the kubelet component. This is another case, related to the API server component.

If someone confirms that the distinction I made between the kubelet and apiserver extra options is correct, then the override code is not working at all, at least for these two components. I could not reproduce the duplicate 'authorization-mode' case with the settings below. It seems that from a precedence problem we have gotten to no setting or overriding of options at all.

My settings: OS: … / VM driver: … / ISO version: …
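A sketch of those two checks, assuming the default minikube profile and a VM-based driver:

# Inspect the kubelet's actual process parameters from inside the VM:
minikube ssh "ps aux | grep [k]ubelet"

# Inspect the flags the running API server's static pod received:
kubectl -n kube-system get pod kube-apiserver-minikube -o yaml | grep authorization-mode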
We're running into the same problem, i.e. --extra-config=apiserver.authorization-mode=AlwaysAllow does nothing.
same here. minikube …
workaround: …
+1. Having this problem as well.
Hey @carlosmkb how do you apply your workaround? Do you start up your cluster …?
yes @kwojcicki that's what I did
Thanks @carlosmkb 😄
/remove-lifecycle stale
Same here: minikube version: v1.4.0
I can't say exactly when this was fixed, but it no longer appears with v1.9.0-beta.1 at least: …
My apologies for it taking so long to get this resolved.
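For anyone re-verifying on a newer release, a minimal sketch along the lines of the reports above (the pod name assumes the default profile):

minikube start --extra-config=apiserver.authorization-mode=AlwaysAllow
kubectl -n kube-system get pod kube-apiserver-minikube -o yaml | grep authorization-mode
# on fixed versions this should show a single authorization-mode entry, not a duplicated default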
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Please provide the following details:
Environment:

OS: Ubuntu 17.10
Minikube version (use minikube version): v0.24.1
VM driver (use cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
ISO version (use cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v0.23.6.iso

The above can be generated in one go with the following commands (can be copied and pasted directly into your terminal):

minikube version
cat ~/.minikube/machines/minikube/config.json | grep DriverName
cat ~/.minikube/machines/minikube/config.json | grep -i ISO
What happened:
Running minikube with:
minikube --bootstrapper=kubeadm --extra-config=kubelet.authorization-mode=AlwaysAllow --kubernetes-version=v1.8.5 start
Results in RBAC still being active.
Excerpt from:

kubectl get pods -n kube-system kube-apiserver-minikube -o yaml

shows authorization-mode being passed twice. The default value is appended at the end, which I suspect takes precedence over the overridden value.

Running a kubectl command from a pod inside the cluster shows: … This means that RBAC is active.
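To make that in-cluster check concrete, a sketch run from a shell inside any pod (uses the pod's mounted default service account token):

# With RBAC active the API returns 403 Forbidden for the default service account;
# with AlwaysAllow the pod list comes back.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api/v1/namespaces/default/pods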
What you expected to happen:
RBAC should be turned off.
How to reproduce it (as minimally and precisely as possible):
Run minikube as follows:
minikube --bootstrapper=kubeadm --extra-config=kubelet.authorization-mode=AlwaysAllow --kubernetes-version=v1.8.5 start
Verify that RBAC is still active (it should have been turned off).
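One way to do that verification, as a sketch (run from the host; impersonates the default service account):

# "no" means RBAC is still enforcing; under AlwaysAllow this would answer "yes".
kubectl auth can-i list pods --as=system:serviceaccount:default:default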
Output of minikube logs (if applicable): N/A
Anything else we need to know: