
--extra-config=kubelet.authorization-mode=AlwaysAllow ignored #2342

Closed
aleerizw-zz opened this issue Dec 19, 2017 · 22 comments
Labels
co/kubeadm Issues relating to kubeadm good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. r/2019q2 Issue was last reviewed 2019q2

Comments

@aleerizw-zz

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Please provide the following details:

Environment: Ubuntu 17.10

Minikube version (use minikube version): v0.24.1

  • OS (e.g. from /etc/os-release): "Ubuntu 17.10 (Artful Aardvark)"
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v0.23.6.iso
  • Install tools:
  • Others:
    The above can be generated in one go with the following commands (can be copied and pasted directly into your terminal):
minikube version
echo "";
echo "OS:";
cat /etc/os-release
echo "";
echo "VM driver:";
grep DriverName ~/.minikube/machines/minikube/config.json
echo "";
echo "ISO version:";
grep -i ISO ~/.minikube/machines/minikube/config.json

What happened:
Running minikube with:
minikube --bootstrapper=kubeadm --extra-config=kubelet.authorization-mode=AlwaysAllow --kubernetes-version=v1.8.5 start
Results in RBAC still being active.

Excerpt from:
kubectl get pods -n kube-system kube-apiserver-minikube -o yaml

spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=AlwaysAllow
    - --secure-port=8443
    - --requestheader-client-ca-file=/var/lib/localkube/certs/front-proxy-ca.crt
    - --proxy-client-key-file=/var/lib/localkube/certs/front-proxy-client.key
    - --insecure-port=0
    - --requestheader-group-headers=X-Remote-Group
    - --advertise-address=192.168.99.102
    - --tls-cert-file=/var/lib/localkube/certs/apiserver.crt
    - --kubelet-client-key=/var/lib/localkube/certs/apiserver-kubelet-client.key
    - --enable-bootstrap-token-auth=true
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --requestheader-username-headers=X-Remote-User
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --service-account-key-file=/var/lib/localkube/certs/sa.pub
    - --kubelet-client-certificate=/var/lib/localkube/certs/apiserver-kubelet-client.crt
    - --service-cluster-ip-range=10.96.0.0/12
    - --proxy-client-cert-file=/var/lib/localkube/certs/front-proxy-client.crt
    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
    - --allow-privileged=true
    - --requestheader-allowed-names=front-proxy-client
    - --client-ca-file=/var/lib/localkube/certs/ca.crt
    - --tls-private-key-file=/var/lib/localkube/certs/apiserver.key
    - --authorization-mode=Node,RBAC
    - --etcd-servers=http://127.0.0.1:2379

This shows authorization-mode being passed twice. The default value is appended at the end, which I suspect takes precedence over the overridden value.
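The suspicion that the appended default wins can be illustrated with a minimal sketch. This models a last-occurrence-wins flag parser; kube-apiserver's actual flag handling may differ, and the argument list below is a shortened stand-in for the real command line:

```shell
#!/bin/sh
# Hypothetical sketch: if a parser keeps only the last occurrence of a flag,
# the default appended after the user's override ends up taking effect.
args="--authorization-mode=AlwaysAllow --secure-port=8443 --authorization-mode=Node,RBAC"

effective=""
for a in $args; do
  case "$a" in
    # Each occurrence overwrites the previous one, so the last wins.
    --authorization-mode=*) effective="${a#--authorization-mode=}" ;;
  esac
done
echo "effective authorization-mode: $effective"
```

Under this model, the user's AlwaysAllow is silently discarded in favour of the appended Node,RBAC.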

Running a kubectl command from a pod inside the cluster shows:

Error from server (Forbidden): error when retrieving current configuration of:
&{0xc420f552c0 0xc420a65500 abcd test-service /var/spool/rendered/kube-services.yml 0xc420fa42d8 0xc420fa42d8  false}
from server for: "/var/spool/rendered/kube-services.yml": endpoints "test-service" is forbidden: User "system:serviceaccount:abcd:default" cannot get endpoints in the namespace "abcd"

Which means that RBAC is active.

What you expected to happen:
RBAC should be turned off.

How to reproduce it (as minimally and precisely as possible):
Run minikube as follows:
minikube --bootstrapper=kubeadm --extra-config=kubelet.authorization-mode=AlwaysAllow --kubernetes-version=v1.8.5 start
Check whether RBAC is disabled; it remains active.

Output of minikube logs (if applicable):
N/A
Anything else we need to know:

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 21, 2018
@forana

forana commented Mar 22, 2018

This isn't restricted to AlwaysAllow: --extra-config options are always placed before kube-apiserver's own arguments instead of after them. Until this is fixed, a fair number of options are impossible to set via minikube.

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 21, 2018
@forana
Copy link

forana commented Apr 21, 2018

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 20, 2018
@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 19, 2018
@prateekpandey14
Member

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Aug 28, 2018
@tstromberg tstromberg added the kind/bug Categorizes issue or PR as related to a bug. label Sep 19, 2018
@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 18, 2018
@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 17, 2019
@tstromberg tstromberg changed the title --extra-config=kubelet.authorization-mode=AlwaysAllow Does not work with Kubeadm --extra-config=kubelet.authorization-mode=AlwaysAllow ignored Jan 24, 2019
@tstromberg
Contributor

I can confirm that this option seems to do nothing:

minikube start --extra-config=kubelet.authorization-mode=AlwaysAllow yields:

$ kubectl get pods -n kube-system kube-apiserver-minikube -o yaml | grep -i authorization-mode
    - --authorization-mode=Node,RBAC

@tstromberg tstromberg added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Jan 24, 2019
@tstromberg tstromberg added the priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. label Jan 24, 2019
@kauedg
Contributor

kauedg commented Jan 31, 2019

I'm confused here.

Why are you checking kube-apiserver-minikube for a parameter set on the kubelet?

If you consider the start command minikube start --extra-config=kubelet.authorization-mode=AlwaysAllow, then the output --authorization-mode=Node,RBAC is correct: the API server first consults the Node authorizer, then RBAC, which is the default behaviour.

https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-whether-a-request-is-allowed-or-denied
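The linked page describes how, with multiple authorizers configured (e.g. Node,RBAC), each is consulted in turn. As a toy illustration of that union behaviour (a simplification, not Kubernetes code; real authorizers can also short-circuit with an explicit deny, and the per-authorizer decisions below are made up):

```shell
#!/bin/sh
# Toy model: a request is allowed as soon as any configured authorizer allows it.
modes="Node RBAC"

# Hypothetical decision each authorizer would make for some request.
decide() {
  case "$1" in
    Node) echo deny ;;   # Node authorizer only covers requests from kubelets
    RBAC) echo allow ;;  # an RBAC rule happens to match
    *)    echo deny ;;
  esac
}

verdict=deny
for m in $modes; do
  if [ "$(decide "$m")" = allow ]; then
    verdict=allow
    break
  fi
done
echo "request: $verdict"
```

So even if an earlier authorizer in the list has no opinion, RBAC being present means RBAC rules still apply, which matches the Forbidden error seen above.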

If you check the kubelet's process parameters you'll see the AlwaysAllow there:

root 2843 4.1 4.5 1338696 89008 ? Ssl 14:34 0:49 /usr/bin/kubelet --cgroup-driver=cgroupfs --fail-swap-on=false --hostname-override=minikube --authorization-mode=AlwaysAllow --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --kubeconfig=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true

Checking the running configuration, we see that it's still using Webhook:

root@localhost:~/go/src/k8s.io/minikube/out# kubectl get cm -n kube-system
NAME                                 DATA   AGE
coredns                              1      32m
extension-apiserver-authentication   6      32m
kube-proxy                           2      32m
kubeadm-config                       2      32m
kubelet-config-1.13                  1      32m

root@localhost:~/go/src/k8s.io/minikube/out# kubectl get cm -n kube-system kubelet-config-1.13
NAME                  DATA   AGE
kubelet-config-1.13   1      32m
root@localhost:~/go/src/k8s.io/minikube/out# kubectl describe cm -n kube-system kubelet-config-1.13
Name:         kubelet-config-1.13
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
kubelet:
----
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s

Events:  <none>

Even though the minikube start command outputs:

I0131 12:31:40.819398 29464 versions.go:54] Overwriting default authorization-mode=Webhook with user provided authorization-mode=AlwaysAllow for component kubelet

This is one case, related to the kubelet component.


The minikube start command should be:

minikube start --extra-config=apiserver.authorization-mode=AlwaysAllow --vm-driver=kvm2 --alsologtostderr -v 6

But we still get:

kubectl get pods -n kube-system kube-apiserver-minikube -o yaml | grep -i authorization-mode
    - --authorization-mode=Node,RBAC

There is no overriding message displayed at start, as it happened previously with the kubelet component.

This is another case, related to the API server component


If someone confirms that the distinction I made between the kubelet and apiserver extra options is correct, then the override code is not working at all, at least for these two components.

I could not reproduce the duplicate authorization-mode case with the settings below. It seems we have gone from a precedence problem to the options not being set or overridden at all.


My settings:
minikube version: v0.33.1

OS:
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

VM driver:
"DriverName": "kvm2",

ISO version
"Boot2DockerURL": "file:///root/go/src/k8s.io/minikube/out/minikube.iso",
"ISO": "/root/.minikube/machines/minikube/boot2docker.iso",

@shrenikd

We're running into the same problem, i.e. --extra-config=apiserver.authorization-mode=AlwaysAllow does nothing.

@carlosrmendes

carlosrmendes commented Apr 2, 2019

Same here.
Tested with --extra-config=apiserver.authorization-mode=AlwaysAllow --extra-config=kubelet.authorization-mode=AlwaysAllow and RBAC mode is still enabled.

minikube v1.0.0 using Hyper-V

@carlosrmendes

workaround: minikube ssh 'sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | sed -r "s/--authorization-mode=.+/--authorization-mode=AlwaysAllow/g" | sudo tee /etc/kubernetes/manifests/kube-apiserver.yaml'
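One caveat with the one-liner above: it reads and writes the same file in a single pipeline, so tee may truncate the manifest before cat has finished reading it. A variant that stages the edit through a temporary file is safer. The sketch below runs against a local sample file for illustration; on a real cluster you would run the sed/mv part inside minikube ssh with sudo against /etc/kubernetes/manifests/kube-apiserver.yaml:

```shell
#!/bin/sh
# Stand-in for /etc/kubernetes/manifests/kube-apiserver.yaml.
manifest=kube-apiserver.yaml

# Create a sample manifest line for demonstration.
printf '%s\n' '    - --authorization-mode=Node,RBAC' > "$manifest"

# Rewrite to a temp file first, then atomically move it into place.
sed -E 's/--authorization-mode=.+/--authorization-mode=AlwaysAllow/' "$manifest" > "$manifest.tmp" \
  && mv "$manifest.tmp" "$manifest"

cat "$manifest"
```

Because the kubelet watches the static pod manifest directory, saving the real file causes the kube-apiserver pod to be restarted with the new flag, which matches the "apply the workaround, then restart" flow discussed below.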

@pdeveltere

pdeveltere commented Apr 12, 2019

+1. Having this problem as well. --extra-config=apiserver.authorization-mode=AlwaysAllow is overwritten.

@tstromberg tstromberg added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels May 1, 2019
@tstromberg tstromberg added the r/2019q2 Issue was last reviewed 2019q2 label May 22, 2019
@kwojcicki

Hey @carlosmkb, how do you apply your workaround? Do you start up your cluster with minikube start..., apply your workaround, and then restart all pods?

@carlosrmendes

Yes @kwojcicki, that's what I did.

@kwojcicki

Thanks @carlosmkb 😄

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 12, 2019
@forana

forana commented Sep 12, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 12, 2019
@tstromberg tstromberg added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Sep 20, 2019
@j-zimnowoda

Same here: minikube version: v1.4.0

@tstromberg
Contributor

I can't say exactly when this was fixed, but it no longer occurs with at least v1.9.0-beta.1:

$ minikube start --extra-config=apiserver.authorization-mode=AlwaysAllow
$ kubectl get pods -n kube-system -l component=kube-apiserver -o yaml | grep -C3 authorization-mode
      - kube-apiserver
      - --advertise-address=192.168.64.4
      - --allow-privileged=true
      - --authorization-mode=AlwaysAllow
      - --client-ca-file=/var/lib/minikube/certs/ca.crt
      - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
      - --enable-bootstrap-token-auth=true

My apologies for it taking so long to get this resolved.
