
addons disable fails: Process exited with status 1 (no useful logs!) #2281

Closed
rexatorbit opened this issue Dec 7, 2017 · 21 comments
Labels
- area/addons
- help wanted (Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.)
- kind/bug (Categorizes issue or PR as related to a bug.)
- lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)
- os/windows
- priority/backlog (Higher priority than priority/awaiting-more-evidence.)
- triage/obsolete (Bugs that no longer occur in the latest stable release.)

Comments

@rexatorbit

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Please provide the following details:

Environment:

Minikube version: v0.24.1

  • OS: Windows 10
  • VM Driver: virtualbox
  • ISO version: minikube-v0.23.6.iso
  • Install tools:
    Copied minikube.exe straight into the Windows System32 folder.
  • Others:

What happened:
When trying to disable an addon using the command:

minikube addons disable kube-dns

I encountered an unexpected error:

[error disabling addon deploy/addons/kube-dns/kube-dns-controller.yaml: %!s(MISSING): Process exited with status 1]

What you expected to happen:
I expected the addon to be disabled successfully.

How to reproduce it (as minimally and precisely as possible):
Simply use a new cluster and try to disable kube-dns with the following command:

minikube addons disable kube-dns

Output of minikube logs (if applicable):

-- Logs begin at Thu 2017-12-07 22:22:46 UTC, end at Thu 2017-12-07 22:25:55 UTC. --
Dec 07 22:23:04 minikube4 systemd[1]: Starting Localkube...
Dec 07 22:23:05 minikube4 localkube[3066]: listening for peers on http://localhost:2380
Dec 07 22:23:05 minikube4 localkube[3066]: listening for client requests on localhost:2379
Dec 07 22:23:05 minikube4 localkube[3066]: name = default
Dec 07 22:23:05 minikube4 localkube[3066]: data dir = /var/lib/localkube/etcd
Dec 07 22:23:05 minikube4 localkube[3066]: member dir = /var/lib/localkube/etcd/member
Dec 07 22:23:05 minikube4 localkube[3066]: heartbeat = 100ms
Dec 07 22:23:05 minikube4 localkube[3066]: election = 1000ms
Dec 07 22:23:05 minikube4 localkube[3066]: snapshot count = 10000
Dec 07 22:23:05 minikube4 localkube[3066]: advertise client URLs = http://localhost:2379
Dec 07 22:23:05 minikube4 localkube[3066]: initial advertise peer URLs = http://localhost:2380
Dec 07 22:23:05 minikube4 localkube[3066]: initial cluster = default=http://localhost:2380
Dec 07 22:23:05 minikube4 localkube[3066]: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
Dec 07 22:23:05 minikube4 localkube[3066]: 8e9e05c52164694d became follower at term 0
Dec 07 22:23:05 minikube4 localkube[3066]: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
Dec 07 22:23:05 minikube4 localkube[3066]: 8e9e05c52164694d became follower at term 1
Dec 07 22:23:05 minikube4 localkube[3066]: starting server... [version: 3.1.10, cluster version: to_be_decided]
Dec 07 22:23:05 minikube4 localkube[3066]: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
Dec 07 22:23:06 minikube4 localkube[3066]: 8e9e05c52164694d is starting a new election at term 1
Dec 07 22:23:06 minikube4 localkube[3066]: 8e9e05c52164694d became candidate at term 2
Dec 07 22:23:06 minikube4 localkube[3066]: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
Dec 07 22:23:06 minikube4 localkube[3066]: 8e9e05c52164694d became leader at term 2
Dec 07 22:23:06 minikube4 localkube[3066]: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
Dec 07 22:23:06 minikube4 localkube[3066]: setting up the initial cluster version to 3.1
Dec 07 22:23:06 minikube4 localkube[3066]: set the initial cluster version to 3.1
Dec 07 22:23:06 minikube4 localkube[3066]: enabled capabilities for version 3.1
Dec 07 22:23:06 minikube4 localkube[3066]: I1207 22:23:06.017544 3066 etcd.go:58] Etcd server is ready
Dec 07 22:23:06 minikube4 localkube[3066]: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
Dec 07 22:23:06 minikube4 localkube[3066]: localkube host ip address: 10.0.2.15
Dec 07 22:23:06 minikube4 localkube[3066]: Starting apiserver...
Dec 07 22:23:06 minikube4 localkube[3066]: Waiting for apiserver to be healthy...
Dec 07 22:23:06 minikube4 localkube[3066]: ready to serve client requests
Dec 07 22:23:06 minikube4 localkube[3066]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
Dec 07 22:23:06 minikube4 localkube[3066]: I1207 22:23:06.018572 3066 server.go:114] Version: v1.8.0
Dec 07 22:23:06 minikube4 localkube[3066]: W1207 22:23:06.018853 3066 authentication.go:380] AnonymousAuth is not allowed with the AllowAll authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
Dec 07 22:23:06 minikube4 localkube[3066]: I1207 22:23:06.019287 3066 plugins.go:101] No cloud provider specified.
Dec 07 22:23:06 minikube4 localkube[3066]: [restful] 2017/12/07 22:23:06 log.go:33: [restful/swagger] listing is available at https://10.0.2.15:8443/swaggerapi
Dec 07 22:23:06 minikube4 localkube[3066]: [restful] 2017/12/07 22:23:06 log.go:33: [restful/swagger] https://10.0.2.15:8443/swaggerui/ is mapped to folder /swagger-ui/
Dec 07 22:23:07 minikube4 localkube[3066]: I1207 22:23:07.019128 3066 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 07 22:23:07 minikube4 localkube[3066]: E1207 22:23:07.020937 3066 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Dec 07 22:23:07 minikube4 localkube[3066]: [restful] 2017/12/07 22:23:07 log.go:33: [restful/swagger] listing is available at https://10.0.2.15:8443/swaggerapi
Dec 07 22:23:07 minikube4 localkube[3066]: [restful] 2017/12/07 22:23:07 log.go:33: [restful/swagger] https://10.0.2.15:8443/swaggerui/ is mapped to folder /swagger-ui/
Dec 07 22:23:08 minikube4 localkube[3066]: I1207 22:23:08.020793 3066 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 07 22:23:08 minikube4 localkube[3066]: E1207 22:23:08.023779 3066 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Dec 07 22:23:09 minikube4 localkube[3066]: I1207 22:23:09.023525 3066 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 07 22:23:09 minikube4 localkube[3066]: E1207 22:23:09.024710 3066 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Dec 07 22:23:09 minikube4 localkube[3066]: I1207 22:23:09.957721 3066 aggregator.go:138] Skipping APIService creation for scheduling.k8s.io/v1alpha1
Dec 07 22:23:09 minikube4 localkube[3066]: I1207 22:23:09.958785 3066 serve.go:85] Serving securely on 0.0.0.0:8443
Dec 07 22:23:09 minikube4 systemd[1]: Started Localkube.
Dec 07 22:23:09 minikube4 localkube[3066]: I1207 22:23:09.962135 3066 crd_finalizer.go:242] Starting CRDFinalizer
Dec 07 22:23:09 minikube4 localkube[3066]: I1207 22:23:09.962556 3066 available_controller.go:192] Starting AvailableConditionController
Dec 07 22:23:09 minikube4 localkube[3066]: I1207 22:23:09.962569 3066 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Dec 07 22:23:09 minikube4 localkube[3066]: I1207 22:23:09.962592 3066 controller.go:84] Starting OpenAPI AggregationController
Dec 07 22:23:09 minikube4 localkube[3066]: I1207 22:23:09.962636 3066 customresource_discovery_controller.go:152] Starting DiscoveryController
Dec 07 22:23:09 minikube4 localkube[3066]: I1207 22:23:09.962657 3066 naming_controller.go:277] Starting NamingConditionController
Dec 07 22:23:09 minikube4 localkube[3066]: I1207 22:23:09.963867 3066 apiservice_controller.go:112] Starting APIServiceRegistrationController
Dec 07 22:23:09 minikube4 localkube[3066]: I1207 22:23:09.963958 3066 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Dec 07 22:23:09 minikube4 localkube[3066]: I1207 22:23:09.963991 3066 crdregistration_controller.go:112] Starting crd-autoregister controller
Dec 07 22:23:09 minikube4 localkube[3066]: I1207 22:23:09.963997 3066 controller_utils.go:1041] Waiting for caches to sync for crd-autoregister controller
Dec 07 22:23:10 minikube4 localkube[3066]: I1207 22:23:10.018661 3066 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 07 22:23:10 minikube4 localkube[3066]: I1207 22:23:10.028352 3066 ready.go:49] Got healthcheck response: [+]ping ok
Dec 07 22:23:10 minikube4 localkube[3066]: [+]etcd ok
Dec 07 22:23:10 minikube4 localkube[3066]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 07 22:23:10 minikube4 localkube[3066]: [+]poststarthook/start-apiextensions-informers ok
Dec 07 22:23:10 minikube4 localkube[3066]: [+]poststarthook/start-apiextensions-controllers ok
Dec 07 22:23:10 minikube4 localkube[3066]: [-]poststarthook/bootstrap-controller failed: reason withheld
Dec 07 22:23:10 minikube4 localkube[3066]: [-]poststarthook/ca-registration failed: reason withheld
Dec 07 22:23:10 minikube4 localkube[3066]: [+]poststarthook/start-kube-apiserver-informers ok
Dec 07 22:23:10 minikube4 localkube[3066]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 07 22:23:10 minikube4 localkube[3066]: [+]poststarthook/apiservice-registration-controller ok
Dec 07 22:23:10 minikube4 localkube[3066]: [+]poststarthook/apiservice-status-available-controller ok
Dec 07 22:23:10 minikube4 localkube[3066]: [+]poststarthook/apiservice-openapi-controller ok
Dec 07 22:23:10 minikube4 localkube[3066]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 07 22:23:10 minikube4 localkube[3066]: [-]autoregister-completion failed: reason withheld
Dec 07 22:23:10 minikube4 localkube[3066]: healthz check failed
Dec 07 22:23:10 minikube4 localkube[3066]: I1207 22:23:10.063865 3066 cache.go:39] Caches are synced for AvailableConditionController controller
Dec 07 22:23:10 minikube4 localkube[3066]: I1207 22:23:10.064569 3066 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Dec 07 22:23:10 minikube4 localkube[3066]: I1207 22:23:10.064954 3066 controller_utils.go:1048] Caches are synced for crd-autoregister controller
Dec 07 22:23:10 minikube4 localkube[3066]: I1207 22:23:10.065126 3066 autoregister_controller.go:136] Starting autoregister controller
Dec 07 22:23:10 minikube4 localkube[3066]: I1207 22:23:10.065152 3066 cache.go:32] Waiting for caches to sync for autoregister controller
Dec 07 22:23:10 minikube4 localkube[3066]: I1207 22:23:10.166043 3066 cache.go:39] Caches are synced for autoregister controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.018440 3066 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.032296 3066 ready.go:49] Got healthcheck response: ok
Dec 07 22:23:11 minikube4 localkube[3066]: apiserver is ready!
Dec 07 22:23:11 minikube4 localkube[3066]: Starting controller-manager...
Dec 07 22:23:11 minikube4 localkube[3066]: Waiting for controller-manager to be healthy...
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.033218 3066 controllermanager.go:109] Version: v1.8.0
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.038376 3066 leaderelection.go:174] attempting to acquire leader lease...
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.052979 3066 leaderelection.go:184] successfully acquired lease kube-system/kube-controller-manager
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.053250 3066 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"3569182b-db9d-11e7-9dc0-0800274dfab5", APIVersion:"v1", ResourceVersion:"35", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube4 became leader
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.089534 3066 plugins.go:101] No cloud provider specified.
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.095946 3066 controller_utils.go:1041] Waiting for caches to sync for tokens controller
Dec 07 22:23:11 minikube4 localkube[3066]: W1207 22:23:11.097368 3066 shared_informer.go:304] resyncPeriod 52315461518068 is smaller than resyncCheckPeriod 62135979920715 and the informer has already started. Changing it to 62135979920715
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.097667 3066 controllermanager.go:487] Started "resourcequota"
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.098867 3066 controllermanager.go:487] Started "daemonset"
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.099108 3066 daemon_controller.go:230] Starting daemon sets controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.099130 3066 controller_utils.go:1041] Waiting for caches to sync for daemon sets controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.099055 3066 resource_quota_controller.go:238] Starting resource quota controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.099197 3066 controller_utils.go:1041] Waiting for caches to sync for resource quota controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.100663 3066 controllermanager.go:487] Started "deployment"
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.100707 3066 deployment_controller.go:151] Starting deployment controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.100999 3066 controller_utils.go:1041] Waiting for caches to sync for deployment controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.102378 3066 controllermanager.go:487] Started "disruption"
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.102479 3066 disruption.go:288] Starting disruption controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.102723 3066 controller_utils.go:1041] Waiting for caches to sync for disruption controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.104143 3066 controllermanager.go:487] Started "csrapproving"
Dec 07 22:23:11 minikube4 localkube[3066]: W1207 22:23:11.104404 3066 controllermanager.go:471] "tokencleaner" is disabled
Dec 07 22:23:11 minikube4 localkube[3066]: W1207 22:23:11.104574 3066 core.go:128] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.104683 3066 core.go:131] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
Dec 07 22:23:11 minikube4 localkube[3066]: W1207 22:23:11.104780 3066 controllermanager.go:484] Skipping "route"
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.105909 3066 controllermanager.go:487] Started "replicationcontroller"
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.107075 3066 controllermanager.go:487] Started "serviceaccount"
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.108142 3066 serviceaccounts_controller.go:113] Starting service account controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.108270 3066 controller_utils.go:1041] Waiting for caches to sync for service account controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.104229 3066 certificate_controller.go:109] Starting certificate controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.108538 3066 controller_utils.go:1041] Waiting for caches to sync for certificate controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.108680 3066 replication_controller.go:151] Starting RC controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.108779 3066 controller_utils.go:1041] Waiting for caches to sync for RC controller
Dec 07 22:23:11 minikube4 localkube[3066]: I1207 22:23:11.197993 3066 controller_utils.go:1048] Caches are synced for tokens controller
Dec 07 22:23:12 minikube4 localkube[3066]: controller-manager is ready!
Dec 07 22:23:12 minikube4 localkube[3066]: Starting scheduler...
Dec 07 22:23:12 minikube4 localkube[3066]: Waiting for scheduler to be healthy...
Dec 07 22:23:12 minikube4 localkube[3066]: E1207 22:23:12.038186 3066 server.go:173] unable to register configz: register config "componentconfig" twice
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.312392 3066 controllermanager.go:487] Started "garbagecollector"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.314565 3066 controllermanager.go:487] Started "replicaset"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.314961 3066 replica_set.go:156] Starting replica set controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.315994 3066 controller_utils.go:1041] Waiting for caches to sync for replica set controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.315026 3066 garbagecollector.go:136] Starting garbage collector controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.316446 3066 controller_utils.go:1041] Waiting for caches to sync for garbage collector controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.316632 3066 graph_builder.go:321] GraphBuilder running
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.320586 3066 controllermanager.go:487] Started "horizontalpodautoscaling"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.321287 3066 horizontal.go:145] Starting HPA controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.321451 3066 controller_utils.go:1041] Waiting for caches to sync for HPA controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.322612 3066 node_controller.go:249] Sending events to api server.
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.323094 3066 taint_controller.go:158] Sending events to api server.
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.323495 3066 controllermanager.go:487] Started "node"
Dec 07 22:23:12 minikube4 localkube[3066]: W1207 22:23:12.323647 3066 controllermanager.go:484] Skipping "persistentvolume-expander"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.323869 3066 node_controller.go:516] Starting node controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.324966 3066 controller_utils.go:1041] Waiting for caches to sync for node controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.325447 3066 controllermanager.go:487] Started "podgc"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.326330 3066 gc_controller.go:76] Starting GC controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.326515 3066 controller_utils.go:1041] Waiting for caches to sync for GC controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.339563 3066 controllermanager.go:487] Started "namespace"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.340002 3066 namespace_controller.go:186] Starting namespace controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.340167 3066 controller_utils.go:1041] Waiting for caches to sync for namespace controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.341909 3066 controllermanager.go:487] Started "job"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.341979 3066 job_controller.go:138] Starting job controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.342321 3066 controller_utils.go:1041] Waiting for caches to sync for job controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.343663 3066 controllermanager.go:487] Started "statefulset"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.343873 3066 stateful_set.go:146] Starting stateful set controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.343901 3066 controller_utils.go:1041] Waiting for caches to sync for stateful set controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.345137 3066 controllermanager.go:487] Started "cronjob"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.345277 3066 cronjob_controller.go:98] Starting CronJob Manager
Dec 07 22:23:12 minikube4 localkube[3066]: E1207 22:23:12.348825 3066 core.go:70] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
Dec 07 22:23:12 minikube4 localkube[3066]: W1207 22:23:12.349082 3066 controllermanager.go:484] Skipping "service"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.350566 3066 controllermanager.go:487] Started "persistentvolume-binder"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.350901 3066 pv_controller_base.go:259] Starting persistent volume controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.351044 3066 controller_utils.go:1041] Waiting for caches to sync for persistent volume controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.352402 3066 controllermanager.go:487] Started "endpoint"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.352574 3066 endpoints_controller.go:153] Starting endpoint controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.352953 3066 controller_utils.go:1041] Waiting for caches to sync for endpoint controller
Dec 07 22:23:12 minikube4 localkube[3066]: E1207 22:23:12.353839 3066 certificates.go:48] Failed to start certificate controller: error reading CA cert file "/etc/kubernetes/ca/ca.pem": open /etc/kubernetes/ca/ca.pem: no such file or directory
Dec 07 22:23:12 minikube4 localkube[3066]: W1207 22:23:12.353968 3066 controllermanager.go:484] Skipping "csrsigning"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.355195 3066 controllermanager.go:487] Started "ttl"
Dec 07 22:23:12 minikube4 localkube[3066]: W1207 22:23:12.355456 3066 controllermanager.go:471] "bootstrapsigner" is disabled
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.355842 3066 ttl_controller.go:116] Starting TTL controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.355950 3066 controller_utils.go:1041] Waiting for caches to sync for TTL controller
Dec 07 22:23:12 minikube4 localkube[3066]: W1207 22:23:12.356897 3066 probe.go:215] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.358014 3066 controllermanager.go:487] Started "attachdetach"
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.358595 3066 attach_detach_controller.go:255] Starting attach detach controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.358744 3066 controller_utils.go:1041] Waiting for caches to sync for attach detach controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.401566 3066 controller_utils.go:1048] Caches are synced for deployment controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.401652 3066 controller_utils.go:1048] Caches are synced for daemon sets controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.401693 3066 controller_utils.go:1048] Caches are synced for resource quota controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.403780 3066 controller_utils.go:1048] Caches are synced for disruption controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.403903 3066 disruption.go:296] Sending events to api server.
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.409507 3066 controller_utils.go:1048] Caches are synced for service account controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.410180 3066 controller_utils.go:1048] Caches are synced for RC controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.418627 3066 controller_utils.go:1048] Caches are synced for replica set controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.428828 3066 controller_utils.go:1048] Caches are synced for GC controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.429045 3066 controller_utils.go:1048] Caches are synced for node controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.429079 3066 taint_controller.go:181] Starting NoExecuteTaintManager
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.441169 3066 controller_utils.go:1048] Caches are synced for namespace controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.442878 3066 controller_utils.go:1048] Caches are synced for job controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.445122 3066 controller_utils.go:1048] Caches are synced for stateful set controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.455424 3066 controller_utils.go:1048] Caches are synced for endpoint controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.456135 3066 controller_utils.go:1048] Caches are synced for persistent volume controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.458584 3066 controller_utils.go:1048] Caches are synced for TTL controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.459729 3066 controller_utils.go:1048] Caches are synced for attach detach controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.522874 3066 controller_utils.go:1048] Caches are synced for HPA controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.610492 3066 controller_utils.go:1048] Caches are synced for certificate controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.718843 3066 controller_utils.go:1048] Caches are synced for garbage collector controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.719303 3066 garbagecollector.go:145] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.741880 3066 controller_utils.go:1041] Waiting for caches to sync for scheduler controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.842981 3066 controller_utils.go:1048] Caches are synced for scheduler controller
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.844028 3066 leaderelection.go:174] attempting to acquire leader lease...
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.855614 3066 leaderelection.go:184] successfully acquired lease kube-system/kube-scheduler
Dec 07 22:23:12 minikube4 localkube[3066]: I1207 22:23:12.856389 3066 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"367b96e8-db9d-11e7-9dc0-0800274dfab5", APIVersion:"v1", ResourceVersion:"46", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube4 became leader
Dec 07 22:23:13 minikube4 localkube[3066]: scheduler is ready!
Dec 07 22:23:13 minikube4 localkube[3066]: Starting kubelet...
Dec 07 22:23:13 minikube4 localkube[3066]: Waiting for kubelet to be healthy...
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.038630 3066 feature_gate.go:156] feature gates: map[]
Dec 07 22:23:13 minikube4 localkube[3066]: W1207 22:23:13.038930 3066 server.go:276] --require-kubeconfig is deprecated. Set --kubeconfig without using --require-kubeconfig.
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.488937 3066 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.489242 3066 client.go:95] Start docker client with request timeout=2m0s
Dec 07 22:23:13 minikube4 localkube[3066]: W1207 22:23:13.495659 3066 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider should be set explicitly
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.538341 3066 manager.go:149] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/localkube.service"
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.570588 3066 fs.go:139] Filesystem UUIDs: map[2017-10-19-17-24-41-00:/dev/sr0 3c8e6c9b-7579-49f9-8052-a7ca314bc0ce:/dev/sda1 fc42776d-2777-441f-8632-0e01c2122aa7:/dev/sda2]
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.570609 3066 fs.go:140] Filesystem partitions: map[tmpfs:{mountpoint:/dev/shm major:0 minor:17 fsType:tmpfs blockSize:0} /dev/sda1:{mountpoint:/mnt/sda1 major:8 minor:1 fsType:ext4 blockSize:0}]
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.571542 3066 manager.go:216] Machine: {NumCores:2 CpuFrequency:2592000 MemoryCapacity:2097229824 HugePages:[{PageSize:2048 NumPages:0}] MachineID:d141a2cc8d394471988156ce5994dbb6 SystemUUID:FFCCE16C-8E10-4583-9930-36EA0B73B387 BootID:4e1a25d9-d714-4c35-971a-45120057c5c1 Filesyst
ems:[{Device:tmpfs DeviceMajor:0 DeviceMinor:17 Capacity:1048612864 Type:vfs Inodes:256009 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:17293533184 Type:vfs Inodes:9732096 HasInodes:true} {Device:rootfs DeviceMajor:0 DeviceMinor:1 Capacity:0 Type:vfs Inodes:0 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Mi
nor:0 Size:20971520000 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:08:00:27:4d:fa:b5 Speed:-1 Mtu:1500} {Name:eth1 MacAddress:08:00:27:ea:a8:c8 Speed:-1 Mtu:1500} {Name:sit0 MacAddress:00:00:00:00 Speed:0 Mtu:1480}] Topology:[{Id:0 Memory:2097229824 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Typ
e:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:4194304 Type:Unified Level:3}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:4194304 Type:Unified Level:3}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.572464 3066 manager.go:222] Version: {KernelVersion:4.9.13 ContainerOsVersion:Buildroot 2017.02 DockerVersion:17.06.0-ce DockerAPIVersion:1.30 CadvisorVersion: CadvisorRevision:}
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.573008 3066 server.go:422] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.576423 3066 container_manager_linux.go:252] container manager verified user specified cgroup-root exists: /
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.576445 3066 container_manager_linux.go:257] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false NodeAllocata
bleConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: P
ercentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.576556 3066 container_manager_linux.go:288] Creating device plugin handler: false
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.576651 3066 kubelet.go:273] Adding manifest file: /etc/kubernetes/manifests
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.576698 3066 kubelet.go:283] Watching apiserver
Dec 07 22:23:13 minikube4 localkube[3066]: W1207 22:23:13.592919 3066 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.592954 3066 kubelet.go:517] Hairpin mode set to "hairpin-veth"
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.606518 3066 docker_service.go:207] Docker cri networking managed by kubernetes.io/no-op
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.613690 3066 docker_service.go:224] Setting cgroupDriver to cgroupfs
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.623672 3066 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.625554 3066 kuberuntime_manager.go:174] Container runtime docker initialized, version: 17.06.0-ce, apiVersion: 1.30.0
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.625695 3066 kuberuntime_manager.go:898] updating runtime config through cri with podcidr 10.180.1.0/24
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.625847 3066 docker_service.go:306] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.180.1.0/24,},}
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.626043 3066 kubelet_network.go:276] Setting Pod CIDR: -> 10.180.1.0/24
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.628552 3066 server.go:718] Started kubelet v1.8.0
Dec 07 22:23:13 minikube4 localkube[3066]: E1207 22:23:13.629012 3066 kubelet.go:1234] Image garbage collection failed once. Stats initialization may not have completed yet: unable to find data for container /
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.629596 3066 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.629887 3066 server.go:128] Starting to listen on 0.0.0.0:10250
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.630612 3066 server.go:296] Adding debug handlers to kubelet server.
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.655574 3066 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.655782 3066 status_manager.go:140] Starting to sync pod status with apiserver
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.655921 3066 kubelet.go:1768] Starting kubelet main sync loop.
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.656117 3066 kubelet.go:1779] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Dec 07 22:23:13 minikube4 localkube[3066]: E1207 22:23:13.656892 3066 container_manager_linux.go:603] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.657652 3066 volume_manager.go:246] Starting Kubelet Volume Manager
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.667323 3066 factory.go:355] Registering Docker factory
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.668331 3066 factory.go:89] Registering Rkt factory
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.669413 3066 factory.go:157] Registering CRI-O factory
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.669442 3066 factory.go:54] Registering systemd factory
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.669645 3066 factory.go:86] Registering Raw factory
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.669754 3066 manager.go:1140] Started watching for new ooms in manager
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.670389 3066 manager.go:311] Starting recovery of all containers
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.716633 3066 manager.go:316] Recovery completed
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.723709 3066 rkt.go:56] starting detectRktContainers thread
Dec 07 22:23:13 minikube4 localkube[3066]: E1207 22:23:13.755515 3066 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node 'minikube4' not found
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.758383 3066 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.760744 3066 kubelet_node_status.go:83] Attempting to register node minikube4
Dec 07 22:23:13 minikube4 localkube[3066]: E1207 22:23:13.764757 3066 actual_state_of_world.go:483] Failed to set statusUpdateNeeded to needed true because nodeName="minikube4" does not exist
Dec 07 22:23:13 minikube4 localkube[3066]: E1207 22:23:13.764801 3066 actual_state_of_world.go:497] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true because nodeName="minikube4" does not exist
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.764954 3066 kubelet_node_status.go:86] Successfully registered node minikube4
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.768881 3066 kuberuntime_manager.go:898] updating runtime config through cri with podcidr
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.769680 3066 docker_service.go:306] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}
Dec 07 22:23:13 minikube4 localkube[3066]: I1207 22:23:13.770501 3066 kubelet_network.go:276] Setting Pod CIDR: 10.180.1.0/24 ->
Dec 07 22:23:14 minikube4 localkube[3066]: kubelet is ready!
Dec 07 22:23:14 minikube4 localkube[3066]: Starting proxy...
Dec 07 22:23:14 minikube4 localkube[3066]: Waiting for proxy to be healthy...
Dec 07 22:23:14 minikube4 localkube[3066]: W1207 22:23:14.041750 3066 server_others.go:63] unable to register configz: register config "componentconfig" twice
Dec 07 22:23:14 minikube4 localkube[3066]: I1207 22:23:14.063995 3066 server_others.go:117] Using iptables Proxier.
Dec 07 22:23:14 minikube4 localkube[3066]: W1207 22:23:14.075756 3066 proxier.go:473] clusterCIDR not specified, unable to distinguish between internal and external traffic
Dec 07 22:23:14 minikube4 localkube[3066]: I1207 22:23:14.076334 3066 server_others.go:152] Tearing down inactive rules.
Dec 07 22:23:14 minikube4 localkube[3066]: I1207 22:23:14.101410 3066 config.go:202] Starting service config controller
Dec 07 22:23:14 minikube4 localkube[3066]: I1207 22:23:14.101458 3066 controller_utils.go:1041] Waiting for caches to sync for service config controller
Dec 07 22:23:14 minikube4 localkube[3066]: E1207 22:23:14.101501 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 07 22:23:14 minikube4 localkube[3066]: I1207 22:23:14.101516 3066 config.go:102] Starting endpoints config controller
Dec 07 22:23:14 minikube4 localkube[3066]: I1207 22:23:14.101520 3066 controller_utils.go:1041] Waiting for caches to sync for endpoints config controller
Dec 07 22:23:14 minikube4 localkube[3066]: I1207 22:23:14.201786 3066 controller_utils.go:1048] Caches are synced for endpoints config controller
Dec 07 22:23:14 minikube4 localkube[3066]: I1207 22:23:14.201879 3066 controller_utils.go:1048] Caches are synced for service config controller
Dec 07 22:23:15 minikube4 localkube[3066]: proxy is ready!
Dec 07 22:23:17 minikube4 localkube[3066]: I1207 22:23:17.429866 3066 node_controller.go:563] Initializing eviction metric for zone:
Dec 07 22:23:17 minikube4 localkube[3066]: W1207 22:23:17.429984 3066 node_controller.go:916] Missing timestamp for Node minikube4. Assuming now as a timestamp.
Dec 07 22:23:17 minikube4 localkube[3066]: I1207 22:23:17.430043 3066 node_controller.go:832] Controller detected that zone is now in state Normal.
Dec 07 22:23:17 minikube4 localkube[3066]: I1207 22:23:17.430591 3066 event.go:218] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube4", UID:"3706a18d-db9d-11e7-9dc0-0800274dfab5", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube4 event: Registered Node minikube4
in Controller
Dec 07 22:23:18 minikube4 localkube[3066]: I1207 22:23:18.664397 3066 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/7b19c3ba446df5355649563d32723e4f-addons") pod "kube-addon-manager-minikube4" (UID: "7b19c3ba446df5355649563d32723e4f")
Dec 07 22:23:18 minikube4 localkube[3066]: I1207 22:23:18.664850 3066 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/7b19c3ba446df5355649563d32723e4f-kubeconfig") pod "kube-addon-manager-minikube4" (UID: "7b19c3ba446df5355649563d32723e4f")
Dec 07 22:23:33 minikube4 localkube[3066]: I1207 22:23:33.777171 3066 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"storage-provisioner", UID:"42f2ed83-db9d-11e7-9dc0-0800274dfab5", APIVersion:"v1", ResourceVersion:"94", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned storag
e-provisioner to minikube4
Dec 07 22:23:33 minikube4 localkube[3066]: I1207 22:23:33.915856 3066 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-2p7tl" (UniqueName: "kubernetes.io/secret/42f2ed83-db9d-11e7-9dc0-0800274dfab5-default-token-2p7tl") pod "storage-provisioner" (UID: "42f2ed83-db9d-11e7-9dc0-080027
4dfab5")
Dec 07 22:23:34 minikube4 localkube[3066]: I1207 22:23:34.525367 3066 event.go:218] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"kubernetes-dashboard", UID:"436525c4-db9d-11e7-9dc0-0800274dfab5", APIVersion:"v1", ResourceVersion:"101", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' C
reated pod: kubernetes-dashboard-6tq7l
Dec 07 22:23:34 minikube4 localkube[3066]: I1207 22:23:34.530933 3066 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kubernetes-dashboard-6tq7l", UID:"4365a4af-db9d-11e7-9dc0-0800274dfab5", APIVersion:"v1", ResourceVersion:"102", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigne
d kubernetes-dashboard-6tq7l to minikube4
Dec 07 22:23:34 minikube4 localkube[3066]: I1207 22:23:34.532634 3066 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-2p7tl" (UniqueName: "kubernetes.io/secret/4365a4af-db9d-11e7-9dc0-0800274dfab5-default-token-2p7tl") pod "kubernetes-dashboard-6tq7l" (UID: "4365a4af-db9d-11e7-9dc0
-0800274dfab5")
Dec 07 22:23:34 minikube4 localkube[3066]: I1207 22:23:34.739714 3066 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-dns", UID:"4384191e-db9d-11e7-9dc0-0800274dfab5", APIVersion:"extensions", ResourceVersion:"114", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up repli
ca set kube-dns-86f6f55dd5 to 1
Dec 07 22:23:34 minikube4 localkube[3066]: I1207 22:23:34.766787 3066 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-dns-86f6f55dd5", UID:"4384c8a6-db9d-11e7-9dc0-0800274dfab5", APIVersion:"extensions", ResourceVersion:"115", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Creat
ed pod: kube-dns-86f6f55dd5-pl5qc
Dec 07 22:23:34 minikube4 localkube[3066]: I1207 22:23:34.795964 3066 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-86f6f55dd5-pl5qc", UID:"4389c9cc-db9d-11e7-9dc0-0800274dfab5", APIVersion:"v1", ResourceVersion:"117", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned
kube-dns-86f6f55dd5-pl5qc to minikube4
Dec 07 22:23:34 minikube4 localkube[3066]: E1207 22:23:34.808637 3066 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 07 22:23:35 minikube4 localkube[3066]: I1207 22:23:35.022981 3066 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/4389c9cc-db9d-11e7-9dc0-0800274dfab5-kube-dns-config") pod "kube-dns-86f6f55dd5-pl5qc" (UID: "4389c9cc-db9d-11e7-9dc0-08002
74dfab5")
Dec 07 22:23:35 minikube4 localkube[3066]: I1207 22:23:35.024491 3066 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-2p7tl" (UniqueName: "kubernetes.io/secret/4389c9cc-db9d-11e7-9dc0-0800274dfab5-default-token-2p7tl") pod "kube-dns-86f6f55dd5-pl5qc" (UID: "4389c9cc-db9d-11e7-9dc0-
0800274dfab5")
Dec 07 22:23:35 minikube4 localkube[3066]: W1207 22:23:35.144109 3066 container.go:354] Failed to create summary reader for "/system.slice/run-rfe754f4fd5cd4f82a15fbb6c3c3bf884.scope": none of the resources are being tracked.
Dec 07 22:23:35 minikube4 localkube[3066]: E1207 22:23:35.148279 3066 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 07 22:23:53 minikube4 localkube[3066]: W1207 22:23:53.807893 3066 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 07 22:23:57 minikube4 localkube[3066]: I1207 22:23:57.348148 3066 kuberuntime_manager.go:499] Container {Name:kubernetes-dashboard Image:gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.0 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:ma
p[] Requests:map[]} VolumeMounts:[{Name:default-token-2p7tl ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:
30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Dec 07 22:23:57 minikube4 localkube[3066]: I1207 22:23:57.348416 3066 kuberuntime_manager.go:738] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-6tq7l_kube-system(4365a4af-db9d-11e7-9dc0-0800274dfab5)"
Dec 07 22:24:03 minikube4 localkube[3066]: W1207 22:24:03.813984 3066 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 07 22:24:14 minikube4 localkube[3066]: E1207 22:24:14.101858 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 07 22:24:14 minikube4 localkube[3066]: W1207 22:24:14.301602 3066 kuberuntime_container.go:191] Non-root verification doesn't support non-numeric user (nobody)
Dec 07 22:24:24 minikube4 localkube[3066]: E1207 22:24:24.151993 3066 proxier.go:1621] Failed to delete stale service IP 10.96.0.10 connections, error: error deleting connection tracking state for UDP service IP: 10.96.0.10, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
Dec 07 22:25:14 minikube4 localkube[3066]: E1207 22:25:14.102306 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 07 22:26:14 minikube4 localkube[3066]: E1207 22:26:14.104449 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 07 22:27:14 minikube4 localkube[3066]: E1207 22:27:14.105836 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 07 22:28:14 minikube4 localkube[3066]: E1207 22:28:14.106965 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 07 22:29:14 minikube4 localkube[3066]: E1207 22:29:14.107558 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 07 22:30:14 minikube4 localkube[3066]: E1207 22:30:14.108409 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 07 22:31:14 minikube4 localkube[3066]: E1207 22:31:14.109252 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 07 22:32:14 minikube4 localkube[3066]: E1207 22:32:14.109724 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 07 22:33:06 minikube4 localkube[3066]: store.index: compact 470
Dec 07 22:33:06 minikube4 localkube[3066]: finished scheduled compaction at 470 (took 662.352µs)
Dec 07 22:33:14 minikube4 localkube[3066]: E1207 22:33:14.110752 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 07 22:34:14 minikube4 localkube[3066]: E1207 22:34:14.112378 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 07 22:35:14 minikube4 localkube[3066]: E1207 22:35:14.113686 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 07 22:36:14 minikube4 localkube[3066]: E1207 22:36:14.115577 3066 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address

Anything else we need to know: Trying to remove other addons does not work either.
I have tried clearing the cache and using new profiles, still to no avail. I have deleted the whole .minikube and .kube directories from %APPDATA% on Windows to force a re-download of the ISOs and so on, still to no avail. I updated to the v0.24.1 release because I was having the same problem in the previous release, but the new release has not helped.

@rexatorbit
Author

I haven't found a solution for this yet, but I have found a workaround, and possibly a suggestion for the default behavior when running "minikube addons disable".

The workaround I found is to edit .minikube\config\config.json and add the following entry:

"kube-dns": false

The contents of the file should then be similar to:

{
    "kube-dns": false,
    "profile": "minikube"
}

Obviously the contents of the file may look different.

After that you need to restart minikube with the following commands:

minikube stop
minikube start

Then, after the cluster has started, the addon is eventually disabled and the relevant pods are terminated, though this can take a minute or two: the pods initially appear to be targeted for the Running state, and only after they reach it do the changes seem to be applied and the pods terminate.
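
For anyone who hits this repeatedly, the manual edit above can be scripted. The following is a minimal Python sketch under the assumptions stated in this comment: a JSON file at .minikube/config/config.json whose top-level keys are addon names mapped to booleans. The default path, the disable_addon_in_config name, and the "kube-dns" key are only illustrative, not part of any minikube API; adjust the path for your OS and profile.

# Minimal sketch of the manual workaround above: set an addon key to false in
# minikube's config.json, then restart the cluster yourself with `minikube stop`
# and `minikube start`. Path and key names are assumptions from this thread.
import json
import sys
from pathlib import Path

def disable_addon_in_config(addon: str,
                            minikube_home: Path = Path.home() / ".minikube") -> None:
    config_path = minikube_home / "config" / "config.json"
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config[addon] = False  # e.g. "kube-dns": false, as shown above
    config_path.parent.mkdir(parents=True, exist_ok=True)
    config_path.write_text(json.dumps(config, indent=4))

if __name__ == "__main__":
    disable_addon_in_config(sys.argv[1] if len(sys.argv) > 1 else "kube-dns")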

A suggestion I'd like to make: when disabling an addon with "minikube addons disable", even if minikube fails to find whatever file it is trying to remove and therefore fails to disable the addon, it should still update config.json and set the addon to false. Then, at least when the cluster is restarted, the addon will eventually be disabled rather than never.
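
To make that suggestion concrete, here is a rough sketch of the proposed flow in Python (hypothetical; minikube itself is written in Go, and the remove_addon_manifest helper below is only a stand-in for whatever it does with deploy/addons/<addon>/*.yaml): attempt the removal, and persist the disabled flag either way, so the addon gets turned off on the next restart instead of never.

# Hypothetical sketch of the suggested behavior, not minikube's actual implementation.
import json
from pathlib import Path

CONFIG = Path.home() / ".minikube" / "config" / "config.json"  # path assumed from this thread

def remove_addon_manifest(addon: str) -> None:
    # Stand-in for deleting the addon's manifests inside the VM; here it fails
    # the same way this issue reports.
    raise RuntimeError("Process exited with status 1")

def disable_addon(addon: str) -> None:
    try:
        remove_addon_manifest(addon)
    except RuntimeError as err:
        print(f"warning: could not remove manifests for {addon}: {err}")
    finally:
        # Record the user's intent regardless of the failure above, so a later
        # `minikube stop` / `minikube start` picks it up.
        config = json.loads(CONFIG.read_text()) if CONFIG.exists() else {}
        config[addon] = False
        CONFIG.parent.mkdir(parents=True, exist_ok=True)
        CONFIG.write_text(json.dumps(config, indent=4))

disable_addon("kube-dns")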

@ambition-consulting

same here

@gtirloni
Contributor

On 0.24.1 the config file seems to be updated accordingly after running minikube addons disable kube-dns:

$ minikube addons list
- storage-provisioner: enabled
- ingress: disabled
- registry: disabled
- registry-creds: disabled
- addon-manager: enabled
- default-storageclass: enabled
- kube-dns: enabled
- heapster: disabled
- efk: disabled
- dashboard: enabled
- coredns: disabled

$ minikube addons disable kube-dns
kube-dns was successfully disabled

$ minikube addons list
- dashboard: enabled
- default-storageclass: enabled
- coredns: disabled
- registry-creds: disabled
- addon-manager: enabled
- storage-provisioner: enabled
- kube-dns: disabled
- heapster: disabled
- efk: disabled
- ingress: disabled
- registry: disabled

$ cat config.json 
{
    "kube-dns": false
}

@rexatorbit
Author

@gtirloni Correct me if I'm wrong, but it looks like you're using Linux/macOS; I believe the problem might be specific to Windows, but I'm unsure. @ambition-consulting Were you also using Windows when you encountered the error?

@gtirloni
Contributor

@rexatorbit that's right, Fedora 27 here.

@ambition-consulting

@rexatorbit Yes, Windows 10 (with the Creators Update) with the newest minikube and VirtualBox, while trying to get gofabric8 installed, which is not compatible with k8s 1.8.

@codyaray

codyaray commented Jan 4, 2018

I'm seeing this same issue on Mac OS X now, with minikube 0.24.1 running on hyperkit.

$ minikube addons disable dashboard
[error disabling addon deploy/addons/dashboard/dashboard-rc.yaml: %!s(MISSING): Process exited with status 1]

Minikube installed with brew cask. Hyperkit installed as binary download.

@Vlaaaaaaad

Same issue on Windows 10, with minikube 0.24.1: [error disabling addon deploy/addons/ingress/ingress-configmap.yaml: %!s(MISSING): Process exited with status 1]. Minikube logs are empty.

The workaround provided by @rexatorbit works.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 16, 2018
@sshilko

sshilko commented May 11, 2018

Same on 0.26, latest k8s 1.10
[error disabling addon deploy/addons/heapster/influx-grafana-rc.yaml: %!s(MISSING): Process exited with status 1]

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 13, 2018
@telemmaite

Same on Windows 10/Hyper-V with minikube-windows-amd64.exe addons disable ingress:

[error disabling addon deploy/addons/ingress/ingress-configmap.yaml: Process exited with status 1]

minikube version: v0.28.0
k8s: v1.11.0

@SergeyMuha

C:\Users\Sergey.Muha\Minikube>minikube addons disable default-storageclass
[error disabling addon deploy/addons/storageclass/storageclass.yaml: Process exited with status 1]

C:\Users\Sergey.Muha\Minikube> minikube update-check
CurrentVersion: v0.28.0
LatestVersion: v0.28.0

Virtual box - Win10

@PierrickLozach

PierrickLozach commented Jul 30, 2018

Same here:

minikube addons disable ingress -v 7

[executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]
).ipaddresses[0]
[stdout =====>] : 192.168.1.36

[stderr =====>] :
[error disabling addon deploy/addons/ingress/ingress-configmap.yaml: Process exited with status 1]

minikube version v0.27.0

I fixed it by editing .minikube/config/config.json and setting ingress to false, then running minikube stop and minikube start.

@webmutation

Same on 0.28.2, disabling heapster (now deprecated):
[error disabling addon deploy/addons/heapster/influx-grafana-rc.yaml: Process exited with status 1]

@tstromberg changed the title from "Can't disable addons" to "addons disable fails: Process exited with status 1 (no useful logs!)" Sep 19, 2018
@tstromberg tstromberg added area/addons kind/bug Categorizes issue or PR as related to a bug. drivers/virtualbox/windows labels Sep 19, 2018
@tstromberg
Contributor

minikube needs to do a much better job of showing error messages here.

@den-is mentioned that one last step is required to get rid of kube-dns:

kubectl -n kube-system delete deployment kube-dns

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 18, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 17, 2019
@lalomartins

This lifecycle thing is a slap in the face of the userbase. If it's already acknowledged as a bug (type/bug), it makes no sense to close it just because there's no activity. A bug is a bug.

@tstromberg
Contributor

Does anyone have a repro case for this? I wasn't able to replicate it in v0.33.x, but only because I couldn't find an addon that failed to disable.

@tstromberg tstromberg added area/addons priority/backlog Higher priority than priority/awaiting-more-evidence. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. triage/needs-information Indicates an issue needs more information in order to work on it. and removed addons/kube-dns co/virtualbox labels Jan 22, 2019
@tstromberg tstromberg added triage/obsolete Bugs that no longer occur in the latest stable release and removed triage/needs-information Indicates an issue needs more information in order to work on it. labels Feb 19, 2019
@csrrmrvll

Minikube version: v1.2.0

  • OS: Windows 10 Pro Version 10.0.17134 build 17134
  • VM Driver: virtualbox
  • Install tools: Chocolatey
  • Others:

What happened:
When trying to disable the default-storageclass addon using the command:

minikube addons disable default-storageclass

I encountered an unexpected error:

C:\Users\cromero\k8s-workshop>minikube addons disable default-storageclass

X disable failed: [disabling addon deploy/addons/storageclass/storageclass.yaml.tmpl: Process exited with status 1]

What you expected to happen:
I expected the addon to be disabled successfully.

How to reproduce it (as minimally and precisely as possible):
Simply use a new cluster and try to disable default-storageclass with the following command:

minikube addons disable default-storageclass

No logs were found

Should you need further information, do not hesitate to ask for it.

Cheers

@marxangels

minikube stop
vi ~/.minikube/config/config.json # change some flag from true to false
minikube start

# then delete the related resources manually
kubectl -n xxx delete xxx xxx
