tar: invalid option -- 'I' during minikube start #6983

Closed
noelleleigh opened this issue Mar 10, 2020 · 5 comments
Labels
area/guest-vm: General configuration issues with the minikube guest VM
triage/duplicate: Indicates an issue is a duplicate of other open issue.

Comments

@noelleleigh

The exact command to reproduce the issue:

minikube start --vm-driver virtualbox --extra-config=apiserver.service-node-port-range=1-50000 --disk-size 40g --memory 12288

The full output of the command that failed:

😄  minikube v1.8.1 on Darwin 10.15.3
✨  Using the virtualbox driver based on existing profile
💿  Downloading VM boot image ...
⌛  Reconfiguring existing host ...
🔄  Starting existing virtualbox VM for "minikube" ...
E0310 14:16:03.080806   16753 config.go:71] Failed to preload container runtime Docker: extracting tarball: 
** stderr ** 
tar: invalid option -- 'I'
BusyBox v1.29.3 (2020-02-04 22:12:11 PST) multi-call binary.

Usage: tar c|x|t [-hvokO] [-f TARFILE] [-C DIR] [-T FILE] [-X FILE] [--exclude PATTERN]... [FILE]...

Create, extract, or list files from a tar file

        c       Create
        x       Extract
        t       List
        -f FILE Name of TARFILE ('-' for stdin/out)
        -C DIR  Change to DIR before operation
        -v      Verbose
        -O      Extract to stdout
        -o      Don't restore user:group
        -k      Don't replace existing files
        -h      Follow symlinks
        -T FILE File with names to include
        -X FILE File with glob patterns to exclude
        --exclude PATTERN       Glob pattern to exclude

** /stderr **: sudo tar -I lz4 -C /var -xvf /preloaded.tar.lz4: Process exited with status 1
stdout:

stderr:
tar: invalid option -- 'I'
BusyBox v1.29.3 (2020-02-04 22:12:11 PST) multi-call binary.

Usage: tar c|x|t [-hvokO] [-f TARFILE] [-C DIR] [-T FILE] [-X FILE] [--exclude PATTERN]... [FILE]...

Create, extract, or list files from a tar file

        c       Create
        x       Extract
        t       List
        -f FILE Name of TARFILE ('-' for stdin/out)
        -C DIR  Change to DIR before operation
        -v      Verbose
        -O      Extract to stdout
        -o      Don't restore user:group
        -k      Don't replace existing files
        -h      Follow symlinks
        -T FILE File with names to include
        -X FILE File with glob patterns to exclude
        --exclude PATTERN       Glob pattern to exclude
, falling back to caching images
🐳  Preparing Kubernetes v1.17.3 on Docker 19.03.5 ...
    ▪ apiserver.service-node-port-range=1-50000
🚀  Launching Kubernetes ... 
🌟  Enabling addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
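
The failing step above is minikube running sudo tar -I lz4 -C /var -xvf /preloaded.tar.lz4 inside the guest VM, whose BusyBox tar does not implement -I (GNU tar's shorthand for --use-compress-program). As a minimal sketch of a BusyBox-compatible equivalent, assuming the guest image ships an lz4 binary, the archive can be decompressed explicitly and piped into tar using only the options BusyBox lists above:

lz4 -dc /preloaded.tar.lz4 | sudo tar -xf - -C /var

This is only a workaround sketch; as the output shows, minikube recovers on its own by falling back to caching images, and the start completes.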

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Tue 2020-03-10 18:15:43 UTC, end at Tue 2020-03-10 18:19:58 UTC. --
Mar 10 18:16:28 minikube dockerd[2384]: time="2020-03-10T18:16:28.721921938Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ddd399540e3e5629925b386d95608145b50a9d08c632fb68f4d8a25598802f09/shim.sock" debug=false pid=6070
Mar 10 18:16:28 minikube dockerd[2384]: time="2020-03-10T18:16:28.791778342Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/155631157d0f87509caccc99030120d1cb86afe7696b9d3ce19c34e8782503e0/shim.sock" debug=false pid=6091
Mar 10 18:16:28 minikube dockerd[2384]: time="2020-03-10T18:16:28.896953398Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/315614d83ce8d391edba5335c95a98343c1bb26ea674bf395a54885805991898/shim.sock" debug=false pid=6133
Mar 10 18:16:29 minikube dockerd[2384]: time="2020-03-10T18:16:29.177234970Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/793d8c77959dfc49e5ed7fe1b324f97be4a51d4de825266c190bbeae9d7da27a/shim.sock" debug=false pid=6214
Mar 10 18:16:29 minikube dockerd[2384]: time="2020-03-10T18:16:29.277570788Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3f78c7a02428021a68f6abbe6a1643329b1f1b50eb66666b59608cbbe8aa6519/shim.sock" debug=false pid=6251
Mar 10 18:16:29 minikube dockerd[2384]: time="2020-03-10T18:16:29.985608443Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d9c916329fe1ec11f3289adf6a8da16014e49c497e754f497742e14ac53bd7d9/shim.sock" debug=false pid=6731
Mar 10 18:16:30 minikube dockerd[2384]: time="2020-03-10T18:16:30.020289437Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7737caef32a6f0e975b2cdb7530d2a017b93ead652d37f74d7aebf662595413b/shim.sock" debug=false pid=6746
Mar 10 18:16:30 minikube dockerd[2384]: time="2020-03-10T18:16:30.272897650Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/562a053489edfb29eeb1c6bebd3b858a2514a4daf3c7ece3da3bbcfabd217fb6/shim.sock" debug=false pid=6871
Mar 10 18:16:31 minikube dockerd[2384]: time="2020-03-10T18:16:31.643664037Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d9f9d5d76d07248ab92654eb94d1f8b7a8a26977000f12a2a6e399677dbdb4d0/shim.sock" debug=false pid=7077
Mar 10 18:16:32 minikube dockerd[2384]: time="2020-03-10T18:16:32.157554225Z" level=info msg="shim reaped" id=d9f9d5d76d07248ab92654eb94d1f8b7a8a26977000f12a2a6e399677dbdb4d0
Mar 10 18:16:32 minikube dockerd[2384]: time="2020-03-10T18:16:32.166778774Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 10 18:16:32 minikube dockerd[2384]: time="2020-03-10T18:16:32.166929936Z" level=warning msg="d9f9d5d76d07248ab92654eb94d1f8b7a8a26977000f12a2a6e399677dbdb4d0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d9f9d5d76d07248ab92654eb94d1f8b7a8a26977000f12a2a6e399677dbdb4d0/mounts/shm, flags: 0x2: no such file or directory"
Mar 10 18:16:32 minikube dockerd[2384]: time="2020-03-10T18:16:32.654432629Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cf1c1666aab02ce04cd5e581970e30b1d91afb682cdfff8d03b3532a8d858066/shim.sock" debug=false pid=7194
Mar 10 18:16:33 minikube dockerd[2384]: time="2020-03-10T18:16:33.293034449Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4ce3df17293153ba0d234c146621b3e34da2331b860aa10d8741a9bc37ba3fe7/shim.sock" debug=false pid=7312
Mar 10 18:16:33 minikube dockerd[2384]: time="2020-03-10T18:16:33.479526936Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3aeb142e086a66a26fbf0ebf8220db811acf7175c2dfb3aea05c255bd4eb3d58/shim.sock" debug=false pid=7351
Mar 10 18:16:33 minikube dockerd[2384]: time="2020-03-10T18:16:33.682062754Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cc682bc1c22e53da728700d8bcec2855265f3e3dc04c36e50bf7c9e6d8ebf9e2/shim.sock" debug=false pid=7381
Mar 10 18:16:33 minikube dockerd[2384]: time="2020-03-10T18:16:33.966888927Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e0973e783031f9567f7938f2e282b7c699573735f500155a6dc34b33e00cb037/shim.sock" debug=false pid=7434
Mar 10 18:16:34 minikube dockerd[2384]: time="2020-03-10T18:16:34.135164809Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1cf336ba2ad1d202879e1b499d100dfe09f64736505a2515237289c0eb3976ff/shim.sock" debug=false pid=7464
Mar 10 18:16:34 minikube dockerd[2384]: time="2020-03-10T18:16:34.424410608Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ff1f96ce696313e94dacbdc83501a87f98675f69627a2b3db9d53d864d4730d4/shim.sock" debug=false pid=7524
Mar 10 18:16:34 minikube dockerd[2384]: time="2020-03-10T18:16:34.866916835Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/81328a2fe825b5a292c3d073afcab0f907cf981a15177406e0b86d5654abf20a/shim.sock" debug=false pid=7602
Mar 10 18:16:35 minikube dockerd[2384]: time="2020-03-10T18:16:35.462112488Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e66803543fc3b13e7ab1fdb1b4139f413dc73641da5209a966e0757719021c2a/shim.sock" debug=false pid=7729
Mar 10 18:16:35 minikube dockerd[2384]: time="2020-03-10T18:16:35.465105923Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5f55e3c5e1a5f9bd633c9396ce70fc26f02b19d7878114e9ab1a80a3959e14c0/shim.sock" debug=false pid=7730
Mar 10 18:16:35 minikube dockerd[2384]: time="2020-03-10T18:16:35.655589258Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2263b4e18abd18d0ce41e3d375820543db8795ae6e78aecb6c1cdd14acf204b3/shim.sock" debug=false pid=7787
Mar 10 18:16:39 minikube dockerd[2384]: time="2020-03-10T18:16:39.429304820Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e80dd3d5f1c2c0391594ab298a03dc569f7084e06a09aaf3044264e63964dabd/shim.sock" debug=false pid=8007
Mar 10 18:16:39 minikube dockerd[2384]: time="2020-03-10T18:16:39.434787686Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a48d701ab0948f96701d68d819286faa47550d7c067169c6dc3e3afd81663624/shim.sock" debug=false pid=8011
Mar 10 18:16:40 minikube dockerd[2384]: time="2020-03-10T18:16:40.555935798Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c0af5172978df88f968585fd7a41b980602f6b2f396f516dc56e0c8390ad54cc/shim.sock" debug=false pid=8122
Mar 10 18:16:41 minikube dockerd[2384]: time="2020-03-10T18:16:41.384071553Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8058ad503ef1ced43c20ba7e175d5072ceb7dde0143ca27bf45ac262c2f33cf9/shim.sock" debug=false pid=8184
Mar 10 18:16:41 minikube dockerd[2384]: time="2020-03-10T18:16:41.599523753Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/eb1668edc1041d9bde32e14589e30d10730979816b38aa893511cee112f93084/shim.sock" debug=false pid=8215
Mar 10 18:16:41 minikube dockerd[2384]: time="2020-03-10T18:16:41.930193536Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/05dcf2d3fa668d4626f09d0f1a89190805ec2476f906b8247a0703c3600f7bc2/shim.sock" debug=false pid=8253
Mar 10 18:16:43 minikube dockerd[2384]: time="2020-03-10T18:16:43.147306596Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7ad23a4e759eda539c1528c69156701acfc3da4e50bbcd2842d979fc948a0e6b/shim.sock" debug=false pid=8339
Mar 10 18:16:43 minikube dockerd[2384]: time="2020-03-10T18:16:43.658260333Z" level=info msg="shim reaped" id=7ad23a4e759eda539c1528c69156701acfc3da4e50bbcd2842d979fc948a0e6b
Mar 10 18:16:43 minikube dockerd[2384]: time="2020-03-10T18:16:43.668746691Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 10 18:16:43 minikube dockerd[2384]: time="2020-03-10T18:16:43.669044900Z" level=warning msg="7ad23a4e759eda539c1528c69156701acfc3da4e50bbcd2842d979fc948a0e6b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7ad23a4e759eda539c1528c69156701acfc3da4e50bbcd2842d979fc948a0e6b/mounts/shm, flags: 0x2: no such file or directory"
Mar 10 18:16:45 minikube dockerd[2384]: time="2020-03-10T18:16:45.222921729Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d3104d4ce1bc9727ff181e9c292191df0a9e6c0cb27c0f155ad4f647625f1049/shim.sock" debug=false pid=8427
Mar 10 18:16:45 minikube dockerd[2384]: time="2020-03-10T18:16:45.409927235Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8491d429fd8408de87b3172717c33c6274d0e40ad281c205c341a7940ad87359/shim.sock" debug=false pid=8452
Mar 10 18:16:45 minikube dockerd[2384]: time="2020-03-10T18:16:45.971233848Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2ec263fa7438e055004c55fb1ca2d0612bc8babdfb099eb42f00b141503731ee/shim.sock" debug=false pid=8523
Mar 10 18:16:47 minikube dockerd[2384]: time="2020-03-10T18:16:47.220245969Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/666b2d5d500ce70d095be19e1d5c298588592f3e7ae2d5a6c8006cf16537be4d/shim.sock" debug=false pid=8611
Mar 10 18:16:47 minikube dockerd[2384]: time="2020-03-10T18:16:47.902534049Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/32b0408125301a794eacc6e938507ed1ff678b670aec13a9de8e55f4ad01b2d8/shim.sock" debug=false pid=8761
Mar 10 18:16:51 minikube dockerd[2384]: time="2020-03-10T18:16:51.905439392Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/62bcbb4918003d2785025e30cf0be9486335fe61610a3b9da6025f9367886e6f/shim.sock" debug=false pid=8918
Mar 10 18:16:51 minikube dockerd[2384]: time="2020-03-10T18:16:51.913722777Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9e9031d499b89a52158e1d8dd44e856b4ad199c76ba43e1c0e33c237d02d8536/shim.sock" debug=false pid=8924
Mar 10 18:17:05 minikube dockerd[2384]: time="2020-03-10T18:17:05.094080848Z" level=info msg="shim reaped" id=32b0408125301a794eacc6e938507ed1ff678b670aec13a9de8e55f4ad01b2d8
Mar 10 18:17:05 minikube dockerd[2384]: time="2020-03-10T18:17:05.104489987Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 10 18:17:05 minikube dockerd[2384]: time="2020-03-10T18:17:05.104872941Z" level=warning msg="32b0408125301a794eacc6e938507ed1ff678b670aec13a9de8e55f4ad01b2d8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/32b0408125301a794eacc6e938507ed1ff678b670aec13a9de8e55f4ad01b2d8/mounts/shm, flags: 0x2: no such file or directory"
Mar 10 18:17:06 minikube dockerd[2384]: time="2020-03-10T18:17:06.282378817Z" level=info msg="shim reaped" id=ef1dada9969834e5574ccf11c727ea38d3bd1710e14f5d26fadc259fc65ac56f
Mar 10 18:17:06 minikube dockerd[2384]: time="2020-03-10T18:17:06.292690857Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 10 18:17:06 minikube dockerd[2384]: time="2020-03-10T18:17:06.293461783Z" level=warning msg="ef1dada9969834e5574ccf11c727ea38d3bd1710e14f5d26fadc259fc65ac56f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ef1dada9969834e5574ccf11c727ea38d3bd1710e14f5d26fadc259fc65ac56f/mounts/shm, flags: 0x2: no such file or directory"
Mar 10 18:17:06 minikube dockerd[2384]: time="2020-03-10T18:17:06.404045591Z" level=info msg="shim reaped" id=6692173735faad64b6a3f722751202a6608278698842b63e111a06396fcddc2e
Mar 10 18:17:06 minikube dockerd[2384]: time="2020-03-10T18:17:06.412871398Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 10 18:17:06 minikube dockerd[2384]: time="2020-03-10T18:17:06.413068901Z" level=warning msg="6692173735faad64b6a3f722751202a6608278698842b63e111a06396fcddc2e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/6692173735faad64b6a3f722751202a6608278698842b63e111a06396fcddc2e/mounts/shm, flags: 0x2: no such file or directory"
Mar 10 18:17:17 minikube dockerd[2384]: time="2020-03-10T18:17:17.614089481Z" level=info msg="shim reaped" id=315614d83ce8d391edba5335c95a98343c1bb26ea674bf395a54885805991898
Mar 10 18:17:17 minikube dockerd[2384]: time="2020-03-10T18:17:17.624465069Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 10 18:17:17 minikube dockerd[2384]: time="2020-03-10T18:17:17.624789923Z" level=warning msg="315614d83ce8d391edba5335c95a98343c1bb26ea674bf395a54885805991898 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/315614d83ce8d391edba5335c95a98343c1bb26ea674bf395a54885805991898/mounts/shm, flags: 0x2: no such file or directory"
Mar 10 18:17:20 minikube dockerd[2384]: time="2020-03-10T18:17:20.942805669Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bf5e54bc1bfa734ca105d0b4d793fe4f99db5757744ca0fb7f01c48e7d481660/shim.sock" debug=false pid=9996
Mar 10 18:17:26 minikube dockerd[2384]: time="2020-03-10T18:17:26.050223272Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e6d045f341273d78d84ce19dfd2f9dcb4fb6f40647a505f69ff3470146eb7825/shim.sock" debug=false pid=10082
Mar 10 18:17:27 minikube dockerd[2384]: time="2020-03-10T18:17:27.932189277Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/db9e99ef99298914a99f168cb7d6f8ca84f8dde1d15ac90c5ff167ad81651429/shim.sock" debug=false pid=10157
Mar 10 18:17:38 minikube dockerd[2384]: time="2020-03-10T18:17:38.034290424Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8548a4e4ba2f15c25acce1cabc478640e0719867714a6dedf8020b25f7a720fc/shim.sock" debug=false pid=10327
Mar 10 18:17:46 minikube dockerd[2384]: time="2020-03-10T18:17:46.518607812Z" level=info msg="shim reaped" id=bf5e54bc1bfa734ca105d0b4d793fe4f99db5757744ca0fb7f01c48e7d481660
Mar 10 18:17:46 minikube dockerd[2384]: time="2020-03-10T18:17:46.528709831Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 10 18:17:46 minikube dockerd[2384]: time="2020-03-10T18:17:46.529056238Z" level=warning msg="bf5e54bc1bfa734ca105d0b4d793fe4f99db5757744ca0fb7f01c48e7d481660 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/bf5e54bc1bfa734ca105d0b4d793fe4f99db5757744ca0fb7f01c48e7d481660/mounts/shm, flags: 0x2: no such file or directory"
Mar 10 18:18:11 minikube dockerd[2384]: time="2020-03-10T18:18:11.949249615Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/77bf824d2883bb92755980c08b751e7003efce618757f6efcef161aa18d78482/shim.sock" debug=false pid=10714

==> container status <==
CONTAINER           IMAGE                                                                                                     CREATED              STATE               NAME                        ATTEMPT             POD ID
77bf824d2883b       9687ec6baa7d6                                                                                             About a minute ago   Running             schedule-checker            3                   666b2d5d500ce
8548a4e4ba2f1       eb51a35975256                                                                                             2 minutes ago        Running             kubernetes-dashboard        2                   9cb0c19b8bdf9
db9e99ef99298       d109c0821a2b9                                                                                             2 minutes ago        Running             kube-scheduler              5                   d6b170844d3f0
e6d045f341273       b0f1517c1f4bb                                                                                             2 minutes ago        Running             kube-controller-manager     5                   9ef48ef39940c
bf5e54bc1bfa7       9687ec6baa7d6                                                                                             2 minutes ago        Exited              schedule-checker            2                   666b2d5d500ce
62bcbb4918003       fea3c7fc0b18c                                                                                             3 minutes ago        Running             memsql                      1                   eb1668edc1041
9e9031d499b89       031101f652f86                                                                                             3 minutes ago        Running             ssh-server                  1                   562a053489edf
2ec263fa7438e       9687ec6baa7d6                                                                                             3 minutes ago        Running             cmdg                        1                   8491d429fd840
d3104d4ce1bc9       4689081edb103                                                                                             3 minutes ago        Running             storage-provisioner         1                   cc682bc1c22e5
7ad23a4e759ed       busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135                           3 minutes ago        Exited              memsql-init                 1                   eb1668edc1041
05dcf2d3fa668       mailhog/mailhog@sha256:98c7e2e6621c897ad86f31610d756d76b8ee622c354c28a76f4ed49fb6ed996f                   3 minutes ago        Running             mailhog                     1                   e80dd3d5f1c2c
8058ad503ef1c       pafortin/goaws@sha256:e2cdefaa005ac7ff706585399026f784b0306b09b941ddf2030230c4a844adbd                    3 minutes ago        Running             sqs                         1                   a48d701ab0948
c0af5172978df       3b08661dc379d                                                                                             3 minutes ago        Running             dashboard-metrics-scraper   1                   fad367be92ae4
2263b4e18abd1       9687ec6baa7d6                                                                                             3 minutes ago        Running             webpack-devserver           1                   ff1f96ce69631
5f55e3c5e1a5f       9687ec6baa7d6                                                                                             3 minutes ago        Running             cmd                         1                   1cf336ba2ad1d
e66803543fc3b       9687ec6baa7d6                                                                                             3 minutes ago        Running             django-redirect             1                   e0973e783031f
81328a2fe825b       9687ec6baa7d6                                                                                             3 minutes ago        Running             django-secure               1                   3aeb142e086a6
4ce3df1729315       9687ec6baa7d6                                                                                             3 minutes ago        Running             events                      1                   cf1c1666aab02
d9f9d5d76d072       busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135                           3 minutes ago        Exited              configure                   1                   562a053489edf
7737caef32a6f       9687ec6baa7d6                                                                                             3 minutes ago        Running             user-management             1                   96bba911e872e
3f78c7a024280       ae853e93800dc                                                                                             3 minutes ago        Running             kube-proxy                  1                   02746db86a9cb
d9c916329fe1e       9687ec6baa7d6                                                                                             3 minutes ago        Running             django-microsites           1                   155631157d0f8
7c6a56e5c82a4       f76f959b2a494                                                                                             3 minutes ago        Running             mongo                       1                   4434377cbb3d6
315614d83ce8d       eb51a35975256                                                                                             3 minutes ago        Exited              kubernetes-dashboard        1                   9cb0c19b8bdf9
ddd399540e3e5       9687ec6baa7d6                                                                                             3 minutes ago        Running             partner-library             1                   c3b23ec663061
793d8c77959df       datadog/agent@sha256:8c54089bab7fb66c9f6ce5cb7206acb513efbd6e8e806cd8c7c51c1ace000846                     3 minutes ago        Running             datadog-agent               1                   b97ea87ca4d90
20e3dc737f67e       bd287e105bc19                                                                                             3 minutes ago        Running             postgres                    1                   950bcd57e745b
ec47fb404f308       1d2c7ac1c1bbe                                                                                             3 minutes ago        Running             lb                          1                   18a8e711ae2af
038d2c615635d       70f311871ae12                                                                                             3 minutes ago        Running             coredns                     1                   cf5a05fe441ba
078ff6ccf6c14       70f311871ae12                                                                                             3 minutes ago        Running             coredns                     1                   d226adde24ab3
488279b0394ce       791b6e40940cd                                                                                             3 minutes ago        Running             mysql8                      1                   1ee35b1e0b7af
2bd3cb32adf2f       458eedf9515fb                                                                                             3 minutes ago        Running             elasticsearch               1                   380d261653f1d
8c077912b3b59       busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135                           3 minutes ago        Exited              configure                   1                   18a8e711ae2af
fcd55158763c1       deangiberson/aws-dynamodb-local@sha256:09fbd60d426de65cfbc782df21b26beba3a31740f111b9f818ac88303d5afe23   3 minutes ago        Running             dynamodb                    1                   693426b889e11
56637f4386909       190ed8a616203                                                                                             3 minutes ago        Running             redis                       1                   ac1ba62692a74
36c65472e18d7       deangiberson/aws-dynamodb-local@sha256:09fbd60d426de65cfbc782df21b26beba3a31740f111b9f818ac88303d5afe23   3 minutes ago        Exited              copy-data                   1                   693426b889e11
ea618dc5ab7c5       d46de11f718fb                                                                                             3 minutes ago        Running             solr                        1                   950162b6d0e49
27bc3f075385c       busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135                           3 minutes ago        Exited              copy-data                   1                   950162b6d0e49
81e08ca6a143d       303ce5db0e90d                                                                                             3 minutes ago        Running             etcd                        3                   5de264db95e16
6692173735faa       b0f1517c1f4bb                                                                                             3 minutes ago        Exited              kube-controller-manager     4                   9ef48ef39940c
ef1dada996983       d109c0821a2b9                                                                                             3 minutes ago        Exited              kube-scheduler              4                   d6b170844d3f0
ed156c20d1298       90d27391b7808                                                                                             3 minutes ago        Running             kube-apiserver              2                   970bc4daf4535
096dc6932ecdb       9687ec6baa7d6                                                                                             3 hours ago          Exited              events                      0                   634879fbabce4
00cc35af28e5f       9687ec6baa7d6                                                                                             3 hours ago          Exited              cmdg                        0                   76460fc7366b9
38cbe0e44aba1       9687ec6baa7d6                                                                                             3 hours ago          Exited              partner-library             0                   7b1f6a26af160
a8ae66af2322a       9687ec6baa7d6                                                                                             3 hours ago          Exited              webpack-devserver           0                   853d933a4be48
0ecc76ea5d80d       9687ec6baa7d6                                                                                             3 hours ago          Exited              cmd                         0                   839e4920770e3
c434d83119bcd       9687ec6baa7d6                                                                                             3 hours ago          Exited              django-redirect             0                   44d99567a7918
397328b881d52       9687ec6baa7d6                                                                                             3 hours ago          Exited              django-secure               0                   408b09dd9d721
338bf111de050       9687ec6baa7d6                                                                                             3 hours ago          Exited              django-microsites           0                   e4968343ac46c
05de56dbc8150       9687ec6baa7d6                                                                                             3 hours ago          Exited              user-management             0                   d5ce9d5f8940d
fa9b35177acbc       031101f652f86                                                                                             3 hours ago          Exited              ssh-server                  0                   3d189d886146b
8d3941d3e532c       deangiberson/aws-dynamodb-local@sha256:09fbd60d426de65cfbc782df21b26beba3a31740f111b9f818ac88303d5afe23   3 hours ago          Exited              dynamodb                    0                   fa1b79acbf2f1
76e86e8e71225       d46de11f718fb                                                                                             3 hours ago          Exited              solr                        0                   a64c1a11ac4a8
74bc4dcf06fd3       mailhog/mailhog@sha256:98c7e2e6621c897ad86f31610d756d76b8ee622c354c28a76f4ed49fb6ed996f                   3 hours ago          Exited              mailhog                     0                   025e6412949e8
03390d96ae7d6       190ed8a616203                                                                                             3 hours ago          Exited              redis                       0                   fa761496f6318
a98eb7fc3e1dc       1d2c7ac1c1bbe                                                                                             3 hours ago          Exited              lb                          0                   720fd780706f2
ca40a01907dd9       fea3c7fc0b18c                                                                                             3 hours ago          Exited              memsql                      0                   1b89595c1cb75
af23191ca4e5f       3b08661dc379d                                                                                             3 hours ago          Exited              dashboard-metrics-scraper   0                   65a1407f871ae
30e8f1ee832e6       datadog/agent@sha256:8c54089bab7fb66c9f6ce5cb7206acb513efbd6e8e806cd8c7c51c1ace000846                     3 hours ago          Exited              datadog-agent               0                   24b02078db0d0
e433d83a11845       bd287e105bc19                                                                                             3 hours ago          Exited              postgres                    0                   c981cd7770bd0
c527d2617c3d5       pafortin/goaws@sha256:e2cdefaa005ac7ff706585399026f784b0306b09b941ddf2030230c4a844adbd                    3 hours ago          Exited              sqs                         0                   e8a7f2b9b2a63
1e1c50e4db647       791b6e40940cd                                                                                             3 hours ago          Exited              mysql8                      0                   d2ceade66e6f2
34902ed62f52f       f76f959b2a494                                                                                             3 hours ago          Exited              mongo                       0                   ee693a921f152
bd4eb5633a024       70f311871ae12                                                                                             3 hours ago          Exited              coredns                     0                   94151fe90c43a
18ccbffe6fef9       458eedf9515fb                                                                                             3 hours ago          Exited              elasticsearch               0                   70db68205a3f5
15d9494a4efcb       70f311871ae12                                                                                             3 hours ago          Exited              coredns                     0                   c128658efe629
49fa768f5c304       4689081edb103                                                                                             3 hours ago          Exited              storage-provisioner         0                   43179929c8a80
34ff7ff849342       ae853e93800dc                                                                                             3 hours ago          Exited              kube-proxy                  0                   a53bcb10fda18
5f6e02e49707a       303ce5db0e90d                                                                                             3 hours ago          Exited              etcd                        2                   559dd8fca1bfa
b54ee4ea498e3       90d27391b7808                                                                                             3 hours ago          Exited              kube-apiserver              1                   6bd1e804af377

==> coredns [038d2c615635] <==
E0310 18:16:58.798945       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
Trace[773249627]: [30.000312397s] [30.000312397s] END
I0310 18:16:58.798965       1 trace.go:82] Trace[92507523]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-03-10 18:16:28.798015767 +0000 UTC m=+0.041846518) (total time: 30.000837951s):
Trace[92507523]: [30.000837951s] [30.000837951s] END
E0310 18:16:58.798969       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0310 18:16:58.800457       1 trace.go:82] Trace[1421839197]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-03-10 18:16:28.798727588 +0000 UTC m=+0.042558310) (total time: 30.001714608s):
Trace[1421839197]: [30.001714608s] [30.001714608s] END
E0310 18:16:58.800500       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> coredns [078ff6ccf6c1] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
E0310 18:16:57.634883       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0310 18:16:57.634857       1 trace.go:82] Trace[1882243101]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-03-10 18:16:27.633787733 +0000 UTC m=+0.247399499) (total time: 30.001008412s):
Trace[1882243101]: [30.001008412s] [30.001008412s] END
I0310 18:16:57.635415       1 trace.go:82] Trace[1125220571]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-03-10 18:16:27.634895846 +0000 UTC m=+0.248507589) (total time: 30.000503447s):
Trace[1125220571]: [30.000503447s] [30.000503447s] END
E0310 18:16:57.635437       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0310 18:16:57.635767       1 trace.go:82] Trace[629027841]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-03-10 18:16:27.635156074 +0000 UTC m=+0.248767785) (total time: 30.000599788s):
Trace[629027841]: [30.000599788s] [30.000599788s] END
E0310 18:16:57.635786       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> coredns [15d9494a4efc] <==
E0310 18:15:00.365880       1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=942276&timeout=5m37s&timeoutSeconds=337&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E0310 18:15:00.365920       1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=945235&timeout=5m10s&timeoutSeconds=310&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E0310 18:15:00.365971       1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=961645&timeout=6m18s&timeoutSeconds=378&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s

==> coredns [bd4eb5633a02] <==
.:53
E0310 18:15:00.365697       1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=942276&timeout=8m22s&timeoutSeconds=502&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
E0310 18:15:00.365739       1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=961645&timeout=6m24s&timeoutSeconds=384&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E0310 18:15:00.365848       1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=945235&timeout=6m32s&timeoutSeconds=392&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E0310 18:15:01.367253       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
E0310 18:15:01.373376       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
E0310 18:15:01.373457       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s

==> dmesg <==
              00:00:00.001455 main     Package type: LINUX_64BITS_GENERIC
[  +0.000037] 00:00:00.001509 main     5.2.32 r132073 started. Verbose level = 0
[  +0.331050] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +4.705706] hpet1: lost 288 rtc interrupts
[  +5.004114] hpet1: lost 309 rtc interrupts
[  +3.493229] systemd-fstab-generator[2352]: Ignoring "noauto" for root device
[  +0.127771] systemd-fstab-generator[2368]: Ignoring "noauto" for root device
[  +1.382312] hpet_rtc_timer_reinit: 108 callbacks suppressed
[  +0.000001] hpet1: lost 318 rtc interrupts
[Mar10 18:16] systemd-fstab-generator[2649]: Ignoring "noauto" for root device
[  +0.094454] hpet1: lost 313 rtc interrupts
[  +5.000899] hpet1: lost 318 rtc interrupts
[ +10.001859] hpet_rtc_timer_reinit: 33 callbacks suppressed
[  +0.000001] hpet1: lost 319 rtc interrupts
[  +5.010100] hpet_rtc_timer_reinit: 6 callbacks suppressed
[  +0.000000] hpet1: lost 318 rtc interrupts
[  +5.004326] hpet_rtc_timer_reinit: 42 callbacks suppressed
[  +0.000001] hpet1: lost 318 rtc interrupts
[  +5.011138] hpet1: lost 319 rtc interrupts
[  +5.589670] kauditd_printk_skb: 1 callbacks suppressed
[  +4.411716] hpet1: lost 318 rtc interrupts
[  +5.001088] hpet1: lost 318 rtc interrupts
[ +10.001423] hpet_rtc_timer_reinit: 39 callbacks suppressed
[  +0.000001] hpet1: lost 318 rtc interrupts
[Mar10 18:17] hpet1: lost 318 rtc interrupts
[ +10.001894] hpet_rtc_timer_reinit: 3 callbacks suppressed
[  +0.000001] hpet1: lost 318 rtc interrupts
[  +5.000668] hpet1: lost 318 rtc interrupts
[  +5.001034] hpet1: lost 318 rtc interrupts
[  +5.000371] hpet1: lost 318 rtc interrupts
[  +5.000969] hpet1: lost 318 rtc interrupts
[  +5.001090] hpet1: lost 318 rtc interrupts
[  +5.000515] hpet1: lost 318 rtc interrupts
[  +4.362883] NFSD: Unable to end grace period: -110
[  +0.637777] hpet1: lost 318 rtc interrupts
[  +5.001150] hpet1: lost 318 rtc interrupts
[  +5.000441] hpet1: lost 318 rtc interrupts
[Mar10 18:18] hpet1: lost 318 rtc interrupts
[  +5.000155] hpet1: lost 318 rtc interrupts
[  +5.000999] hpet1: lost 318 rtc interrupts
[  +5.001679] hpet1: lost 318 rtc interrupts
[  +5.003616] hpet1: lost 318 rtc interrupts
[  +5.003014] hpet1: lost 319 rtc interrupts
[  +5.003193] hpet1: lost 319 rtc interrupts
[  +5.002398] hpet1: lost 318 rtc interrupts
[  +5.002681] hpet1: lost 318 rtc interrupts
[  +5.002485] hpet1: lost 318 rtc interrupts
[  +5.001952] hpet1: lost 319 rtc interrupts
[  +5.000408] hpet1: lost 318 rtc interrupts
[Mar10 18:19] hpet1: lost 318 rtc interrupts
[  +5.001192] hpet1: lost 318 rtc interrupts
[  +5.000207] hpet1: lost 318 rtc interrupts
[  +5.001442] hpet1: lost 318 rtc interrupts
[  +5.000349] hpet1: lost 318 rtc interrupts
[  +5.001328] hpet1: lost 318 rtc interrupts
[  +5.001364] hpet1: lost 319 rtc interrupts
[  +5.000404] hpet1: lost 318 rtc interrupts
[  +5.001123] hpet1: lost 318 rtc interrupts
[  +5.000940] hpet1: lost 318 rtc interrupts
[  +5.001484] hpet1: lost 318 rtc interrupts

==> kernel <==
 18:19:58 up 4 min,  0 users,  load average: 2.42, 3.91, 1.87
Linux minikube 4.19.88 #1 SMP Tue Feb 4 22:25:03 PST 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.8"

==> kube-apiserver [b54ee4ea498e] <==
I0310 18:05:46.315412       1 trace.go:116] Trace[544887124]: "Update" url:/api/v1/namespaces/default/configmaps/datadog-leader-election,user-agent:agent/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.7 (started: 2020-03-10 18:05:44.440707093 +0000 UTC m=+7943.194403893) (total time: 1.874693927s):
Trace[544887124]: [1.874661082s] [1.874519627s] Object stored in database
I0310 18:05:46.315533       1 trace.go:116] Trace[500825093]: "List etcd3" key:/masterleases/,resourceVersion:0,limit:0,continue: (started: 2020-03-10 18:05:44.212727352 +0000 UTC m=+7942.966424156) (total time: 2.102795732s):
Trace[500825093]: [2.102795732s] [2.102795732s] END
I0310 18:05:47.333858       1 trace.go:116] Trace[2017823004]: "Get" url:/api/v1/persistentvolumes/pvc-e8ad1c82-0a84-47ae-8a23-ace6027e6a5c,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:05:46.317578903 +0000 UTC m=+7945.071275697) (total time: 1.01624952s):
Trace[2017823004]: [1.016201435s] [1.01619751s] About to write a response
I0310 18:05:48.291796       1 trace.go:116] Trace[1216580905]: "Get" url:/api/v1/persistentvolumes/pvc-64e6f744-8bb3-4ef5-81fc-29d261eaa791,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:05:47.459078419 +0000 UTC m=+7946.212775208) (total time: 832.69228ms):
Trace[1216580905]: [832.652613ms] [832.649036ms] About to write a response
I0310 18:05:48.436872       1 trace.go:116] Trace[1287262351]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-03-10 18:05:47.459122492 +0000 UTC m=+7946.212819289) (total time: 977.72694ms):
Trace[1287262351]: [977.631059ms] [977.50103ms] Transaction committed
I0310 18:05:48.437108       1 trace.go:116] Trace[1364696840]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-10 18:05:47.459066673 +0000 UTC m=+7946.212763467) (total time: 978.024668ms):
Trace[1364696840]: [977.987764ms] [977.952777ms] Object stored in database
I0310 18:05:49.079712       1 trace.go:116] Trace[1669115426]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-10 18:05:48.437968093 +0000 UTC m=+7947.191664887) (total time: 641.718335ms):
Trace[1669115426]: [641.681346ms] [641.669449ms] About to write a response
I0310 18:14:46.597250       1 trace.go:116] Trace[1898961550]: "Get" url:/api/v1/namespaces/default/persistentvolumeclaims/src,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:14:46.04510734 +0000 UTC m=+8484.798804152) (total time: 552.111102ms):
Trace[1898961550]: [552.065816ms] [552.054694ms] About to write a response
I0310 18:14:50.702183       1 trace.go:116] Trace[1559788504]: "List" url:/api/v1/componentstatuses,user-agent:agent/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.7 (started: 2020-03-10 18:14:49.475923207 +0000 UTC m=+8488.229619996) (total time: 1.226235686s):
Trace[1559788504]: [1.22617674s] [1.22616484s] Listing from storage done
I0310 18:14:51.704845       1 trace.go:116] Trace[418717699]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-03-10 18:14:51.197409827 +0000 UTC m=+8489.951106631) (total time: 507.397403ms):
Trace[418717699]: [507.379715ms] [507.246272ms] Transaction committed
I0310 18:14:51.705129       1 trace.go:116] Trace[1808436737]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m01,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:14:51.197331447 +0000 UTC m=+8489.951028240) (total time: 507.679094ms):
Trace[1808436737]: [507.628807ms] [507.57585ms] Object stored in database
I0310 18:14:51.756024       1 trace.go:116] Trace[114868796]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-10 18:14:51.253734119 +0000 UTC m=+8490.007430920) (total time: 502.258066ms):
Trace[114868796]: [502.215244ms] [502.197108ms] About to write a response
I0310 18:14:52.196084       1 trace.go:116] Trace[621101833]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-10 18:14:51.393148292 +0000 UTC m=+8490.146845087) (total time: 802.907509ms):
Trace[621101833]: [802.871496ms] [802.845493ms] About to write a response
I0310 18:14:52.360159       1 trace.go:116] Trace[1989908138]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-10 18:14:51.75743639 +0000 UTC m=+8490.511133184) (total time: 602.693391ms):
Trace[1989908138]: [602.653556ms] [602.636835ms] About to write a response
I0310 18:14:52.861860       1 trace.go:116] Trace[164841873]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-03-10 18:14:52.301129441 +0000 UTC m=+8491.054826243) (total time: 560.709885ms):
Trace[164841873]: [560.689177ms] [560.492826ms] Transaction committed
I0310 18:14:52.862122       1 trace.go:116] Trace[880639099]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-10 18:14:52.301021425 +0000 UTC m=+8491.054718216) (total time: 560.897666ms):
Trace[880639099]: [560.863476ms] [560.783363ms] Object stored in database
I0310 18:14:52.941046       1 trace.go:116] Trace[968835960]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-03-10 18:14:52.362517323 +0000 UTC m=+8491.116214121) (total time: 578.473884ms):
Trace[968835960]: [578.466703ms] [578.302745ms] Transaction committed
I0310 18:14:52.941245       1 trace.go:116] Trace[567915891]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-10 18:14:52.362460264 +0000 UTC m=+8491.116157050) (total time: 578.63818ms):
Trace[567915891]: [578.614766ms] [578.579279ms] Object stored in database
I0310 18:14:53.559339       1 trace.go:116] Trace[682143076]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:14:52.941942781 +0000 UTC m=+8491.695639567) (total time: 617.372697ms):
Trace[682143076]: [617.348709ms] [617.345891ms] About to write a response
I0310 18:15:00.363324       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController
I0310 18:15:00.363349       1 controller.go:122] Shutting down OpenAPI controller
I0310 18:15:00.363359       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0310 18:15:00.363368       1 nonstructuralschema_controller.go:203] Shutting down NonStructuralSchemaConditionController
I0310 18:15:00.363373       1 establishing_controller.go:84] Shutting down EstablishingController
I0310 18:15:00.363378       1 naming_controller.go:299] Shutting down NamingConditionController
I0310 18:15:00.363383       1 customresource_discovery_controller.go:219] Shutting down DiscoveryController
I0310 18:15:00.363389       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
I0310 18:15:00.363395       1 crd_finalizer.go:275] Shutting down CRDFinalizer
I0310 18:15:00.363400       1 apiapproval_controller.go:197] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I0310 18:15:00.363405       1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
I0310 18:15:00.363406       1 controller.go:180] Shutting down kubernetes service endpoint reconciler
I0310 18:15:00.363410       1 autoregister_controller.go:164] Shutting down autoregister controller
I0310 18:15:00.363416       1 available_controller.go:398] Shutting down AvailableConditionController
I0310 18:15:00.363429       1 dynamic_cafile_content.go:181] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0310 18:15:00.363436       1 dynamic_cafile_content.go:181] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0310 18:15:00.363498       1 controller.go:87] Shutting down OpenAPI AggregationController
I0310 18:15:00.363506       1 dynamic_cafile_content.go:181] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0310 18:15:00.363512       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0310 18:15:00.363517       1 dynamic_cafile_content.go:181] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0310 18:15:00.364686       1 secure_serving.go:222] Stopped listening on [::]:8443
E0310 18:15:00.387794       1 controller.go:183] Get https://localhost:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-apiserver [ed156c20d129] <==
Trace[211220618]: [2.709332638s] [2.709328586s] About to write a response
I0310 18:17:22.359432       1 trace.go:116] Trace[435046722]: "Get" url:/api/v1/namespaces/default/pods/cmd,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:17.481302788 +0000 UTC m=+66.004186354) (total time: 4.878113365s):
Trace[435046722]: [4.878070277s] [4.8780515s] About to write a response
I0310 18:17:22.359601       1 trace.go:116] Trace[1200087400]: "Get" url:/api/v1/namespaces/default/persistentvolumeclaims/src,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:19.006681767 +0000 UTC m=+67.529565339) (total time: 3.352906215s):
Trace[1200087400]: [3.352857396s] [3.352757916s] About to write a response
I0310 18:17:22.757597       1 trace.go:116] Trace[1776203415]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:18.49556499 +0000 UTC m=+67.018448560) (total time: 4.261886339s):
Trace[1776203415]: [4.261842345s] [4.261835746s] About to write a response
I0310 18:17:23.401505       1 trace.go:116] Trace[2065287444]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-03-10 18:17:22.363608764 +0000 UTC m=+70.886492347) (total time: 1.037873929s):
Trace[2065287444]: [1.037813567s] [1.037202306s] Transaction committed
I0310 18:17:23.401831       1 trace.go:116] Trace[631062231]: "Patch" url:/api/v1/namespaces/default/pods/cmd/status,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:22.363552272 +0000 UTC m=+70.886435844) (total time: 1.038259068s):
Trace[631062231]: [1.037979249s] [1.037424214s] Object stored in database
I0310 18:17:23.660644       1 trace.go:116] Trace[718567041]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:20.952861476 +0000 UTC m=+69.475745053) (total time: 2.707759049s):
Trace[718567041]: [2.707721275s] [2.707646024s] Object stored in database
I0310 18:17:24.443118       1 trace.go:116] Trace[1549059598]: "List etcd3" key:/masterleases/,resourceVersion:0,limit:0,continue: (started: 2020-03-10 18:17:22.758256912 +0000 UTC m=+71.281140508) (total time: 1.684838401s):
Trace[1549059598]: [1.684838401s] [1.684838401s] END
I0310 18:17:24.443456       1 trace.go:116] Trace[1361742935]: "Get" url:/api/v1/namespaces/kube-system/pods/etcd-m01,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:23.402985234 +0000 UTC m=+71.925868805) (total time: 1.040452141s):
Trace[1361742935]: [1.040404354s] [1.040399709s] About to write a response
I0310 18:17:24.577302       1 trace.go:116] Trace[858951555]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:23.661602196 +0000 UTC m=+72.184485763) (total time: 915.678125ms):
Trace[858951555]: [915.644603ms] [915.548123ms] Object stored in database
I0310 18:17:24.660166       1 trace.go:116] Trace[417820058]: "Get" url:/api/v1/persistentvolumes/pvc-64e6f744-8bb3-4ef5-81fc-29d261eaa791,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:22.363354227 +0000 UTC m=+70.886237793) (total time: 2.296787251s):
Trace[417820058]: [2.296746744s] [2.296743179s] About to write a response
I0310 18:17:24.813723       1 trace.go:116] Trace[540973210]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-03-10 18:17:23.771267724 +0000 UTC m=+72.294151304) (total time: 1.042435541s):
Trace[540973210]: [1.042413496s] [1.042297292s] Transaction committed
I0310 18:17:24.813854       1 trace.go:116] Trace[227291569]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m01,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:23.77119659 +0000 UTC m=+72.294080158) (total time: 1.042642036s):
Trace[227291569]: [1.042602873s] [1.042554645s] Object stored in database
I0310 18:17:24.844011       1 trace.go:116] Trace[582073454]: "List" url:/api/v1/componentstatuses,user-agent:agent/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.12 (started: 2020-03-10 18:17:22.454440041 +0000 UTC m=+70.977323608) (total time: 2.389548668s):
Trace[582073454]: [2.389467281s] [2.389454632s] Listing from storage done
I0310 18:17:25.051525       1 trace.go:116] Trace[713382396]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:24.444306807 +0000 UTC m=+72.967190374) (total time: 607.123114ms):
Trace[713382396]: [607.074308ms] [607.069945ms] About to write a response
I0310 18:17:26.444772       1 trace.go:116] Trace[840285707]: "Get" url:/api/v1/persistentvolumes/pvc-634f939e-43bc-4009-ae04-a23d72bdc703,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:24.870932393 +0000 UTC m=+73.393815959) (total time: 1.573814923s):
Trace[840285707]: [1.57377319s] [1.573769093s] About to write a response
I0310 18:17:26.472911       1 trace.go:116] Trace[1572910158]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-6955765f44-jqqwl,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:24.85301729 +0000 UTC m=+73.375900855) (total time: 1.619869649s):
Trace[1572910158]: [1.619817452s] [1.619811334s] About to write a response
I0310 18:17:26.473180       1 trace.go:116] Trace[156579476]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:25.052400343 +0000 UTC m=+73.575283919) (total time: 1.420767391s):
Trace[156579476]: [1.420704975s] [1.420700182s] About to write a response
I0310 18:17:26.473376       1 trace.go:116] Trace[335546022]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:24.867108829 +0000 UTC m=+73.389992390) (total time: 1.606253953s):
Trace[335546022]: [1.606233269s] [1.606132255s] Object stored in database
I0310 18:17:27.156236       1 trace.go:116] Trace[1601853214]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-03-10 18:17:26.473571283 +0000 UTC m=+74.996454864) (total time: 682.644688ms):
Trace[1601853214]: [336.641207ms] [305.41271ms] Transaction prepared
Trace[1601853214]: [682.631617ms] [345.99041ms] Transaction committed
I0310 18:17:27.197365       1 trace.go:116] Trace[1556270829]: "Get" url:/api/v1/namespaces/default/pods/mailhog-0,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:26.5983854 +0000 UTC m=+75.121268967) (total time: 598.95813ms):
Trace[1556270829]: [598.915007ms] [598.910874ms] About to write a response
I0310 18:17:28.377583       1 trace.go:116] Trace[1567194689]: "Get" url:/api/v1/persistentvolumes/pvc-64e6f744-8bb3-4ef5-81fc-29d261eaa791,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:26.643289935 +0000 UTC m=+75.166173511) (total time: 1.734270797s):
Trace[1567194689]: [1.734234874s] [1.734229051s] About to write a response
I0310 18:17:28.498051       1 trace.go:116] Trace[51538250]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-10 18:17:27.227225822 +0000 UTC m=+75.750109391) (total time: 1.270765947s):
Trace[51538250]: [1.270735798s] [1.27070237s] About to write a response
I0310 18:17:28.585628       1 trace.go:116] Trace[1814553991]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:27.157416645 +0000 UTC m=+75.680300209) (total time: 1.428162314s):
Trace[1814553991]: [1.428131688s] [1.428128035s] About to write a response
I0310 18:17:28.587079       1 trace.go:116] Trace[474072100]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:27.489978677 +0000 UTC m=+76.012862242) (total time: 1.097088025s):
Trace[474072100]: [1.097068368s] [1.096991531s] Object stored in database
I0310 18:17:29.165879       1 trace.go:116] Trace[1715490599]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-scheduler-m01,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:27.222455796 +0000 UTC m=+75.745339363) (total time: 1.9433263s):
Trace[1715490599]: [1.94327901s] [1.943274469s] About to write a response
I0310 18:17:29.213388       1 trace.go:116] Trace[223311479]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:28.710638219 +0000 UTC m=+77.233521786) (total time: 502.724208ms):
Trace[223311479]: [502.686058ms] [502.612559ms] Object stored in database
I0310 18:17:29.307152       1 trace.go:116] Trace[379379081]: "Get" url:/api/v1/persistentvolumes/pvc-64e6f744-8bb3-4ef5-81fc-29d261eaa791,user-agent:kubelet/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-10 18:17:28.621453823 +0000 UTC m=+77.144337385) (total time: 685.677022ms):
Trace[379379081]: [685.640088ms] [685.636885ms] About to write a response
I0310 18:17:46.804366       1 trace.go:116] Trace[1457924716]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-10 18:17:46.172303318 +0000 UTC m=+94.695186888) (total time: 632.034399ms):
Trace[1457924716]: [631.996315ms] [631.983252ms] About to write a response
I0310 18:17:46.822425       1 trace.go:116] Trace[932059180]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/node-controller,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/kube-controller-manager,client:127.0.0.1 (started: 2020-03-10 18:17:46.176606409 +0000 UTC m=+94.699489980) (total time: 645.793789ms):
Trace[932059180]: [645.770551ms] [645.766534ms] About to write a response

==> kube-controller-manager [6692173735fa] <==
I0310 18:16:52.833232       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I0310 18:16:52.833248       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0310 18:16:52.833255       1 controllermanager.go:533] Started "resourcequota"
I0310 18:16:52.833425       1 resource_quota_controller.go:271] Starting resource quota controller
I0310 18:16:52.833430       1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0310 18:16:52.833442       1 resource_quota_monitor.go:303] QuotaMonitor running
I0310 18:16:53.382566       1 controllermanager.go:533] Started "garbagecollector"
I0310 18:16:53.382883       1 garbagecollector.go:129] Starting garbage collector controller
I0310 18:16:53.382904       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0310 18:16:53.382920       1 graph_builder.go:282] GraphBuilder running
I0310 18:16:53.438451       1 controllermanager.go:533] Started "persistentvolume-binder"
I0310 18:16:53.438707       1 pv_controller_base.go:294] Starting persistent volume controller
I0310 18:16:53.438744       1 shared_informer.go:197] Waiting for caches to sync for persistent volume
I0310 18:16:53.446963       1 controllermanager.go:533] Started "replicaset"
I0310 18:16:53.447138       1 replica_set.go:180] Starting replicaset controller
I0310 18:16:53.447170       1 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
I0310 18:16:54.217367       1 controllermanager.go:533] Started "horizontalpodautoscaling"
I0310 18:16:54.217444       1 horizontal.go:156] Starting HPA controller
I0310 18:16:54.217449       1 shared_informer.go:197] Waiting for caches to sync for HPA
E0310 18:16:54.222484       1 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0310 18:16:54.222505       1 controllermanager.go:525] Skipping "service"
W0310 18:16:54.244611       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="m01" does not exist
I0310 18:16:54.272276       1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
I0310 18:16:54.272481       1 shared_informer.go:204] Caches are synced for certificate-csrsigning 
I0310 18:16:54.276164       1 shared_informer.go:204] Caches are synced for PV protection 
I0310 18:16:54.295954       1 shared_informer.go:204] Caches are synced for service account 
I0310 18:16:54.299880       1 shared_informer.go:204] Caches are synced for expand 
I0310 18:16:54.301995       1 shared_informer.go:204] Caches are synced for namespace 
I0310 18:16:54.320324       1 shared_informer.go:204] Caches are synced for certificate-csrapproving 
I0310 18:16:54.326126       1 shared_informer.go:204] Caches are synced for TTL 
I0310 18:16:54.333781       1 shared_informer.go:204] Caches are synced for job 
I0310 18:16:54.341396       1 shared_informer.go:204] Caches are synced for persistent volume 
I0310 18:16:54.345296       1 shared_informer.go:204] Caches are synced for stateful set 
I0310 18:16:54.350283       1 shared_informer.go:204] Caches are synced for endpoint 
I0310 18:16:54.365825       1 shared_informer.go:204] Caches are synced for daemon sets 
I0310 18:16:54.383066       1 shared_informer.go:204] Caches are synced for PVC protection 
I0310 18:16:54.383092       1 shared_informer.go:204] Caches are synced for taint 
I0310 18:16:54.383124       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
W0310 18:16:54.383303       1 node_lifecycle_controller.go:1058] Missing timestamp for Node m01. Assuming now as a timestamp.
I0310 18:16:54.383342       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
I0310 18:16:54.384215       1 taint_manager.go:186] Starting NoExecuteTaintManager
I0310 18:16:54.384673       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"42f82cce-aaf2-401b-93fa-c0652b877418", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node m01 event: Registered Node m01 in Controller
I0310 18:16:54.427185       1 shared_informer.go:204] Caches are synced for ReplicationController 
I0310 18:16:54.427231       1 shared_informer.go:204] Caches are synced for attach detach 
I0310 18:16:54.431813       1 shared_informer.go:204] Caches are synced for GC 
I0310 18:16:54.578202       1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0310 18:16:54.627675       1 shared_informer.go:204] Caches are synced for bootstrap_signer 
I0310 18:16:54.643909       1 shared_informer.go:204] Caches are synced for disruption 
I0310 18:16:54.643997       1 disruption.go:338] Sending events to api server.
I0310 18:16:54.647644       1 shared_informer.go:204] Caches are synced for ReplicaSet 
I0310 18:16:54.665062       1 shared_informer.go:204] Caches are synced for deployment 
I0310 18:16:54.778726       1 shared_informer.go:204] Caches are synced for resource quota 
I0310 18:16:54.783187       1 shared_informer.go:204] Caches are synced for garbage collector 
I0310 18:16:54.783203       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0310 18:16:54.819219       1 shared_informer.go:204] Caches are synced for HPA 
I0310 18:16:54.833572       1 shared_informer.go:204] Caches are synced for resource quota 
I0310 18:17:02.559660       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0310 18:17:02.559758       1 shared_informer.go:204] Caches are synced for garbage collector 
I0310 18:17:06.189363       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded
F0310 18:17:06.189456       1 controllermanager.go:279] leaderelection lost

==> kube-controller-manager [e6d045f34127] <==
I0310 18:17:48.532936       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0310 18:17:48.533079       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I0310 18:17:48.533109       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I0310 18:17:48.533210       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0310 18:17:48.533312       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I0310 18:17:48.533339       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I0310 18:17:48.533430       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I0310 18:17:48.533554       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I0310 18:17:48.533580       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0310 18:17:48.533680       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I0310 18:17:48.533811       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0310 18:17:48.533836       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0310 18:17:48.533922       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0310 18:17:48.533949       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I0310 18:17:48.534151       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0310 18:17:48.534287       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I0310 18:17:48.534308       1 controllermanager.go:533] Started "resourcequota"
I0310 18:17:48.535236       1 resource_quota_controller.go:271] Starting resource quota controller
I0310 18:17:48.535252       1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0310 18:17:48.535267       1 resource_quota_monitor.go:303] QuotaMonitor running
I0310 18:17:48.542246       1 controllermanager.go:533] Started "replicaset"
I0310 18:17:48.542757       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0310 18:17:48.542907       1 replica_set.go:180] Starting replicaset controller
I0310 18:17:48.542940       1 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
W0310 18:17:48.555733       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="m01" does not exist
I0310 18:17:48.574984       1 shared_informer.go:204] Caches are synced for ReplicationController 
I0310 18:17:48.580350       1 shared_informer.go:204] Caches are synced for PVC protection 
I0310 18:17:48.580370       1 shared_informer.go:204] Caches are synced for certificate-csrapproving 
I0310 18:17:48.585745       1 shared_informer.go:204] Caches are synced for TTL 
I0310 18:17:48.598308       1 shared_informer.go:204] Caches are synced for bootstrap_signer 
I0310 18:17:48.604793       1 shared_informer.go:204] Caches are synced for certificate-csrsigning 
I0310 18:17:48.606301       1 shared_informer.go:204] Caches are synced for GC 
I0310 18:17:48.612003       1 shared_informer.go:204] Caches are synced for stateful set 
I0310 18:17:48.627331       1 shared_informer.go:204] Caches are synced for taint 
I0310 18:17:48.627535       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
W0310 18:17:48.627657       1 node_lifecycle_controller.go:1058] Missing timestamp for Node m01. Assuming now as a timestamp.
I0310 18:17:48.627785       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
I0310 18:17:48.628070       1 taint_manager.go:186] Starting NoExecuteTaintManager
I0310 18:17:48.629816       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"42f82cce-aaf2-401b-93fa-c0652b877418", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node m01 event: Registered Node m01 in Controller
I0310 18:17:48.632461       1 shared_informer.go:204] Caches are synced for service account 
I0310 18:17:48.633477       1 shared_informer.go:204] Caches are synced for PV protection 
I0310 18:17:48.633505       1 shared_informer.go:204] Caches are synced for HPA 
I0310 18:17:48.636666       1 shared_informer.go:204] Caches are synced for expand 
I0310 18:17:48.638696       1 shared_informer.go:204] Caches are synced for persistent volume 
I0310 18:17:48.639511       1 shared_informer.go:204] Caches are synced for endpoint 
I0310 18:17:48.643606       1 shared_informer.go:204] Caches are synced for ReplicaSet 
I0310 18:17:48.646966       1 shared_informer.go:204] Caches are synced for attach detach 
I0310 18:17:48.650677       1 shared_informer.go:204] Caches are synced for namespace 
I0310 18:17:48.666789       1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
I0310 18:17:48.679397       1 shared_informer.go:204] Caches are synced for daemon sets 
I0310 18:17:48.880196       1 shared_informer.go:204] Caches are synced for job 
I0310 18:17:48.976680       1 shared_informer.go:204] Caches are synced for deployment 
I0310 18:17:49.005841       1 shared_informer.go:204] Caches are synced for disruption 
I0310 18:17:49.005938       1 disruption.go:338] Sending events to api server.
I0310 18:17:49.136057       1 shared_informer.go:204] Caches are synced for resource quota 
I0310 18:17:49.144489       1 shared_informer.go:204] Caches are synced for garbage collector 
I0310 18:17:49.149652       1 shared_informer.go:204] Caches are synced for garbage collector 
I0310 18:17:49.149691       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0310 18:17:50.279991       1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0310 18:17:50.280252       1 shared_informer.go:204] Caches are synced for resource quota 

==> kube-proxy [34ff7ff84934] <==
W0310 15:19:42.508298       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0310 15:19:42.553405       1 node.go:135] Successfully retrieved node IP: 192.168.99.100
I0310 15:19:42.553431       1 server_others.go:145] Using iptables Proxier.
W0310 15:19:42.553529       1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0310 15:19:42.553702       1 server.go:571] Version: v1.17.3
I0310 15:19:42.554129       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0310 15:19:42.554156       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0310 15:19:42.554198       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0310 15:19:42.554221       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0310 15:19:42.557959       1 config.go:313] Starting service config controller
I0310 15:19:42.557975       1 shared_informer.go:197] Waiting for caches to sync for service config
I0310 15:19:42.564449       1 config.go:131] Starting endpoints config controller
I0310 15:19:42.564484       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0310 15:19:42.658963       1 shared_informer.go:204] Caches are synced for service config 
I0310 15:19:42.664830       1 shared_informer.go:204] Caches are synced for endpoints config 
I0310 15:40:17.191233       1 trace.go:116] Trace[594195141]: "iptables restore" (started: 2020-03-10 15:40:14.446288451 +0000 UTC m=+1232.128664643) (total time: 2.744909988s):
Trace[594195141]: [2.744909988s] [2.744909988s] END
E0310 18:15:00.367077       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=961645&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0310 18:15:00.367105       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=945235&timeout=7m34s&timeoutSeconds=454&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0310 18:15:01.437944       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=961645&timeout=6m16s&timeoutSeconds=376&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0310 18:15:01.557029       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=945235&timeout=7m4s&timeoutSeconds=424&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0310 18:15:03.180914       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=961645&timeout=9m14s&timeoutSeconds=554&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-proxy [3f78c7a02428] <==
W0310 18:16:48.929409       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0310 18:16:50.156169       1 node.go:135] Successfully retrieved node IP: 192.168.99.100
I0310 18:16:50.156626       1 server_others.go:145] Using iptables Proxier.
W0310 18:16:50.159724       1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0310 18:16:50.168158       1 server.go:571] Version: v1.17.3
I0310 18:16:50.532273       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0310 18:16:50.532673       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0310 18:16:50.532872       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0310 18:16:50.539917       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0310 18:16:50.863470       1 config.go:313] Starting service config controller
I0310 18:16:50.863497       1 shared_informer.go:197] Waiting for caches to sync for service config
I0310 18:16:51.102496       1 config.go:131] Starting endpoints config controller
I0310 18:16:51.117560       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0310 18:16:51.225268       1 shared_informer.go:204] Caches are synced for service config 
I0310 18:16:51.225333       1 shared_informer.go:204] Caches are synced for endpoints config 

==> kube-scheduler [db9e99ef9929] <==
I0310 18:17:28.460780       1 serving.go:312] Generated self-signed cert in-memory
W0310 18:17:28.801587       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0310 18:17:28.801776       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0310 18:17:29.126788       1 authorization.go:47] Authorization is disabled
W0310 18:17:29.126934       1 authentication.go:92] Authentication is disabled
I0310 18:17:29.127010       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0310 18:17:29.128009       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0310 18:17:29.128262       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0310 18:17:29.128443       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0310 18:17:29.128558       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0310 18:17:29.133677       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0310 18:17:29.139674       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0310 18:17:29.229371       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I0310 18:17:29.229640       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0310 18:17:29.241304       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0310 18:17:46.842334       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kube-scheduler [ef1dada99698] <==
I0310 18:16:12.188359       1 serving.go:312] Generated self-signed cert in-memory
W0310 18:16:12.744285       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0310 18:16:12.744369       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0310 18:16:15.986482       1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0310 18:16:15.986494       1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0310 18:16:15.986499       1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W0310 18:16:15.986502       1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W0310 18:16:16.026896       1 authorization.go:47] Authorization is disabled
W0310 18:16:16.026906       1 authentication.go:92] Authentication is disabled
I0310 18:16:16.026912       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0310 18:16:16.027882       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0310 18:16:16.027948       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0310 18:16:16.027955       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0310 18:16:16.027963       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0310 18:16:16.129235       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0310 18:16:16.129343       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I0310 18:16:38.861504       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
I0310 18:17:06.035771       1 leaderelection.go:288] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded
F0310 18:17:06.035799       1 server.go:257] leaderelection lost

==> kubelet <==
-- Logs begin at Tue 2020-03-10 18:15:43 UTC, end at Tue 2020-03-10 18:19:58 UTC. --
Mar 10 18:16:53 minikube kubelet[2715]: W0310 18:16:53.031242    2715 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/memsql-0 through plugin: invalid network status for
Mar 10 18:17:05 minikube kubelet[2715]: W0310 18:17:05.684526    2715 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/schedule-checker through plugin: invalid network status for
Mar 10 18:17:05 minikube kubelet[2715]: E0310 18:17:05.693468    2715 pod_workers.go:191] Error syncing pod b478336b-1495-4a6a-af87-ea6036f133cd ("schedule-checker_default(b478336b-1495-4a6a-af87-ea6036f133cd)"), skipping: failed to "StartContainer" for "schedule-checker" with CrashLoopBackOff: "back-off 10s restarting failed container=schedule-checker pod=schedule-checker_default(b478336b-1495-4a6a-af87-ea6036f133cd)"
Mar 10 18:17:06 minikube kubelet[2715]: W0310 18:17:06.746641    2715 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/schedule-checker through plugin: invalid network status for
Mar 10 18:17:06 minikube kubelet[2715]: E0310 18:17:06.768943    2715 pod_workers.go:191] Error syncing pod 67b7e5352c5d7693f9bfac40cd9df88f ("kube-controller-manager-m01_kube-system(67b7e5352c5d7693f9bfac40cd9df88f)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-m01_kube-system(67b7e5352c5d7693f9bfac40cd9df88f)"
Mar 10 18:17:06 minikube kubelet[2715]: E0310 18:17:06.784654    2715 pod_workers.go:191] Error syncing pod e3025acd90e7465e66fa19c71b916366 ("kube-scheduler-m01_kube-system(e3025acd90e7465e66fa19c71b916366)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-m01_kube-system(e3025acd90e7465e66fa19c71b916366)"
Mar 10 18:17:08 minikube kubelet[2715]: I0310 18:17:08.423530    2715 log.go:172] http: TLS handshake error from 172.17.0.12:55826: remote error: tls: bad certificate
Mar 10 18:17:11 minikube kubelet[2715]: W0310 18:17:11.016032    2715 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/schedule-checker through plugin: invalid network status for
Mar 10 18:17:11 minikube kubelet[2715]: I0310 18:17:11.164262    2715 log.go:172] http: TLS handshake error from 172.17.0.12:55844: remote error: tls: bad certificate
Mar 10 18:17:15 minikube kubelet[2715]: E0310 18:17:15.382309    2715 pod_workers.go:191] Error syncing pod e3025acd90e7465e66fa19c71b916366 ("kube-scheduler-m01_kube-system(e3025acd90e7465e66fa19c71b916366)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-m01_kube-system(e3025acd90e7465e66fa19c71b916366)"
Mar 10 18:17:15 minikube kubelet[2715]: E0310 18:17:15.665620    2715 pod_workers.go:191] Error syncing pod 67b7e5352c5d7693f9bfac40cd9df88f ("kube-controller-manager-m01_kube-system(67b7e5352c5d7693f9bfac40cd9df88f)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-m01_kube-system(67b7e5352c5d7693f9bfac40cd9df88f)"
Mar 10 18:17:18 minikube kubelet[2715]: W0310 18:17:18.300140    2715 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-kwj5l through plugin: invalid network status for
Mar 10 18:17:18 minikube kubelet[2715]: E0310 18:17:18.316978    2715 pod_workers.go:191] Error syncing pod dc366141-09ff-40a7-9b7e-b6b17138462f ("kubernetes-dashboard-79d9cd965-kwj5l_kubernetes-dashboard(dc366141-09ff-40a7-9b7e-b6b17138462f)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79d9cd965-kwj5l_kubernetes-dashboard(dc366141-09ff-40a7-9b7e-b6b17138462f)"
Mar 10 18:17:19 minikube kubelet[2715]: W0310 18:17:19.348922    2715 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-kwj5l through plugin: invalid network status for
Mar 10 18:17:21 minikube kubelet[2715]: W0310 18:17:21.450017    2715 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/schedule-checker through plugin: invalid network status for
Mar 10 18:17:23 minikube kubelet[2715]: E0310 18:17:23.318935    2715 pod_workers.go:191] Error syncing pod dc366141-09ff-40a7-9b7e-b6b17138462f ("kubernetes-dashboard-79d9cd965-kwj5l_kubernetes-dashboard(dc366141-09ff-40a7-9b7e-b6b17138462f)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79d9cd965-kwj5l_kubernetes-dashboard(dc366141-09ff-40a7-9b7e-b6b17138462f)"
Mar 10 18:17:29 minikube kubelet[2715]: W0310 18:17:29.364154    2715 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/20f124c1-11df-4541-af60-9b245701ad10/volumes/kubernetes.io~secret/ssh-pubkey-volume and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Mar 10 18:17:29 minikube kubelet[2715]: W0310 18:17:29.364315    2715 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/20f124c1-11df-4541-af60-9b245701ad10/volumes/kubernetes.io~secret/default-token-k59d2 and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Mar 10 18:17:33 minikube kubelet[2715]: I0310 18:17:33.036947    2715 log.go:172] http: TLS handshake error from 172.17.0.12:55992: remote error: tls: bad certificate
Mar 10 18:17:38 minikube kubelet[2715]: W0310 18:17:38.051716    2715 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-kwj5l through plugin: invalid network status for
Mar 10 18:17:39 minikube kubelet[2715]: W0310 18:17:39.501991    2715 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-kwj5l through plugin: invalid network status for
Mar 10 18:17:46 minikube kubelet[2715]: W0310 18:17:46.740533    2715 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/schedule-checker through plugin: invalid network status for
Mar 10 18:17:46 minikube kubelet[2715]: E0310 18:17:46.756336    2715 pod_workers.go:191] Error syncing pod b478336b-1495-4a6a-af87-ea6036f133cd ("schedule-checker_default(b478336b-1495-4a6a-af87-ea6036f133cd)"), skipping: failed to "StartContainer" for "schedule-checker" with CrashLoopBackOff: "back-off 20s restarting failed container=schedule-checker pod=schedule-checker_default(b478336b-1495-4a6a-af87-ea6036f133cd)"
Mar 10 18:17:47 minikube kubelet[2715]: I0310 18:17:47.477060    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56100: remote error: tls: bad certificate
Mar 10 18:17:47 minikube kubelet[2715]: W0310 18:17:47.778160    2715 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/schedule-checker through plugin: invalid network status for
Mar 10 18:17:48 minikube kubelet[2715]: I0310 18:17:48.435429    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56112: remote error: tls: bad certificate
Mar 10 18:18:00 minikube kubelet[2715]: E0310 18:18:00.868537    2715 pod_workers.go:191] Error syncing pod b478336b-1495-4a6a-af87-ea6036f133cd ("schedule-checker_default(b478336b-1495-4a6a-af87-ea6036f133cd)"), skipping: failed to "StartContainer" for "schedule-checker" with CrashLoopBackOff: "back-off 20s restarting failed container=schedule-checker pod=schedule-checker_default(b478336b-1495-4a6a-af87-ea6036f133cd)"
Mar 10 18:18:11 minikube kubelet[2715]: W0310 18:18:11.511045    2715 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-kwj5l through plugin: invalid network status for
Mar 10 18:18:12 minikube kubelet[2715]: W0310 18:18:12.576410    2715 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/schedule-checker through plugin: invalid network status for
Mar 10 18:18:16 minikube kubelet[2715]: W0310 18:18:16.894499    2715 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/20f124c1-11df-4541-af60-9b245701ad10/volumes/kubernetes.io~secret/ssh-pubkey-volume and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Mar 10 18:18:16 minikube kubelet[2715]: W0310 18:18:16.894527    2715 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/20f124c1-11df-4541-af60-9b245701ad10/volumes/kubernetes.io~secret/default-token-k59d2 and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Mar 10 18:18:26 minikube kubelet[2715]: I0310 18:18:26.446424    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56410: remote error: tls: bad certificate
Mar 10 18:18:26 minikube kubelet[2715]: I0310 18:18:26.469574    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56414: remote error: tls: bad certificate
Mar 10 18:18:26 minikube kubelet[2715]: I0310 18:18:26.485112    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56418: remote error: tls: bad certificate
Mar 10 18:18:29 minikube kubelet[2715]: I0310 18:18:29.128028    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56452: remote error: tls: bad certificate
Mar 10 18:18:29 minikube kubelet[2715]: I0310 18:18:29.140416    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56456: remote error: tls: bad certificate
Mar 10 18:18:29 minikube kubelet[2715]: I0310 18:18:29.151666    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56460: remote error: tls: bad certificate
Mar 10 18:18:32 minikube kubelet[2715]: I0310 18:18:32.069527    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56502: remote error: tls: bad certificate
Mar 10 18:18:32 minikube kubelet[2715]: I0310 18:18:32.082879    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56506: remote error: tls: bad certificate
Mar 10 18:18:32 minikube kubelet[2715]: I0310 18:18:32.101960    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56510: remote error: tls: bad certificate
Mar 10 18:18:56 minikube kubelet[2715]: I0310 18:18:56.437882    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56706: remote error: tls: bad certificate
Mar 10 18:18:56 minikube kubelet[2715]: I0310 18:18:56.445403    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56710: remote error: tls: bad certificate
Mar 10 18:18:56 minikube kubelet[2715]: I0310 18:18:56.457868    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56714: remote error: tls: bad certificate
Mar 10 18:18:59 minikube kubelet[2715]: I0310 18:18:59.159986    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56750: remote error: tls: bad certificate
Mar 10 18:18:59 minikube kubelet[2715]: I0310 18:18:59.169691    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56754: remote error: tls: bad certificate
Mar 10 18:18:59 minikube kubelet[2715]: I0310 18:18:59.189754    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56758: remote error: tls: bad certificate
Mar 10 18:19:23 minikube kubelet[2715]: W0310 18:19:23.962472    2715 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/20f124c1-11df-4541-af60-9b245701ad10/volumes/kubernetes.io~secret/ssh-pubkey-volume and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Mar 10 18:19:23 minikube kubelet[2715]: W0310 18:19:23.962963    2715 volume_linux.go:45] Setting volume ownership for /var/lib/kubelet/pods/20f124c1-11df-4541-af60-9b245701ad10/volumes/kubernetes.io~secret/default-token-k59d2 and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Mar 10 18:19:26 minikube kubelet[2715]: I0310 18:19:26.437715    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56968: remote error: tls: bad certificate
Mar 10 18:19:26 minikube kubelet[2715]: I0310 18:19:26.446407    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56972: remote error: tls: bad certificate
Mar 10 18:19:26 minikube kubelet[2715]: I0310 18:19:26.456229    2715 log.go:172] http: TLS handshake error from 172.17.0.12:56976: remote error: tls: bad certificate
Mar 10 18:19:29 minikube kubelet[2715]: I0310 18:19:29.123099    2715 log.go:172] http: TLS handshake error from 172.17.0.12:57012: remote error: tls: bad certificate
Mar 10 18:19:29 minikube kubelet[2715]: I0310 18:19:29.131315    2715 log.go:172] http: TLS handshake error from 172.17.0.12:57016: remote error: tls: bad certificate
Mar 10 18:19:29 minikube kubelet[2715]: I0310 18:19:29.146088    2715 log.go:172] http: TLS handshake error from 172.17.0.12:57020: remote error: tls: bad certificate
Mar 10 18:19:31 minikube kubelet[2715]: I0310 18:19:31.046397    2715 log.go:172] http: TLS handshake error from 172.17.0.12:57038: remote error: tls: bad certificate
Mar 10 18:19:31 minikube kubelet[2715]: I0310 18:19:31.054518    2715 log.go:172] http: TLS handshake error from 172.17.0.12:57042: remote error: tls: bad certificate
Mar 10 18:19:31 minikube kubelet[2715]: I0310 18:19:31.063695    2715 log.go:172] http: TLS handshake error from 172.17.0.12:57046: remote error: tls: bad certificate
Mar 10 18:19:56 minikube kubelet[2715]: I0310 18:19:56.437819    2715 log.go:172] http: TLS handshake error from 172.17.0.12:57266: remote error: tls: bad certificate
Mar 10 18:19:56 minikube kubelet[2715]: I0310 18:19:56.445666    2715 log.go:172] http: TLS handshake error from 172.17.0.12:57270: remote error: tls: bad certificate
Mar 10 18:19:56 minikube kubelet[2715]: I0310 18:19:56.454810    2715 log.go:172] http: TLS handshake error from 172.17.0.12:57274: remote error: tls: bad certificate

==> kubernetes-dashboard [315614d83ce8] <==
2020/03/10 18:16:45 Starting overwatch
2020/03/10 18:16:46 Using namespace: kubernetes-dashboard
2020/03/10 18:16:46 Using in-cluster config to connect to apiserver
2020/03/10 18:16:46 Using secret token for csrf signing
2020/03/10 18:16:46 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00013d200)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000345b80)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:494 +0xc7
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000345b80)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:462 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:543
main.main()
	/home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212

==> kubernetes-dashboard [8548a4e4ba2f] <==
2020/03/10 18:17:38 Starting overwatch
2020/03/10 18:17:38 Using namespace: kubernetes-dashboard
2020/03/10 18:17:38 Using in-cluster config to connect to apiserver
2020/03/10 18:17:38 Using secret token for csrf signing
2020/03/10 18:17:38 Initializing csrf token from kubernetes-dashboard-csrf secret
2020/03/10 18:17:38 Successful initial request to the apiserver, version: v1.17.3
2020/03/10 18:17:38 Generating JWE encryption key
2020/03/10 18:17:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2020/03/10 18:17:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2020/03/10 18:17:38 Initializing JWE encryption key from synchronized object
2020/03/10 18:17:38 Creating in-cluster Sidecar client
2020/03/10 18:17:38 Serving insecurely on HTTP port: 9090
2020/03/10 18:17:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2020/03/10 18:18:08 Successful request to sidecar

==> storage-provisioner [49fa768f5c30] <==
E0310 18:15:00.367401       1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:379: Failed to watch *v1.StorageClass: Get https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=942276&timeoutSeconds=452&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0310 18:15:00.367759       1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to watch *v1.PersistentVolumeClaim: Get https://10.96.0.1:443/api/v1/persistentvolumeclaims?resourceVersion=942276&timeoutSeconds=545&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0310 18:15:00.367894       1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:412: Failed to watch *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?resourceVersion=942276&timeoutSeconds=324&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused

==> storage-provisioner [d3104d4ce1bc] <==

The operating system version: macOS 10.15.3 (19D76)

It looks like tar is being called with an argument that isn't supported by BusyBox:

if rr, err := r.Runner.RunCmd(exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xvf", dest)); err != nil {
    return errors.Wrapf(err, "extracting tarball: %s", rr.Output())
}

According to BusyBox's documentation, its tar implementation doesn't support GNU tar's -I option for delegating decompression to an external program, so it can't unpack the LZ4-compressed preload tarball.
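For what it's worth, here is a minimal sketch of a fallback that avoids -I entirely. This is hypothetical, not minikube's actual code, and it assumes an lz4 binary is present in the guest, which may not be the case:

# Hypothetical fallback sketch: try GNU tar's -I first, then retry with
# an explicit decompression pipeline if the guest's tar rejects the flag.
# Assumes an lz4 binary exists in the guest, which may not hold here.
if ! sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4; then
  lz4 -dc /preloaded.tar.lz4 | sudo tar -xf - -C /var
fi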

@afbjorklund
Collaborator

afbjorklund commented Mar 10, 2020

In the most recent ISO, the "tar" program was replaced with GNU tar (which does have such an option)

See d0b0dce

Most likely, the "lz4" program was also missing from the ISO anyway, so it wouldn't have helped...

lz4 -dc preloaded-images-k8s-v1-v1.17.3-docker-overlay2.tar.lz4 | tar -xf - -C /var

The error was hidden better in fea9fd3

But it can be made to fall back more silently

@afbjorklund
Collaborator

afbjorklund commented Mar 10, 2020

Same as #6938

It should still have worked ("falling back to caching images"), but the output was a bit ugly...

@afbjorklund afbjorklund added area/guest-vm General configuration issues with the minikube guest VM triage/duplicate Indicates an issue is a duplicate of other open issue. labels Mar 10, 2020
@noelleleigh
Author

Same as #6938

It should still have worked ("falling back to caching images"), but the output was a bit ugly...

So once #6941 is released, this error will be handled more gracefully? Is there a way to upgrade the ISO without having to minikube delete first?

@afbjorklund
Collaborator

afbjorklund commented Mar 10, 2020

Not really "gracefully", more like silently? Theoretically you could copy the tar and lz4 binaries into the VM yourself.
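For illustration, an untested sketch of what that could look like. The binary paths, the default "docker" user, and /usr/local/bin being on the VM's PATH are all assumptions, and anything copied in this way is lost when the VM is recreated:

# Untested sketch: stage statically linked tar and lz4 binaries in the VM.
# Assumes the default "docker" user and that /usr/local/bin is on PATH;
# the copies do not survive VM recreation.
scp -i "$(minikube ssh-key)" ./tar ./lz4 docker@"$(minikube ip)":/tmp/
minikube ssh -- sudo install /tmp/tar /tmp/lz4 /usr/local/bin/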

Latest output looks like #6978

@tstromberg
Contributor

Resolved by minikube v1.8.2.

This issue was closed.