restarting minikube with flannel CNI: cannot stat '/etc/cni/net.d/100-crio-bridge.conf': No such file or directory #9481

Closed
YuriyKrasilnikov opened this issue Oct 17, 2020 · 1 comment · Fixed by #9505
Labels: area/cni, kind/bug, priority/backlog

Comments

@YuriyKrasilnikov
Copy link

YuriyKrasilnikov commented Oct 17, 2020

Steps to reproduce the issue:

  1. minikube start --network-plugin=cni --cni=flannel
  2. minikube stop
  3. minikube start

Full output of failed command:
3. minikube start

😄  minikube v1.14.0 on Microsoft Windows 10 Pro 10.0.19042 Build 19042
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔗  Configuring Flannel (Container Networking Interface) ...
E1017 11:05:17.891097   21588 flannel.go:656] unable to disable /etc/cni/net.d/100-crio-bridge.conf: sudo mv /etc/cni/net.d/100-crio-bridge.conf \etc\cni\net.d\DISABLED-100-crio-bridge.conf: Process exited with status 1
stdout:

stderr:
mv: cannot stat '/etc/cni/net.d/100-crio-bridge.conf': No such file or directory
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" by default

Full output of failed command with --alsologtostderr:
3*. minikube start --alsologtostderr

I1017 11:08:04.111782    5228 out.go:191] Setting JSON to false
I1017 11:08:04.123772    5228 start.go:103] hostinfo: {"hostname":"LAPTOP-FELNAPVO","uptime":4188,"bootTime":1602917896,"procs":331,"os":"windows","platform":"Microsoft Windows 10 Pro","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"","virtualizationSystem":"","virtualizationRole":"","hostid":"a24ab74d-2552-4ad5-8710-c5433dc7e8d1"} 
W1017 11:08:04.135775    5228 start.go:111] gopshost.Virtualization returned error: not implemented yet
I1017 11:08:04.139764    5228 out.go:109] 😄  minikube v1.14.0 on Microsoft Windows 10 Pro 10.0.19042 Build 19042
I1017 11:08:04.152774    5228 notify.go:126] Checking for updates...
I1017 11:08:04.152774    5228 driver.go:288] Setting default libvirt URI to qemu:///system
I1017 11:08:04.375794    5228 docker.go:117] docker version: linux-19.03.13
I1017 11:08:04.382802    5228 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I1017 11:08:04.795433    5228 info.go:253] docker info: {ID:THNU:V3ED:YNWI:X5E3:O5Y5:QUBO:YRYV:NPIM:4EFP:DWOF:Y4EM:EA4B Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:79 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:true NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2020-10-17 08:08:04.1896217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:4.19.128-microsoft-standard OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33181171712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I1017 11:08:04.819442    5228 out.go:109] ✨  Using the docker driver based on existing profile
I1017 11:08:04.819442    5228 start.go:272] selected driver: docker
I1017 11:08:04.820443    5228 start.go:680] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:10000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio 
NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
I1017 11:08:04.845459    5228 start.go:691] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I1017 11:08:04.858441    5228 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I1017 11:08:05.289094    5228 info.go:253] docker info: {ID:THNU:V3ED:YNWI:X5E3:O5Y5:QUBO:YRYV:NPIM:4EFP:DWOF:Y4EM:EA4B Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:79 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:true NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2020-10-17 08:08:04.6685355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:4.19.128-microsoft-standard OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33181171712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I1017 11:08:06.519778    5228 start_flags.go:353] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:10000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
I1017 11:08:06.550776    5228 out.go:109] 👍  Starting control plane node minikube in cluster minikube
I1017 11:08:06.860335    5228 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f in local docker daemon, skipping pull
I1017 11:08:06.866338    5228 cache.go:115] gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f exists in daemon, skipping pull
I1017 11:08:06.910846    5228 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
I1017 11:08:06.918844    5228 preload.go:105] Found local preload: C:\Users\yuryk\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
I1017 11:08:06.918844    5228 cache.go:53] Caching tarball of preloaded images
I1017 11:08:06.918844    5228 preload.go:131] Found C:\Users\yuryk\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download    
I1017 11:08:06.918844    5228 cache.go:56] Finished verifying existence of preloaded tar for  v1.19.2 on docker
I1017 11:08:06.919847    5228 profile.go:150] Saving config to C:\Users\yuryk\.minikube\profiles\minikube\config.json ...
I1017 11:08:06.925843    5228 cache.go:182] Successfully downloaded all kic artifacts
I1017 11:08:06.967845    5228 start.go:314] acquiring machines lock for minikube: {Name:mka2f7579fbab2d9916e2685ea29adc789d08429 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1017 11:08:07.003847    5228 start.go:318] acquired machines lock for "minikube" in 1.0018ms
I1017 11:08:07.010845    5228 start.go:94] Skipping create...Using existing machine configuration
I1017 11:08:07.010845    5228 fix.go:54] fixHost starting:
I1017 11:08:07.025846    5228 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1017 11:08:07.346105    5228 fix.go:107] recreateIfNeeded on minikube: state=Stopped err=<nil>
W1017 11:08:07.346105    5228 fix.go:133] unexpected machine state, will restart: <nil>
I1017 11:08:07.349103    5228 out.go:109] 🔄  Restarting existing docker container for "minikube" ...
I1017 11:08:07.356105    5228 cli_runner.go:110] Run: docker start minikube
I1017 11:08:07.910817    5228 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1017 11:08:08.209170    5228 kic.go:356] container "minikube" state is running.
I1017 11:08:08.220171    5228 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1017 11:08:08.501185    5228 profile.go:150] Saving config to C:\Users\yuryk\.minikube\profiles\minikube\config.json ...
I1017 11:08:08.513179    5228 machine.go:88] provisioning docker machine ...
I1017 11:08:08.533195    5228 ubuntu.go:166] provisioning hostname "minikube"
I1017 11:08:08.540175    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1017 11:08:08.812739    5228 main.go:118] libmachine: Using SSH client type: native
I1017 11:08:08.813731    5228 main.go:118] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32827 <nil> <nil>}
I1017 11:08:08.813731    5228 main.go:118] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1017 11:08:08.964657    5228 main.go:118] libmachine: SSH cmd err, output: <nil>: minikube

I1017 11:08:08.971657    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1017 11:08:09.258709    5228 main.go:118] libmachine: Using SSH client type: native
I1017 11:08:09.259709    5228 main.go:118] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32827 <nil> <nil>}
I1017 11:08:09.259709    5228 main.go:118] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
                        fi
                fi
I1017 11:08:09.423931    5228 main.go:118] libmachine: SSH cmd err, output: <nil>: 
I1017 11:08:09.423931    5228 ubuntu.go:172] set auth options {CertDir:C:\Users\yuryk\.minikube CaCertPath:C:\Users\yuryk\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\yuryk\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\yuryk\.minikube\machines\server.pem ServerKeyPath:C:\Users\yuryk\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\yuryk\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\yuryk\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\yuryk\.minikube}
I1017 11:08:09.424925    5228 ubuntu.go:174] setting up certificates
I1017 11:08:09.424925    5228 provision.go:82] configureAuth start
I1017 11:08:09.430939    5228 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1017 11:08:09.699953    5228 provision.go:131] copyHostCerts
I1017 11:08:09.699953    5228 exec_runner.go:91] found C:\Users\yuryk\.minikube/ca.pem, removing ...
I1017 11:08:09.700969    5228 exec_runner.go:98] cp: C:\Users\yuryk\.minikube\certs\ca.pem --> C:\Users\yuryk\.minikube/ca.pem (1034 bytes)
I1017 11:08:09.702952    5228 exec_runner.go:91] found C:\Users\yuryk\.minikube/cert.pem, removing ...
I1017 11:08:09.714948    5228 exec_runner.go:98] cp: C:\Users\yuryk\.minikube\certs\cert.pem --> C:\Users\yuryk\.minikube/cert.pem (1074 bytes)
I1017 11:08:09.716963    5228 exec_runner.go:91] found C:\Users\yuryk\.minikube/key.pem, removing ...
I1017 11:08:09.718953    5228 exec_runner.go:98] cp: C:\Users\yuryk\.minikube\certs\key.pem --> C:\Users\yuryk\.minikube/key.pem (1675 bytes)
I1017 11:08:09.719953    5228 provision.go:105] generating server cert: C:\Users\yuryk\.minikube\machines\server.pem ca-key=C:\Users\yuryk\.minikube\certs\ca.pem private-key=C:\Users\yuryk\.minikube\certs\ca-key.pem org=yuryk.minikube san=[192.168.49.2 localhost 127.0.0.1 minikube minikube]
I1017 11:08:09.807972    5228 provision.go:159] copyRemoteCerts
I1017 11:08:09.819962    5228 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1017 11:08:09.828951    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1017 11:08:10.094972    5228 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32827 SSHKeyPath:C:\Users\yuryk\.minikube\machines\minikube\id_rsa Username:docker}
I1017 11:08:10.197636    5228 ssh_runner.go:215] scp C:\Users\yuryk\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1034 bytes)
I1017 11:08:10.215625    5228 ssh_runner.go:215] scp C:\Users\yuryk\.minikube\machines\server.pem --> /etc/docker/server.pem (1143 bytes)
I1017 11:08:10.236625    5228 ssh_runner.go:215] scp C:\Users\yuryk\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1017 11:08:10.262627    5228 provision.go:85] duration metric: configureAuth took 837.7023ms
I1017 11:08:10.262627    5228 ubuntu.go:190] setting minikube options for container-runtime
I1017 11:08:10.269626    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1017 11:08:10.540760    5228 main.go:118] libmachine: Using SSH client type: native
I1017 11:08:10.541758    5228 main.go:118] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32827 <nil> <nil>}
I1017 11:08:10.541758    5228 main.go:118] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1017 11:08:10.682859    5228 main.go:118] libmachine: SSH cmd err, output: <nil>: overlay

I1017 11:08:10.683855    5228 ubuntu.go:71] root file system type: overlay
I1017 11:08:10.683855    5228 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I1017 11:08:10.690856    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1017 11:08:10.960721    5228 main.go:118] libmachine: Using SSH client type: native
I1017 11:08:10.960721    5228 main.go:118] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32827 <nil> <nil>}
I1017 11:08:10.961726    5228 main.go:118] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1017 11:08:11.141752    5228 main.go:118] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I1017 11:08:11.225750    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1017 11:08:11.505193    5228 main.go:118] libmachine: Using SSH client type: native
I1017 11:08:11.506204    5228 main.go:118] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32827 <nil> <nil>}
I1017 11:08:11.506204    5228 main.go:118] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1017 11:08:11.647731    5228 main.go:118] libmachine: SSH cmd err, output: <nil>: 
I1017 11:08:11.647731    5228 machine.go:91] provisioned docker machine in 3.1145359s
I1017 11:08:11.648733    5228 start.go:268] post-start starting for "minikube" (driver="docker")
I1017 11:08:11.648733    5228 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1017 11:08:11.660732    5228 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1017 11:08:11.671747    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1017 11:08:11.941252    5228 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32827 SSHKeyPath:C:\Users\yuryk\.minikube\machines\minikube\id_rsa Username:docker}
I1017 11:08:12.049154    5228 ssh_runner.go:148] Run: cat /etc/os-release
I1017 11:08:12.054159    5228 main.go:118] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1017 11:08:12.064155    5228 main.go:118] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1017 11:08:12.064155    5228 main.go:118] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1017 11:08:12.065159    5228 info.go:97] Remote host: Ubuntu 20.04 LTS
I1017 11:08:12.066156    5228 filesync.go:118] Scanning C:\Users\yuryk\.minikube\addons for local assets ...
I1017 11:08:12.066156    5228 filesync.go:118] Scanning C:\Users\yuryk\.minikube\files for local assets ...
I1017 11:08:12.067155    5228 start.go:271] post-start completed in 418.4218ms
I1017 11:08:12.079189    5228 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1017 11:08:12.089184    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1017 11:08:12.357221    5228 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32827 SSHKeyPath:C:\Users\yuryk\.minikube\machines\minikube\id_rsa Username:docker}
I1017 11:08:12.454673    5228 fix.go:56] fixHost completed within 5.4438279s
I1017 11:08:12.454673    5228 start.go:81] releasing machines lock for "minikube", held for 5.4438279s
I1017 11:08:12.461662    5228 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1017 11:08:12.727750    5228 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I1017 11:08:12.733742    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1017 11:08:12.738750    5228 ssh_runner.go:148] Run: systemctl --version
I1017 11:08:12.746745    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1017 11:08:13.015741    5228 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32827 SSHKeyPath:C:\Users\yuryk\.minikube\machines\minikube\id_rsa Username:docker}
I1017 11:08:13.026743    5228 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32827 SSHKeyPath:C:\Users\yuryk\.minikube\machines\minikube\id_rsa Username:docker}
I1017 11:08:13.376090    5228 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I1017 11:08:13.400615    5228 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I1017 11:08:13.412619    5228 cruntime.go:193] skipping containerd shutdown because we are bound to it
I1017 11:08:13.434617    5228 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I1017 11:08:13.465619    5228 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I1017 11:08:13.488618    5228 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I1017 11:08:13.547616    5228 ssh_runner.go:148] Run: sudo systemctl start docker
I1017 11:08:13.565621    5228 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
I1017 11:08:13.610616    5228 out.go:109] 🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
I1017 11:08:13.616619    5228 cli_runner.go:110] Run: docker exec -t minikube dig +short host.docker.internal
I1017 11:08:13.931625    5228 network.go:67] got host ip for mount in container by digging dns: 192.168.65.2
I1017 11:08:13.945628    5228 ssh_runner.go:148] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I1017 11:08:13.950619    5228 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2                                                         host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I1017 11:08:13.969616    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I1017 11:08:14.229616    5228 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
I1017 11:08:14.230623    5228 preload.go:105] Found local preload: C:\Users\yuryk\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
I1017 11:08:14.236620    5228 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I1017 11:08:14.278606    5228 docker.go:381] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.2
k8s.gcr.io/kube-apiserver:v1.19.2
k8s.gcr.io/kube-controller-manager:v1.19.2
k8s.gcr.io/kube-scheduler:v1.19.2
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
quay.io/coreos/flannel:v0.12.0-amd64
k8s.gcr.io/pause:3.2

-- /stdout --
I1017 11:08:14.281605    5228 docker.go:319] Images already preloaded, skipping extraction
I1017 11:08:14.288604    5228 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I1017 11:08:14.334611    5228 docker.go:381] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.2
k8s.gcr.io/kube-apiserver:v1.19.2
k8s.gcr.io/kube-controller-manager:v1.19.2
k8s.gcr.io/kube-scheduler:v1.19.2
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
quay.io/coreos/flannel:v0.12.0-amd64
k8s.gcr.io/pause:3.2

-- /stdout --
I1017 11:08:14.347613    5228 cache_images.go:74] Images are preloaded, skipping loading
I1017 11:08:14.354187    5228 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I1017 11:08:14.410188    5228 cni.go:74] Creating CNI manager for "flannel"
I1017 11:08:14.410188    5228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1017 11:08:14.410188    5228 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 
KubernetesVersion:v1.19.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I1017 11:08:14.411796    5228 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.19.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 192.168.49.2:10249

I1017 11:08:14.441780    5228 kubeadm.go:805] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.19.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:}
I1017 11:08:14.455784    5228 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.19.2
I1017 11:08:14.466789    5228 binaries.go:43] Found k8s binaries, skipping transfer
I1017 11:08:14.485785    5228 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1017 11:08:14.498781    5228 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
I1017 11:08:14.512783    5228 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I1017 11:08:14.529784    5228 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1813 bytes)
I1017 11:08:14.557779    5228 ssh_runner.go:148] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1017 11:08:14.563783    5228 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2                                                control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I1017 11:08:14.584785    5228 certs.go:52] Setting up C:\Users\yuryk\.minikube\profiles\minikube for IP: 192.168.49.2
I1017 11:08:14.590788    5228 certs.go:169] skipping minikubeCA CA generation: C:\Users\yuryk\.minikube\ca.key
I1017 11:08:14.590788    5228 certs.go:169] skipping proxyClientCA CA generation: C:\Users\yuryk\.minikube\proxy-client-ca.key
I1017 11:08:14.591784    5228 certs.go:269] skipping minikube-user signed cert generation: C:\Users\yuryk\.minikube\profiles\minikube\client.key
I1017 11:08:14.591784    5228 certs.go:269] skipping minikube signed cert generation: C:\Users\yuryk\.minikube\profiles\minikube\apiserver.key.dd3b5fb2
I1017 11:08:14.591784    5228 certs.go:269] skipping aggregator signed cert generation: C:\Users\yuryk\.minikube\profiles\minikube\proxy-client.key
I1017 11:08:14.592787    5228 certs.go:348] found cert: C:\Users\yuryk\.minikube\certs\C:\Users\yuryk\.minikube\certs\ca-key.pem (1675 bytes)
I1017 11:08:14.593784    5228 certs.go:348] found cert: C:\Users\yuryk\.minikube\certs\C:\Users\yuryk\.minikube\certs\ca.pem (1034 bytes)
I1017 11:08:14.593784    5228 certs.go:348] found cert: C:\Users\yuryk\.minikube\certs\C:\Users\yuryk\.minikube\certs\cert.pem (1074 bytes)
I1017 11:08:14.593784    5228 certs.go:348] found cert: C:\Users\yuryk\.minikube\certs\C:\Users\yuryk\.minikube\certs\key.pem (1675 bytes)
I1017 11:08:14.594780    5228 ssh_runner.go:215] scp C:\Users\yuryk\.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I1017 11:08:14.614782    5228 ssh_runner.go:215] scp C:\Users\yuryk\.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1017 11:08:14.641781    5228 ssh_runner.go:215] scp C:\Users\yuryk\.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I1017 11:08:14.661779    5228 ssh_runner.go:215] scp C:\Users\yuryk\.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1017 11:08:14.682783    5228 ssh_runner.go:215] scp C:\Users\yuryk\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I1017 11:08:14.709781    5228 ssh_runner.go:215] scp C:\Users\yuryk\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1017 11:08:14.729785    5228 ssh_runner.go:215] scp C:\Users\yuryk\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I1017 11:08:14.765786    5228 ssh_runner.go:215] scp C:\Users\yuryk\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1017 11:08:14.792784    5228 ssh_runner.go:215] scp C:\Users\yuryk\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I1017 11:08:14.810781    5228 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I1017 11:08:14.835780    5228 ssh_runner.go:148] Run: openssl version
I1017 11:08:14.854781    5228 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1017 11:08:14.876780    5228 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1017 11:08:14.882781    5228 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Sep 10 16:18 /usr/share/ca-certificates/minikubeCA.pem
I1017 11:08:14.897782    5228 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1017 11:08:14.924788    5228 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1017 11:08:14.935782    5228 kubeadm.go:324] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:10000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false 
KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
I1017 11:08:14.948781    5228 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1017 11:08:15.002783    5228 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1017 11:08:15.013785    5228 kubeadm.go:335] found existing configuration files, will attempt cluster restart
I1017 11:08:15.017786    5228 kubeadm.go:509] restartCluster start
I1017 11:08:15.029792    5228 ssh_runner.go:148] Run: sudo test -d /data/minikube
I1017 11:08:15.040784    5228 kubeadm.go:122] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:
I1017 11:08:15.055784    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I1017 11:08:15.333672    5228 kubeconfig.go:117] verify returned: extract IP: "minikube" does not appear in C:\Users\yuryk/.kube/config
I1017 11:08:15.334667    5228 kubeconfig.go:128] "minikube" context is missing from C:\Users\yuryk/.kube/config - will repair!
I1017 11:08:15.335669    5228 lock.go:35] WriteFile acquiring C:\Users\yuryk/.kube/config: {Name:mkfe125e186c6b89df95ae923e4e966294de8ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1017 11:08:15.358683    5228 ssh_runner.go:148] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1017 11:08:15.373664    5228 api_server.go:146] Checking apiserver status ...
I1017 11:08:15.392664    5228 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1017 11:08:15.416666    5228 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1017 11:08:15.430662    5228 kubeadm.go:488] needs reconfigure: apiserver in state Stopped
I1017 11:08:15.432672    5228 kubeadm.go:928] stopping kube-system containers ...
I1017 11:08:15.438670    5228 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1017 11:08:15.487664    5228 docker.go:229] Stopping containers: [e4b8ba3f7ba1 157242ee891b f863ed523a46 579a7c5dee6d 3ea9d70490bb 9d9d5d5d8446 991c1482875d be73595c626a 6ac150d86ff2 ad2698803fd2 59043f55ca4d cf195cf5d665 ad5dfe422296 814d70929630 4d2227304be6 993a0527fb1b 038da63d7db3 949e017ed18d 8b01d655fab0]
I1017 11:08:15.494664    5228 ssh_runner.go:148] Run: docker stop e4b8ba3f7ba1 157242ee891b f863ed523a46 579a7c5dee6d 3ea9d70490bb 9d9d5d5d8446 991c1482875d be73595c626a 6ac150d86ff2 ad2698803fd2 59043f55ca4d cf195cf5d665 ad5dfe422296 814d70929630 4d2227304be6 993a0527fb1b 038da63d7db3 949e017ed18d 8b01d655fab0
I1017 11:08:15.545662    5228 ssh_runner.go:148] Run: sudo systemctl stop kubelet
I1017 11:08:15.569440    5228 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1017 11:08:15.579893    5228 kubeadm.go:150] found existing configuration files:
-rw------- 1 root root 5495 Oct 17 08:06 /etc/kubernetes/admin.conf
-rw------- 1 root root 5508 Oct 17 08:06 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1911 Oct 17 08:07 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5456 Oct 17 08:06 /etc/kubernetes/scheduler.conf

I1017 11:08:15.601910    5228 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1017 11:08:15.627896    5228 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1017 11:08:15.650897    5228 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1017 11:08:15.660892    5228 kubeadm.go:161] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:

stderr:
I1017 11:08:15.676889    5228 ssh_runner.go:148] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1017 11:08:15.705895    5228 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1017 11:08:15.715902    5228 kubeadm.go:161] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:

stderr:
I1017 11:08:15.727889    5228 ssh_runner.go:148] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1017 11:08:15.751896    5228 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1017 11:08:15.761891    5228 kubeadm.go:585] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1017 11:08:15.761891    5228 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"  
I1017 11:08:15.860702    5228 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1017 11:08:16.642134    5228 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1017 11:08:16.785159    5228 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1017 11:08:17.051533    5228 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I1017 11:08:17.152753    5228 api_server.go:48] waiting for apiserver process to appear ...
I1017 11:08:17.216752    5228 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1017 11:08:17.752194    5228 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1017 11:08:18.256823    5228 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1017 11:08:18.754126    5228 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1017 11:08:18.919183    5228 api_server.go:68] duration metric: took 1.7664303s to wait for apiserver process to appear ...
I1017 11:08:18.938854    5228 api_server.go:84] waiting for apiserver healthz status ...
I1017 11:08:18.938854    5228 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32824/healthz ...
I1017 11:08:22.525403    5228 api_server.go:241] https://127.0.0.1:32824/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}   
W1017 11:08:22.533397    5228 api_server.go:99] status: https://127.0.0.1:32824/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}   
I1017 11:08:23.034937    5228 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32824/healthz ...
I1017 11:08:23.043938    5228 api_server.go:241] https://127.0.0.1:32824/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W1017 11:08:23.138109    5228 api_server.go:99] status: https://127.0.0.1:32824/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I1017 11:08:23.543861    5228 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32824/healthz ...
I1017 11:08:23.618658    5228 api_server.go:241] https://127.0.0.1:32824/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W1017 11:08:23.659482    5228 api_server.go:99] status: https://127.0.0.1:32824/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I1017 11:08:24.047854    5228 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32824/healthz ...
I1017 11:08:24.111803    5228 api_server.go:241] https://127.0.0.1:32824/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W1017 11:08:24.159800    5228 api_server.go:99] status: https://127.0.0.1:32824/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I1017 11:08:24.534931    5228 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32824/healthz ...
I1017 11:08:24.543167    5228 api_server.go:241] https://127.0.0.1:32824/healthz returned 200:
ok
I1017 11:08:24.557166    5228 api_server.go:137] control plane version: v1.19.2
I1017 11:08:24.564165    5228 api_server.go:127] duration metric: took 5.6253107s to wait for apiserver health ...
I1017 11:08:24.565165    5228 cni.go:74] Creating CNI manager for "flannel"
I1017 11:08:24.567165    5228 out.go:109] 🔗  Configuring Flannel (Container Networking Interface) ...
I1017 11:08:24.580620    5228 ssh_runner.go:148] Run: stat /opt/cni/bin/portmap
I1017 11:08:24.629218    5228 ssh_runner.go:148] Run: sudo mv /etc/cni/net.d/100-crio-bridge.conf \etc\cni\net.d\DISABLED-100-crio-bridge.conf
E1017 11:08:24.640866    5228 flannel.go:656] unable to disable /etc/cni/net.d/100-crio-bridge.conf: sudo mv /etc/cni/net.d/100-crio-bridge.conf \etc\cni\net.d\DISABLED-100-crio-bridge.conf: Process exited with status 1
stdout:

stderr:
mv: cannot stat '/etc/cni/net.d/100-crio-bridge.conf': No such file or directory
I1017 11:08:24.652850    5228 cni.go:137] applying CNI manifest using /var/lib/minikube/binaries/v1.19.2/kubectl ...
I1017 11:08:24.664851    5228 ssh_runner.go:215] scp memory --> /var/tmp/minikube/cni.yaml (14366 bytes)
I1017 11:08:24.739765    5228 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.19.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1017 11:08:25.337911    5228 system_pods.go:43] waiting for kube-system pods to appear ...
I1017 11:08:25.347908    5228 system_pods.go:59] 8 kube-system pods found
I1017 11:08:25.352909    5228 system_pods.go:61] "coredns-f9fd979d6-49srb" [fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1017 11:08:25.353909    5228 system_pods.go:61] "etcd-minikube" [92bb23f4-24cf-431d-baee-af2e87cd4768] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1017 11:08:25.353909    5228 system_pods.go:61] "kube-apiserver-minikube" [c89380c6-6831-4b63-ab5d-975290c706d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1017 11:08:25.358909    5228 system_pods.go:61] "kube-controller-manager-minikube" [2ffcf210-87a5-4eab-aac5-db860e199e65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1017 11:08:25.359912    5228 system_pods.go:61] "kube-flannel-ds-amd64-rnpls" [f3031cb9-8f41-4669-a00c-97be66594238] Running / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
I1017 11:08:25.359912    5228 system_pods.go:61] "kube-proxy-rj4gn" [4bc69e28-b4f7-41ab-8923-dfd7c8b3452c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1017 11:08:25.359912    5228 system_pods.go:61] "kube-scheduler-minikube" [640b7942-c3db-432b-988f-9374c9cd287f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1017 11:08:25.382911    5228 system_pods.go:61] "storage-provisioner" [25037416-fff6-468b-93dc-5e626115ac90] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1017 11:08:25.382911    5228 system_pods.go:74] duration metric: took 43.9984ms to wait for pod list to return data ...
I1017 11:08:25.383914    5228 node_conditions.go:101] verifying NodePressure condition ...
I1017 11:08:25.408913    5228 node_conditions.go:121] node storage ephemeral capacity is 263174212Ki
I1017 11:08:25.423915    5228 node_conditions.go:122] node cpu capacity is 16
I1017 11:08:25.424911    5228 node_conditions.go:104] duration metric: took 20.999ms to run NodePressure ...
I1017 11:08:25.424911    5228 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"  
I1017 11:08:25.820226    5228 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1017 11:08:25.850226    5228 ops.go:34] apiserver oom_adj: -16
I1017 11:08:25.851233    5228 kubeadm.go:513] restartCluster took 10.8324502s
I1017 11:08:25.851233    5228 kubeadm.go:326] StartCluster complete in 10.9154508s
I1017 11:08:25.851233    5228 settings.go:123] acquiring lock: {Name:mk96c792d51ed78632fd5832847d0e274e92279b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1017 11:08:25.852225    5228 settings.go:131] Updating kubeconfig:  C:\Users\yuryk/.kube/config
I1017 11:08:25.853226    5228 lock.go:35] WriteFile acquiring C:\Users\yuryk/.kube/config: {Name:mkfe125e186c6b89df95ae923e4e966294de8ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1017 11:08:25.880273    5228 start.go:199] Will wait wait-timeout for node ...
I1017 11:08:25.880273    5228 addons.go:370] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
I1017 11:08:25.892274    5228 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl scale deployment --replicas=1 coredns -n=kube-system
I1017 11:08:25.897274    5228 addons.go:55] Setting storage-provisioner=true in profile "minikube"
I1017 11:08:25.897274    5228 addons.go:55] Setting default-storageclass=true in profile "minikube"
I1017 11:08:25.900452    5228 out.go:109] πŸ”Ž  Verifying Kubernetes components...
I1017 11:08:25.919472    5228 addons.go:131] Setting addon storage-provisioner=true in "minikube"
I1017 11:08:25.920469    5228 addons.go:274] enableOrDisableStorageClasses default-storageclass=true on "minikube"
W1017 11:08:25.920469    5228 addons.go:140] addon storage-provisioner should already be in state true
I1017 11:08:25.927471    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I1017 11:08:25.933476    5228 host.go:65] Checking if "minikube" exists ...
I1017 11:08:25.936320    5228 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1017 11:08:25.967217    5228 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1017 11:08:26.030219    5228 start.go:553] successfully scaled coredns replicas to 1
I1017 11:08:26.246552    5228 api_server.go:48] waiting for apiserver process to appear ...
I1017 11:08:26.261554    5228 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1017 11:08:26.264555    5228 addons.go:243] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1017 11:08:26.265551    5228 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2709 bytes)
I1017 11:08:26.271562    5228 addons.go:131] Setting addon default-storageclass=true in "minikube"
I1017 11:08:26.274567    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W1017 11:08:26.285560    5228 addons.go:140] addon default-storageclass should already be in state true
I1017 11:08:26.286559    5228 host.go:65] Checking if "minikube" exists ...
I1017 11:08:26.292557    5228 api_server.go:68] duration metric: took 396.2789ms to wait for apiserver process to appear ...
I1017 11:08:26.306554    5228 api_server.go:84] waiting for apiserver healthz status ...
I1017 11:08:26.306554    5228 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32824/healthz ...
I1017 11:08:26.322554    5228 api_server.go:241] https://127.0.0.1:32824/healthz returned 200:
ok
I1017 11:08:26.324554    5228 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1017 11:08:26.336552    5228 api_server.go:137] control plane version: v1.19.2
I1017 11:08:26.349552    5228 api_server.go:127] duration metric: took 42.9976ms to wait for apiserver health ...
I1017 11:08:26.349552    5228 system_pods.go:43] waiting for kube-system pods to appear ...
I1017 11:08:26.357552    5228 system_pods.go:59] 8 kube-system pods found
I1017 11:08:26.370552    5228 system_pods.go:61] "coredns-f9fd979d6-49srb" [fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1017 11:08:26.371554    5228 system_pods.go:61] "etcd-minikube" [92bb23f4-24cf-431d-baee-af2e87cd4768] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1017 11:08:26.371554    5228 system_pods.go:61] "kube-apiserver-minikube" [c89380c6-6831-4b63-ab5d-975290c706d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1017 11:08:26.377554    5228 system_pods.go:61] "kube-controller-manager-minikube" [2ffcf210-87a5-4eab-aac5-db860e199e65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1017 11:08:26.377554    5228 system_pods.go:61] "kube-flannel-ds-amd64-rnpls" [f3031cb9-8f41-4669-a00c-97be66594238] Running / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
I1017 11:08:26.378553    5228 system_pods.go:61] "kube-proxy-rj4gn" [4bc69e28-b4f7-41ab-8923-dfd7c8b3452c] Running
I1017 11:08:26.398559    5228 system_pods.go:61] "kube-scheduler-minikube" [640b7942-c3db-432b-988f-9374c9cd287f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1017 11:08:26.399555    5228 system_pods.go:61] "storage-provisioner" [25037416-fff6-468b-93dc-5e626115ac90] Running
I1017 11:08:26.399555    5228 system_pods.go:74] duration metric: took 50.0028ms to wait for pod list to return data ...
I1017 11:08:26.419556    5228 kubeadm.go:465] duration metric: took 523.2784ms to wait for : map[apiserver:true system_pods:true] ...
I1017 11:08:26.420562    5228 node_conditions.go:101] verifying NodePressure condition ...
I1017 11:08:26.424552    5228 node_conditions.go:121] node storage ephemeral capacity is 263174212Ki
I1017 11:08:26.439551    5228 node_conditions.go:122] node cpu capacity is 16
I1017 11:08:26.439551    5228 node_conditions.go:104] duration metric: took 18.9894ms to run NodePressure ...
I1017 11:08:26.440563    5228 start.go:204] waiting for startup goroutines ...
I1017 11:08:26.664164    5228 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32827 SSHKeyPath:C:\Users\yuryk\.minikube\machines\minikube\id_rsa Username:docker}
I1017 11:08:26.677158    5228 addons.go:243] installing /etc/kubernetes/addons/storageclass.yaml
I1017 11:08:26.683164    5228 ssh_runner.go:215] scp deploy/addons/storageclass/storageclass.yaml.tmpl --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1017 11:08:26.691158    5228 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1017 11:08:26.787161    5228 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1017 11:08:26.989153    5228 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32827 SSHKeyPath:C:\Users\yuryk\.minikube\machines\minikube\id_rsa Username:docker}
I1017 11:08:27.103819    5228 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1017 11:08:27.260822    5228 out.go:109] 🌟  Enabled addons: storage-provisioner, default-storageclass
I1017 11:08:27.261822    5228 addons.go:372] enableAddons completed in 1.3815481s
I1017 11:08:27.348623    5228 start.go:461] kubectl: 1.19.2, cluster: 1.19.2 (minor skew: 0)
I1017 11:08:27.350612    5228 out.go:109] πŸ„  Done! kubectl is now configured to use "minikube" by default
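
For what it's worth, the failing `mv` in the output above seems to point at two separate things (my reading of the logs, not a confirmed root cause): the source file `/etc/cni/net.d/100-crio-bridge.conf` simply does not exist inside the container on restart, and the destination path is rendered with Windows-style backslashes (`\etc\cni\net.d\DISABLED-100-crio-bridge.conf`) even though the command runs inside the Linux minikube container. A backslash path like that is what Go's `filepath.Join` produces when the calling binary runs on a Windows host, whereas `path.Join` always uses forward slashes. A minimal sketch of the difference (hypothetical repro, not minikube source):

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// filepath.Join uses the host OS separator, so on a Windows host this
	// prints \etc\cni\net.d\DISABLED-100-crio-bridge.conf - the exact path
	// seen in the error above, which is meaningless to mv inside the
	// Linux container.
	fmt.Println(filepath.Join("/etc/cni/net.d", "DISABLED-100-crio-bridge.conf"))

	// path.Join always joins with "/", which is what a command executed
	// inside the minikube container would expect.
	fmt.Println(path.Join("/etc/cni/net.d", "DISABLED-100-crio-bridge.conf"))
}
```

Either way, since the file is absent after a restart, the disable step could presumably be skipped (or downgraded to a warning) instead of being surfaced as an error.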

Optional: Full output of minikube logs command:

==> Docker <==
-- Logs begin at Sat 2020-10-17 08:08:07 UTC, end at Sat 2020-10-17 08:09:53 UTC. --
Oct 17 08:08:07 minikube systemd[1]: Starting Docker Application Container Engine...
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.672332500Z" level=info msg="Starting up"
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.674065300Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.674120500Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.674144300Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.674155700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.675594700Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.675639000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.675659300Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.675665500Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.682572400Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.691290100Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.691331700Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.691339400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.691343600Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.691347400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.691350900Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.691509700Z" level=info msg="Loading containers: start."
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.768790400Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: WARNING: Module bridge not found in directory /lib/modules/4.19.128-microsoft-standard\nmodprobe: WARNING: Module br_netfilter not found in directory /lib/modules/4.19.128-microsoft-standard\n, error: exit status 1"
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.843889900Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.880049800Z" level=info msg="Loading containers: done."
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.890893700Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.890983700Z" level=info msg="Daemon has completed initialization"
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.908736900Z" level=info msg="API listen on /var/run/docker.sock"
Oct 17 08:08:07 minikube dockerd[158]: time="2020-10-17T08:08:07.908742900Z" level=info msg="API listen on [::]:2376"
Oct 17 08:08:07 minikube systemd[1]: Started Docker Application Container Engine.
Oct 17 08:08:23 minikube dockerd[158]: time="2020-10-17T08:08:23.385619100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 17 08:08:23 minikube dockerd[158]: time="2020-10-17T08:08:23.581163100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 17 08:08:24 minikube dockerd[158]: time="2020-10-17T08:08:24.576347100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 17 08:08:25 minikube dockerd[158]: time="2020-10-17T08:08:25.511336900Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 17 08:08:26 minikube dockerd[158]: time="2020-10-17T08:08:26.367053100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
ab0c5411cc088       bfe3a36ebd252       About a minute ago   Running             coredns                   1                   322399c872c4e
d07c4b8b994e6       bad58561c4be7       About a minute ago   Running             storage-provisioner       2                   981b0fe77ab8f
4af8eaa1fc2fc       4e9f801d2217e       About a minute ago   Running             kube-flannel              1                   80671e5113764
8a7c7a70e9072       4e9f801d2217e       About a minute ago   Exited              install-cni               1                   80671e5113764
c25e818f5bc33       d373dd5a8593a       About a minute ago   Running             kube-proxy                1                   e37ebc715a86b
bc822a4a0b964       2f32d66b884f8       About a minute ago   Running             kube-scheduler            1                   96bd8026ecf75
f807f415af3e9       8603821e1a7a5       About a minute ago   Running             kube-controller-manager   1                   d5c24aca313d3
24fc6858a584c       607331163122e       About a minute ago   Running             kube-apiserver            1                   213baa9a1d8e9
73b45823a57a0       0369cf4303ffd       About a minute ago   Running             etcd                      1                   70221658b154d
e4b8ba3f7ba1e       bad58561c4be7       2 minutes ago        Exited              storage-provisioner       1                   59043f55ca4d7
157242ee891ba       bfe3a36ebd252       2 minutes ago        Exited              coredns                   0                   f863ed523a46a
579a7c5dee6d8       4e9f801d2217e       2 minutes ago        Exited              kube-flannel              0                   6ac150d86ff2b
991c1482875d1       d373dd5a8593a       2 minutes ago        Exited              kube-proxy                0                   ad2698803fd2d
cf195cf5d665b       2f32d66b884f8       2 minutes ago        Exited              kube-scheduler            0                   993a0527fb1bc
ad5dfe422296b       607331163122e       2 minutes ago        Exited              kube-apiserver            0                   949e017ed18d1
814d70929630a       8603821e1a7a5       2 minutes ago        Exited              kube-controller-manager   0                   038da63d7db39
4d2227304be61       0369cf4303ffd       2 minutes ago        Exited              etcd                      0                   8b01d655fab0f

==> coredns [157242ee891b] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s

==> coredns [ab0c5411cc08] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=b09ee50ec047410326a85435f4d99026f9c4f5c4
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_10_17T11_07_05_0700
                    minikube.k8s.io/version=v1.14.0
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"7e:89:e8:5f:0c:16"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.49.2
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 17 Oct 2020 08:07:01 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Sat, 17 Oct 2020 08:09:52 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sat, 17 Oct 2020 08:08:26 +0000   Sat, 17 Oct 2020 08:08:26 +0000   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Sat, 17 Oct 2020 08:08:22 +0000   Sat, 17 Oct 2020 08:06:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sat, 17 Oct 2020 08:08:22 +0000   Sat, 17 Oct 2020 08:06:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sat, 17 Oct 2020 08:08:22 +0000   Sat, 17 Oct 2020 08:06:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sat, 17 Oct 2020 08:08:22 +0000   Sat, 17 Oct 2020 08:07:15 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                16
  ephemeral-storage:  263174212Ki
  hugepages-2Mi:      0
  memory:             32403488Ki
  pods:               110
Allocatable:
  cpu:                16
  ephemeral-storage:  263174212Ki
  hugepages-2Mi:      0
  memory:             32403488Ki
  pods:               110
System Info:
  Machine ID:                 51cf29d4a2bc4a6b82333f7645fef5fc
  System UUID:                51cf29d4a2bc4a6b82333f7645fef5fc
  Boot ID:                    4b404851-3d24-46f8-b997-06616a7967bd
  Kernel Version:             4.19.128-microsoft-standard
  OS Image:                   Ubuntu 20.04 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.8
  Kubelet Version:            v1.19.2
  Kube-Proxy Version:         v1.19.2
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-f9fd979d6-49srb             100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m34s
  kube-system                 etcd-minikube                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
  kube-system                 kube-apiserver-minikube             250m (1%)     0 (0%)      0 (0%)           0 (0%)         2m48s
  kube-system                 kube-controller-manager-minikube    200m (1%)     0 (0%)      0 (0%)           0 (0%)         2m48s
  kube-system                 kube-flannel-ds-amd64-rnpls         100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      2m34s
  kube-system                 kube-proxy-rj4gn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
  kube-system                 kube-scheduler-minikube             100m (0%)     0 (0%)      0 (0%)           0 (0%)         2m47s
  kube-system                 storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (4%)   100m (0%)
  memory             120Mi (0%)  220Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                    From        Message
  ----     ------                   ----                   ----        -------
  Normal   NodeHasSufficientMemory  2m57s (x5 over 2m57s)  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    2m57s (x5 over 2m57s)  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     2m57s (x4 over 2m57s)  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal   Starting                 2m48s                  kubelet     Starting kubelet.
  Normal   NodeHasSufficientMemory  2m48s                  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    2m48s                  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     2m48s                  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal   NodeNotReady             2m48s                  kubelet     Node minikube status is now: NodeNotReady
  Normal   NodeAllocatableEnforced  2m48s                  kubelet     Updated Node Allocatable limit across pods
  Normal   NodeReady                2m38s                  kubelet     Node minikube status is now: NodeReady
  Warning  readOnlySysFS            2m33s                  kube-proxy  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 2m33s                  kube-proxy  Starting kube-proxy.
  Normal   Starting                 97s                    kubelet     Starting kubelet.
  Normal   NodeHasSufficientMemory  96s (x8 over 97s)      kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    96s (x8 over 97s)      kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     96s (x7 over 97s)      kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  96s                    kubelet     Updated Node Allocatable limit across pods
  Warning  readOnlySysFS            90s                    kube-proxy  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 90s                    kube-proxy  Starting kube-proxy.

==> dmesg <==
[  +0.000003]  blk_update_request+0xc0/0x260
[  +0.000005]  scsi_end_request+0x2c/0x240
[  +0.000002]  scsi_io_completion+0x81/0x620
[  +0.000002]  blk_done_softirq+0x8e/0xc0
[  +0.000005]  __do_softirq+0xde/0x2de
[  +0.000005]  irq_exit+0xba/0xc0
[  +0.000002]  hyperv_vector_handler+0x5f/0x80
[  +0.000002]  hyperv_callback_vector+0xf/0x20
[  +0.000001]  </IRQ>
[  +0.000002] RIP: 0010:_raw_spin_unlock_irqrestore+0xd/0x20
[  +0.000001] Code: 48 8b 00 a8 08 74 08 fb 66 0f 1f 44 00 00 c3 e9 c9 98 5a ff 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 c6 07 00 48 89 f7 57 9d <0f> 1f 44 00 00 c3 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 0f 1f 44
[  +0.000000] RSP: 0018:ffffc90003d4bcf0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff0c
[  +0.000001] RAX: 000000000001e28c RBX: ffff888745eb2a00 RCX: 00000000000f4240
[  +0.000001] RDX: 0000000000000004 RSI: 0000000000000246 RDI: 0000000000000246
[  +0.000000] RBP: ffffc90003d4bd50 R08: ffff888107f06200 R09: 0000000000000032
[  +0.000001] R10: 0000000000000001 R11: 0000000000000000 R12: ffff888745eb3134
[  +0.000001] R13: 0000000000000004 R14: ffff8887ca59f600 R15: 0000000000000001
[  +0.000002]  try_to_wake_up+0x59/0x480
[  +0.000001]  wake_up_q+0x59/0x80
[  +0.000003]  futex_wake+0x142/0x160
[  +0.000001]  do_futex+0xca/0xb70
[  +0.000004]  ? __dentry_kill+0x102/0x150
[  +0.000001]  ? _cond_resched+0x19/0x30
[  +0.000001]  ? dentry_kill+0x47/0x170
[  +0.000001]  ? dput.part.37+0x2d/0x110
[  +0.000003]  ? do_renameat2+0x16e/0x540
[  +0.000001]  __x64_sys_futex+0x143/0x180
[  +0.000042]  do_syscall_64+0x55/0x110
[  +0.000003]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  +0.000002] RIP: 0033:0x55d3dabb6003
[  +0.000001] Code: 24 20 c3 cc cc cc cc 48 8b 7c 24 08 8b 74 24 10 8b 54 24 14 4c 8b 54 24 18 4c 8b 44 24 20 44 8b 4c 24 28 b8 ca 00 00 00 0f 05 <89> 44 24 30 c3 cc cc cc cc cc cc cc cc 8b 7c 24 08 48 8b 74 24 10
[  +0.000000] RSP: 002b:00007f1133ffeba0 EFLAGS: 00000202 ORIG_RAX: 00000000000000ca
[  +0.000001] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 000055d3dabb6003
[  +0.000000] RDX: 0000000000000001 RSI: 0000000000000081 RDI: 000000c000516848
[  +0.000001] RBP: 00007f1133ffebf0 R08: 0000000000000000 R09: 0000000000000000
[  +0.000001] R10: 0000000000000000 R11: 0000000000000202 R12: 00000000000000ff
[  +0.000000] R13: 0000000000000000 R14: 000055d3dc7c2436 R15: 0000000000000000
[  +0.000001] ---[ end trace caa19eb39da98dff ]---
[Oct17 07:45] WSL2: Performing memory compaction.
[Oct17 07:46] WSL2: Performing memory compaction.
[Oct17 07:49] WSL2: Performing memory compaction.
[Oct17 07:50] WSL2: Performing memory compaction.
[Oct17 07:51] WSL2: Performing memory compaction.
[Oct17 07:52] WSL2: Performing memory compaction.
[Oct17 07:53] WSL2: Performing memory compaction.
[Oct17 07:54] WSL2: Performing memory compaction.
[Oct17 07:55] WSL2: Performing memory compaction.
[Oct17 07:56] WSL2: Performing memory compaction.
[Oct17 07:58] WSL2: Performing memory compaction.
[Oct17 07:59] WSL2: Performing memory compaction.
[Oct17 08:00] WSL2: Performing memory compaction.
[Oct17 08:01] WSL2: Performing memory compaction.
[Oct17 08:02] WSL2: Performing memory compaction.
[Oct17 08:03] WSL2: Performing memory compaction.
[Oct17 08:04] WSL2: Performing memory compaction.
[Oct17 08:05] WSL2: Performing memory compaction.
[Oct17 08:06] WSL2: Performing memory compaction.
[Oct17 08:07] WSL2: Performing memory compaction.
[Oct17 08:08] WSL2: Performing memory compaction.
[Oct17 08:09] WSL2: Performing memory compaction.

==> etcd [4d2227304be6] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-10-17 08:06:57.772466 I | etcdmain: etcd Version: 3.4.13
2020-10-17 08:06:57.772496 I | etcdmain: Git SHA: ae9734ed2
2020-10-17 08:06:57.772500 I | etcdmain: Go Version: go1.12.17
2020-10-17 08:06:57.772503 I | etcdmain: Go OS/Arch: linux/amd64
2020-10-17 08:06:57.772505 I | etcdmain: setting maximum number of CPUs to 16, total number of available CPUs is 16
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-10-17 08:06:57.772584 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-10-17 08:06:57.773006 I | embed: name = minikube
2020-10-17 08:06:57.773030 I | embed: data dir = /var/lib/minikube/etcd
2020-10-17 08:06:57.773034 I | embed: member dir = /var/lib/minikube/etcd/member
2020-10-17 08:06:57.773036 I | embed: heartbeat = 100ms
2020-10-17 08:06:57.773043 I | embed: election = 1000ms
2020-10-17 08:06:57.773045 I | embed: snapshot count = 10000
2020-10-17 08:06:57.773050 I | embed: advertise client URLs = https://192.168.49.2:2379
2020-10-17 08:06:57.783728 I | etcdserver: starting member aec36adc501070cc in cluster fa54960ea34d58be
raft2020/10/17 08:06:57 INFO: aec36adc501070cc switched to configuration voters=()
raft2020/10/17 08:06:57 INFO: aec36adc501070cc became follower at term 0
raft2020/10/17 08:06:57 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/10/17 08:06:57 INFO: aec36adc501070cc became follower at term 1
raft2020/10/17 08:06:57 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2020-10-17 08:06:57.863917 W | auth: simple token is not cryptographically signed
2020-10-17 08:06:57.873245 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
2020-10-17 08:06:57.873583 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/10/17 08:06:57 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2020-10-17 08:06:57.873827 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
2020-10-17 08:06:57.874950 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-10-17 08:06:57.875190 I | embed: listening for metrics on http://127.0.0.1:2381
2020-10-17 08:06:57.875303 I | embed: listening for peers on 192.168.49.2:2380
raft2020/10/17 08:06:58 INFO: aec36adc501070cc is starting a new election at term 1
raft2020/10/17 08:06:58 INFO: aec36adc501070cc became candidate at term 2
raft2020/10/17 08:06:58 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
raft2020/10/17 08:06:58 INFO: aec36adc501070cc became leader at term 2
raft2020/10/17 08:06:58 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
2020-10-17 08:06:58.758684 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
2020-10-17 08:06:58.758713 I | embed: ready to serve client requests
2020-10-17 08:06:58.758727 I | embed: ready to serve client requests
2020-10-17 08:06:58.758803 I | etcdserver: setting up the initial cluster version to 3.4
2020-10-17 08:06:58.759706 N | etcdserver/membership: set the initial cluster version to 3.4
2020-10-17 08:06:58.759798 I | etcdserver/api: enabled capabilities for version 3.4
2020-10-17 08:06:58.760717 I | embed: serving client requests on 192.168.49.2:2379
2020-10-17 08:06:58.760765 I | embed: serving client requests on 127.0.0.1:2379
2020-10-17 08:07:16.300910 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-17 08:07:22.388831 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-17 08:07:32.388575 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-17 08:07:42.389158 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-17 08:07:45.065949 N | pkg/osutil: received terminated signal, shutting down...
WARNING: 2020/10/17 08:07:45 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2020-10-17 08:07:45.161270 I | etcdserver: skipped leadership transfer for single voting member cluster

==> etcd [73b45823a57a] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-10-17 08:08:18.166619 I | etcdmain: etcd Version: 3.4.13
2020-10-17 08:08:18.166689 I | etcdmain: Git SHA: ae9734ed2
2020-10-17 08:08:18.166693 I | etcdmain: Go Version: go1.12.17
2020-10-17 08:08:18.166696 I | etcdmain: Go OS/Arch: linux/amd64
2020-10-17 08:08:18.166700 I | etcdmain: setting maximum number of CPUs to 16, total number of available CPUs is 16
2020-10-17 08:08:18.166773 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-10-17 08:08:18.166836 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-10-17 08:08:18.167572 I | embed: name = minikube
2020-10-17 08:08:18.167614 I | embed: data dir = /var/lib/minikube/etcd
2020-10-17 08:08:18.167621 I | embed: member dir = /var/lib/minikube/etcd/member
2020-10-17 08:08:18.167624 I | embed: heartbeat = 100ms
2020-10-17 08:08:18.167627 I | embed: election = 1000ms
2020-10-17 08:08:18.167629 I | embed: snapshot count = 10000
2020-10-17 08:08:18.167643 I | embed: advertise client URLs = https://192.168.49.2:2379
2020-10-17 08:08:18.167646 I | embed: initial advertise peer URLs = https://192.168.49.2:2380
2020-10-17 08:08:18.167651 I | embed: initial cluster = 
2020-10-17 08:08:18.176459 I | etcdserver: restarting member aec36adc501070cc in cluster fa54960ea34d58be at commit index 474
raft2020/10/17 08:08:18 INFO: aec36adc501070cc switched to configuration voters=()
raft2020/10/17 08:08:18 INFO: aec36adc501070cc became follower at term 2
raft2020/10/17 08:08:18 INFO: newRaft aec36adc501070cc [peers: [], term: 2, commit: 474, applied: 0, lastindex: 474, lastterm: 2]
2020-10-17 08:08:18.179428 W | auth: simple token is not cryptographically signed
2020-10-17 08:08:18.262109 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
raft2020/10/17 08:08:18 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2020-10-17 08:08:18.262791 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
2020-10-17 08:08:18.262868 N | etcdserver/membership: set the initial cluster version to 3.4
2020-10-17 08:08:18.262900 I | etcdserver/api: enabled capabilities for version 3.4
2020-10-17 08:08:18.266671 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-10-17 08:08:18.266725 I | embed: listening for peers on 192.168.49.2:2380
2020-10-17 08:08:18.267120 I | embed: listening for metrics on http://127.0.0.1:2381
raft2020/10/17 08:08:19 INFO: aec36adc501070cc is starting a new election at term 2
raft2020/10/17 08:08:19 INFO: aec36adc501070cc became candidate at term 3
raft2020/10/17 08:08:19 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3
raft2020/10/17 08:08:19 INFO: aec36adc501070cc became leader at term 3
raft2020/10/17 08:08:19 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3
2020-10-17 08:08:19.278914 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
2020-10-17 08:08:19.278948 I | embed: ready to serve client requests
2020-10-17 08:08:19.278954 I | embed: ready to serve client requests
2020-10-17 08:08:19.279735 I | embed: serving client requests on 192.168.49.2:2379
2020-10-17 08:08:19.279766 I | embed: serving client requests on 127.0.0.1:2379
2020-10-17 08:08:31.640372 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-17 08:08:31.716834 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-17 08:08:41.717115 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-17 08:08:51.717232 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-17 08:09:01.717135 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-17 08:09:11.716827 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-17 08:09:21.717873 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-17 08:09:31.716660 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-17 08:09:41.716720 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-17 08:09:51.716880 I | etcdserver/api/etcdhttp: /health OK (status code 200)

==> kernel <==
 08:09:53 up  1:10,  0 users,  load average: 0.78, 0.68, 0.60
Linux minikube 4.19.128-microsoft-standard #1 SMP Tue Jun 23 12:58:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04 LTS"

==> kube-apiserver [24fc6858a584] <==
W1017 08:08:20.359917       1 genericapiserver.go:412] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1017 08:08:20.364425       1 genericapiserver.go:412] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1017 08:08:20.377604       1 genericapiserver.go:412] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1017 08:08:20.390773       1 genericapiserver.go:412] Skipping API apps/v1beta2 because it has no resources.
W1017 08:08:20.390815       1 genericapiserver.go:412] Skipping API apps/v1beta1 because it has no resources.
I1017 08:08:20.398613       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1017 08:08:20.398647       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1017 08:08:20.400658       1 client.go:360] parsed scheme: "endpoint"
I1017 08:08:20.400704       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1017 08:08:20.406028       1 client.go:360] parsed scheme: "endpoint"
I1017 08:08:20.406084       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1017 08:08:22.039187       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1017 08:08:22.039242       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1017 08:08:22.039387       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I1017 08:08:22.039504       1 secure_serving.go:197] Serving securely on [::]:8443
I1017 08:08:22.039533       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1017 08:08:22.039556       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1017 08:08:22.039562       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1017 08:08:22.039590       1 controller.go:83] Starting OpenAPI AggregationController
I1017 08:08:22.039595       1 available_controller.go:404] Starting AvailableConditionController
I1017 08:08:22.039736       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1017 08:08:22.040001       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I1017 08:08:22.040120       1 controller.go:86] Starting OpenAPI controller
I1017 08:08:22.040159       1 naming_controller.go:291] Starting NamingConditionController
I1017 08:08:22.040164       1 establishing_controller.go:76] Starting EstablishingController
I1017 08:08:22.040210       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I1017 08:08:22.040221       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1017 08:08:22.040238       1 crd_finalizer.go:266] Starting CRDFinalizer
I1017 08:08:22.040420       1 autoregister_controller.go:141] Starting autoregister controller
I1017 08:08:22.040432       1 cache.go:32] Waiting for caches to sync for autoregister controller
I1017 08:08:22.040450       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
I1017 08:08:22.040607       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1017 08:08:22.040620       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I1017 08:08:22.040645       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1017 08:08:22.040667       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1017 08:08:22.040676       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1017 08:08:22.040684       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1017 08:08:22.155955       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I1017 08:08:22.157536       1 cache.go:39] Caches are synced for AvailableConditionController controller
I1017 08:08:22.157553       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1017 08:08:22.157678       1 shared_informer.go:247] Caches are synced for crd-autoregister 
I1017 08:08:22.157695       1 cache.go:39] Caches are synced for autoregister controller
I1017 08:08:22.162554       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
E1017 08:08:22.268550       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1017 08:08:23.056177       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1017 08:08:23.056622       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1017 08:08:23.061362       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I1017 08:08:25.172572       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1017 08:08:25.185804       1 controller.go:606] quota admission added evaluator for: deployments.apps
I1017 08:08:25.275878       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1017 08:08:25.292206       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1017 08:08:25.298902       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1017 08:08:26.530922       1 controller.go:606] quota admission added evaluator for: endpoints
I1017 08:08:37.459446       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1017 08:09:02.033286       1 client.go:360] parsed scheme: "passthrough"
I1017 08:09:02.033367       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1017 08:09:02.033379       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1017 08:09:34.283239       1 client.go:360] parsed scheme: "passthrough"
I1017 08:09:34.283290       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1017 08:09:34.283297       1 clientconn.go:948] ClientConn switching balancer to "pick_first"

==> kube-apiserver [ad5dfe422296] <==
I1017 08:07:45.157449       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W1017 08:07:45.157465       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157465       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157494       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157504       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157528       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157532       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1017 08:07:45.157537       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W1017 08:07:45.157550       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157564       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1017 08:07:45.157573       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W1017 08:07:45.157587       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157591       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157599       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157603       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157613       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1017 08:07:45.157618       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W1017 08:07:45.157635       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157684       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1017 08:07:45.157685       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1017 08:07:45.157705       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1017 08:07:45.157725       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W1017 08:07:45.157763       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1017 08:07:45.157762       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1017 08:07:45.157815       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W1017 08:07:45.157836       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1017 08:07:45.157845       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1017 08:07:45.157888       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W1017 08:07:45.157938       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157964       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157980       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157989       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.158016       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157545       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.158011       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.158047       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.158050       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1017 08:07:45.157342       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W1017 08:07:45.157909       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.157912       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1017 08:07:45.158083       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1017 08:07:45.157927       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1017 08:07:45.158126       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1017 08:07:45.158166       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W1017 08:07:45.158155       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1017 08:07:45.158251       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W1017 08:07:45.158316       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.158327       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.158361       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1017 08:07:45.158384       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W1017 08:07:45.158399       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1017 08:07:45.158469       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W1017 08:07:45.158490       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.158503       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
E1017 08:07:45.158478       1 controller.go:184] rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field
I1017 08:07:45.158511       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W1017 08:07:45.158536       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.158737       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.158754       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1017 08:07:45.158764       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...

==> kube-controller-manager [814d70929630] <==
I1017 08:07:19.118206       1 resource_quota_monitor.go:303] QuotaMonitor running
I1017 08:07:19.131319       1 controllermanager.go:549] Started "serviceaccount"
I1017 08:07:19.131360       1 serviceaccounts_controller.go:117] Starting service account controller
I1017 08:07:19.131368       1 shared_informer.go:240] Waiting for caches to sync for service account
I1017 08:07:19.266046       1 controllermanager.go:549] Started "tokencleaner"
I1017 08:07:19.266087       1 tokencleaner.go:118] Starting token cleaner controller
I1017 08:07:19.266098       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
I1017 08:07:19.266103       1 shared_informer.go:247] Caches are synced for token_cleaner 
W1017 08:07:19.272081       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1017 08:07:19.277710       1 shared_informer.go:247] Caches are synced for PV protection 
I1017 08:07:19.289320       1 shared_informer.go:247] Caches are synced for endpoint 
I1017 08:07:19.308886       1 shared_informer.go:247] Caches are synced for TTL 
I1017 08:07:19.316182       1 shared_informer.go:247] Caches are synced for PVC protection 
I1017 08:07:19.316506       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
I1017 08:07:19.316611       1 shared_informer.go:247] Caches are synced for ReplicaSet 
I1017 08:07:19.316819       1 shared_informer.go:247] Caches are synced for ReplicationController 
I1017 08:07:19.317033       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
I1017 08:07:19.319593       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
I1017 08:07:19.319606       1 shared_informer.go:247] Caches are synced for job 
I1017 08:07:19.355989       1 shared_informer.go:247] Caches are synced for persistent volume 
I1017 08:07:19.356026       1 shared_informer.go:247] Caches are synced for service account 
I1017 08:07:19.357779       1 shared_informer.go:247] Caches are synced for HPA 
I1017 08:07:19.363005       1 shared_informer.go:247] Caches are synced for namespace 
I1017 08:07:19.363650       1 shared_informer.go:247] Caches are synced for node 
I1017 08:07:19.363695       1 range_allocator.go:172] Starting range CIDR allocator
I1017 08:07:19.363699       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I1017 08:07:19.363703       1 shared_informer.go:247] Caches are synced for cidrallocator 
I1017 08:07:19.365558       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
I1017 08:07:19.365596       1 shared_informer.go:247] Caches are synced for deployment 
I1017 08:07:19.365880       1 shared_informer.go:247] Caches are synced for endpoint_slice 
I1017 08:07:19.366231       1 shared_informer.go:247] Caches are synced for attach detach 
I1017 08:07:19.366264       1 shared_informer.go:247] Caches are synced for GC 
I1017 08:07:19.366524       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
I1017 08:07:19.366853       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
I1017 08:07:19.366896       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
I1017 08:07:19.367143       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
I1017 08:07:19.369130       1 shared_informer.go:247] Caches are synced for taint 
I1017 08:07:19.369162       1 shared_informer.go:247] Caches are synced for daemon sets 
I1017 08:07:19.369221       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
I1017 08:07:19.369232       1 taint_manager.go:187] Starting NoExecuteTaintManager
W1017 08:07:19.369260       1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1017 08:07:19.369321       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
I1017 08:07:19.369379       1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1017 08:07:19.369640       1 shared_informer.go:247] Caches are synced for expand 
I1017 08:07:19.370625       1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I1017 08:07:19.371043       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1"
I1017 08:07:19.377325       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-49srb"
I1017 08:07:19.464667       1 event.go:291] "Event occurred" object="kube-system/kube-flannel-ds-amd64" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-flannel-ds-amd64-rnpls"
I1017 08:07:19.464695       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rj4gn"
I1017 08:07:19.466628       1 shared_informer.go:247] Caches are synced for stateful set 
I1017 08:07:19.582948       1 shared_informer.go:247] Caches are synced for disruption 
I1017 08:07:19.582980       1 disruption.go:339] Sending events to api server.
I1017 08:07:19.655855       1 shared_informer.go:247] Caches are synced for resource quota 
I1017 08:07:19.659156       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1017 08:07:19.882310       1 shared_informer.go:247] Caches are synced for garbage collector 
I1017 08:07:19.882345       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1017 08:07:19.959451       1 shared_informer.go:247] Caches are synced for garbage collector 
I1017 08:07:20.255987       1 request.go:645] Throttling request took 1.0383517s, request: GET:https://192.168.49.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
I1017 08:07:20.967852       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I1017 08:07:20.967895       1 shared_informer.go:247] Caches are synced for resource quota 

==> kube-controller-manager [f807f415af3e] <==
I1017 08:08:37.300470       1 controllermanager.go:549] Started "clusterrole-aggregation"
W1017 08:08:37.300499       1 controllermanager.go:541] Skipping "root-ca-cert-publisher"
I1017 08:08:37.300603       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I1017 08:08:37.300623       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
I1017 08:08:37.305855       1 controllermanager.go:549] Started "endpoint"
W1017 08:08:37.305884       1 controllermanager.go:541] Skipping "ephemeral-volume"
I1017 08:08:37.305899       1 endpoints_controller.go:184] Starting endpoint controller
I1017 08:08:37.305906       1 shared_informer.go:240] Waiting for caches to sync for endpoint
I1017 08:08:37.310797       1 controllermanager.go:549] Started "replicationcontroller"
I1017 08:08:37.310884       1 replica_set.go:182] Starting replicationcontroller controller
I1017 08:08:37.310893       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
I1017 08:08:37.315703       1 controllermanager.go:549] Started "replicaset"
I1017 08:08:37.315753       1 replica_set.go:182] Starting replicaset controller
I1017 08:08:37.315761       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
I1017 08:08:37.316103       1 shared_informer.go:240] Waiting for caches to sync for resource quota
W1017 08:08:37.357141       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1017 08:08:37.359960       1 shared_informer.go:247] Caches are synced for service account 
I1017 08:08:37.360518       1 shared_informer.go:247] Caches are synced for PV protection 
I1017 08:08:37.360913       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
I1017 08:08:37.360925       1 shared_informer.go:247] Caches are synced for persistent volume 
I1017 08:08:37.361095       1 shared_informer.go:247] Caches are synced for job 
I1017 08:08:37.361172       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
I1017 08:08:37.361386       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
I1017 08:08:37.361664       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
I1017 08:08:37.363402       1 shared_informer.go:247] Caches are synced for node 
I1017 08:08:37.363448       1 range_allocator.go:172] Starting range CIDR allocator
I1017 08:08:37.363453       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I1017 08:08:37.363455       1 shared_informer.go:247] Caches are synced for cidrallocator 
I1017 08:08:37.368645       1 shared_informer.go:247] Caches are synced for taint 
I1017 08:08:37.368723       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
I1017 08:08:37.368732       1 taint_manager.go:187] Starting NoExecuteTaintManager
W1017 08:08:37.368760       1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1017 08:08:37.368797       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
I1017 08:08:37.368906       1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1017 08:08:37.375239       1 shared_informer.go:247] Caches are synced for namespace 
I1017 08:08:37.380893       1 shared_informer.go:247] Caches are synced for daemon sets 
I1017 08:08:37.380981       1 shared_informer.go:247] Caches are synced for HPA 
I1017 08:08:37.382115       1 shared_informer.go:247] Caches are synced for expand 
I1017 08:08:37.386459       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
I1017 08:08:37.400917       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
I1017 08:08:37.406196       1 shared_informer.go:247] Caches are synced for endpoint 
I1017 08:08:37.410520       1 shared_informer.go:247] Caches are synced for TTL 
I1017 08:08:37.410716       1 shared_informer.go:247] Caches are synced for deployment 
I1017 08:08:37.410957       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
I1017 08:08:37.411697       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
I1017 08:08:37.411721       1 shared_informer.go:247] Caches are synced for PVC protection 
I1017 08:08:37.415996       1 shared_informer.go:247] Caches are synced for ReplicaSet 
I1017 08:08:37.423130       1 shared_informer.go:247] Caches are synced for stateful set 
I1017 08:08:37.457478       1 shared_informer.go:247] Caches are synced for endpoint_slice 
I1017 08:08:37.458211       1 shared_informer.go:247] Caches are synced for GC 
I1017 08:08:37.561264       1 shared_informer.go:247] Caches are synced for disruption 
I1017 08:08:37.561318       1 disruption.go:339] Sending events to api server.
I1017 08:08:37.575122       1 shared_informer.go:247] Caches are synced for attach detach 
I1017 08:08:37.611024       1 shared_informer.go:247] Caches are synced for ReplicationController 
I1017 08:08:37.615805       1 shared_informer.go:247] Caches are synced for resource quota 
I1017 08:08:37.616252       1 shared_informer.go:247] Caches are synced for resource quota 
I1017 08:08:37.669078       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1017 08:08:37.969314       1 shared_informer.go:247] Caches are synced for garbage collector 
I1017 08:08:37.976110       1 shared_informer.go:247] Caches are synced for garbage collector 
I1017 08:08:37.976179       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage

==> kube-proxy [991c1482875d] <==
I1017 08:07:20.480469       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
I1017 08:07:20.480522       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
W1017 08:07:20.563448       1 proxier.go:639] Failed to read file /lib/modules/4.19.128-microsoft-standard/modules.builtin with error open /lib/modules/4.19.128-microsoft-standard/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1017 08:07:20.564781       1 proxier.go:649] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1017 08:07:20.566632       1 proxier.go:649] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1017 08:07:20.568877       1 proxier.go:649] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1017 08:07:20.570637       1 proxier.go:649] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1017 08:07:20.572266       1 proxier.go:649] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1017 08:07:20.572453       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I1017 08:07:20.572589       1 server_others.go:186] Using iptables Proxier.
I1017 08:07:20.573033       1 server.go:650] Version: v1.19.2
I1017 08:07:20.573450       1 conntrack.go:52] Setting nf_conntrack_max to 524288
E1017 08:07:20.573815       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I1017 08:07:20.573936       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1017 08:07:20.573981       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1017 08:07:20.574184       1 config.go:315] Starting service config controller
I1017 08:07:20.574210       1 shared_informer.go:240] Waiting for caches to sync for service config
I1017 08:07:20.574275       1 config.go:224] Starting endpoint slice config controller
I1017 08:07:20.574305       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1017 08:07:20.674395       1 shared_informer.go:247] Caches are synced for service config 
I1017 08:07:20.674427       1 shared_informer.go:247] Caches are synced for endpoint slice config 

==> kube-proxy [c25e818f5bc3] <==
I1017 08:08:23.174209       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
I1017 08:08:23.174306       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
W1017 08:08:23.275810       1 proxier.go:639] Failed to read file /lib/modules/4.19.128-microsoft-standard/modules.builtin with error open /lib/modules/4.19.128-microsoft-standard/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1017 08:08:23.278376       1 proxier.go:649] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1017 08:08:23.281791       1 proxier.go:649] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1017 08:08:23.283277       1 proxier.go:649] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1017 08:08:23.285238       1 proxier.go:649] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1017 08:08:23.287110       1 proxier.go:649] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1017 08:08:23.287295       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I1017 08:08:23.287387       1 server_others.go:186] Using iptables Proxier.
I1017 08:08:23.287732       1 server.go:650] Version: v1.19.2
I1017 08:08:23.288161       1 conntrack.go:52] Setting nf_conntrack_max to 524288
E1017 08:08:23.288475       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I1017 08:08:23.288598       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1017 08:08:23.288687       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1017 08:08:23.288942       1 config.go:315] Starting service config controller
I1017 08:08:23.288977       1 config.go:224] Starting endpoint slice config controller
I1017 08:08:23.288978       1 shared_informer.go:240] Waiting for caches to sync for service config
I1017 08:08:23.288987       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1017 08:08:23.389197       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I1017 08:08:23.389196       1 shared_informer.go:247] Caches are synced for service config 

==> kube-scheduler [bc822a4a0b96] <==
I1017 08:08:18.573478       1 registry.go:173] Registering SelectorSpread plugin
I1017 08:08:18.573543       1 registry.go:173] Registering SelectorSpread plugin
I1017 08:08:19.257473       1 serving.go:331] Generated self-signed cert in-memory
W1017 08:08:22.064298       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1017 08:08:22.064342       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1017 08:08:22.064396       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
W1017 08:08:22.064404       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1017 08:08:22.162048       1 registry.go:173] Registering SelectorSpread plugin
I1017 08:08:22.162083       1 registry.go:173] Registering SelectorSpread plugin
I1017 08:08:22.167796       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1017 08:08:22.167858       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1017 08:08:22.256113       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1017 08:08:22.256278       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1017 08:08:22.268098       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

==> kube-scheduler [cf195cf5d665] <==
I1017 08:06:57.975500       1 registry.go:173] Registering SelectorSpread plugin
I1017 08:06:57.975564       1 registry.go:173] Registering SelectorSpread plugin
I1017 08:06:58.491333       1 serving.go:331] Generated self-signed cert in-memory
W1017 08:07:01.258824       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1017 08:07:01.258873       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1017 08:07:01.258900       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
W1017 08:07:01.258906       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1017 08:07:01.267411       1 registry.go:173] Registering SelectorSpread plugin
I1017 08:07:01.267446       1 registry.go:173] Registering SelectorSpread plugin
I1017 08:07:01.269608       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1017 08:07:01.269675       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1017 08:07:01.269680       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1017 08:07:01.269688       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1017 08:07:01.358092       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1017 08:07:01.358850       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1017 08:07:01.358925       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1017 08:07:01.358952       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1017 08:07:01.359076       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1017 08:07:01.359082       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1017 08:07:01.359079       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1017 08:07:01.359083       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1017 08:07:01.359156       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1017 08:07:01.360164       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1017 08:07:01.360398       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1017 08:07:01.360160       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1017 08:07:01.360752       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1017 08:07:02.316428       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1017 08:07:02.389012       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1017 08:07:02.409265       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1017 08:07:02.456913       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1017 08:07:02.498831       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I1017 08:07:02.969987       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

==> kubelet <==
-- Logs begin at Sat 2020-10-17 08:08:07 UTC, end at Sat 2020-10-17 08:09:54 UTC. --
Oct 17 08:08:22 minikube kubelet[817]: I1017 08:08:22.357173     817 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "run" (UniqueName: "kubernetes.io/host-path/f3031cb9-8f41-4669-a00c-97be66594238-run") pod "kube-flannel-ds-amd64-rnpls" (UID: "f3031cb9-8f41-4669-a00c-97be66594238")
Oct 17 08:08:22 minikube kubelet[817]: I1017 08:08:22.357250     817 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/f3031cb9-8f41-4669-a00c-97be66594238-cni") pod "kube-flannel-ds-amd64-rnpls" (UID: "f3031cb9-8f41-4669-a00c-97be66594238")
Oct 17 08:08:22 minikube kubelet[817]: I1017 08:08:22.357245     817 kubelet_node_status.go:108] Node minikube was previously registered
Oct 17 08:08:22 minikube kubelet[817]: I1017 08:08:22.357279     817 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-mgxbq" (UniqueName: "kubernetes.io/secret/4bc69e28-b4f7-41ab-8923-dfd7c8b3452c-kube-proxy-token-mgxbq") pod "kube-proxy-rj4gn" (UID: "4bc69e28-b4f7-41ab-8923-dfd7c8b3452c")
Oct 17 08:08:22 minikube kubelet[817]: I1017 08:08:22.357302     817 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/f3031cb9-8f41-4669-a00c-97be66594238-flannel-cfg") pod "kube-flannel-ds-amd64-rnpls" (UID: "f3031cb9-8f41-4669-a00c-97be66594238")
Oct 17 08:08:22 minikube kubelet[817]: I1017 08:08:22.357326     817 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/4bc69e28-b4f7-41ab-8923-dfd7c8b3452c-lib-modules") pod "kube-proxy-rj4gn" (UID: "4bc69e28-b4f7-41ab-8923-dfd7c8b3452c")
Oct 17 08:08:22 minikube kubelet[817]: I1017 08:08:22.357364     817 kubelet_node_status.go:73] Successfully registered node minikube
Oct 17 08:08:22 minikube kubelet[817]: I1017 08:08:22.357367     817 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/4bc69e28-b4f7-41ab-8923-dfd7c8b3452c-kube-proxy") pod "kube-proxy-rj4gn" (UID: "4bc69e28-b4f7-41ab-8923-dfd7c8b3452c")
Oct 17 08:08:22 minikube kubelet[817]: I1017 08:08:22.357876     817 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7-config-volume") pod "coredns-f9fd979d6-49srb" (UID: "fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7")
Oct 17 08:08:22 minikube kubelet[817]: I1017 08:08:22.357958     817 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-token-2d2x2" (UniqueName: "kubernetes.io/secret/f3031cb9-8f41-4669-a00c-97be66594238-flannel-token-2d2x2") pod "kube-flannel-ds-amd64-rnpls" (UID: "f3031cb9-8f41-4669-a00c-97be66594238")
Oct 17 08:08:22 minikube kubelet[817]: I1017 08:08:22.357991     817 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-ml9zb" (UniqueName: "kubernetes.io/secret/fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7-coredns-token-ml9zb") pod "coredns-f9fd979d6-49srb" (UID: "fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7")
Oct 17 08:08:22 minikube kubelet[817]: I1017 08:08:22.458369     817 reconciler.go:157] Reconciler: start to sync state
Oct 17 08:08:22 minikube kubelet[817]: I1017 08:08:22.575697     817 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 3ea9d70490bbd1fed39e97492a4e39b93a8ab23572a634940b2ae482257790b6
Oct 17 08:08:22 minikube kubelet[817]: W1017 08:08:22.664962     817 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f863ed523a46af3ee103ab96a1b8857c05b2f4a7b8a26f6f9711f734c4d2194b"
Oct 17 08:08:22 minikube kubelet[817]: W1017 08:08:22.972289     817 pod_container_deletor.go:79] Container "80671e51137646030cc8452b01a6b3da7b2467fe3cc2614c1961505a4058ab89" not found in pod's containers
Oct 17 08:08:23 minikube kubelet[817]: E1017 08:08:23.075326     817 cni.go:366] Error adding kube-system_coredns-f9fd979d6-49srb/c218833d115fee9d5d8b707509174277cbb2d63718d17ffbd48322fe5b63b54b to network flannel/cbr0: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:23 minikube kubelet[817]: E1017 08:08:23.357788     817 secret.go:195] Couldn't get secret kube-system/storage-provisioner-token-682tm: failed to sync secret cache: timed out waiting for the condition
Oct 17 08:08:23 minikube kubelet[817]: E1017 08:08:23.357992     817 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/25037416-fff6-468b-93dc-5e626115ac90-storage-provisioner-token-682tm podName:25037416-fff6-468b-93dc-5e626115ac90 nodeName:}" failed. No retries permitted until 2020-10-17 08:08:23.8579151 +0000 UTC m=+7.527064301 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-682tm\" (UniqueName: \"kubernetes.io/secret/25037416-fff6-468b-93dc-5e626115ac90-storage-provisioner-token-682tm\") pod \"storage-provisioner\" (UID: \"25037416-fff6-468b-93dc-5e626115ac90\") : failed to sync secret cache: timed out waiting for the condition"
Oct 17 08:08:23 minikube kubelet[817]: E1017 08:08:23.485931     817 remote_runtime.go:113] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to set up sandbox container "c218833d115fee9d5d8b707509174277cbb2d63718d17ffbd48322fe5b63b54b" network for pod "coredns-f9fd979d6-49srb": networkPlugin cni failed to set up pod "coredns-f9fd979d6-49srb_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:23 minikube kubelet[817]: E1017 08:08:23.486010     817 kuberuntime_sandbox.go:69] CreatePodSandbox for pod "coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "c218833d115fee9d5d8b707509174277cbb2d63718d17ffbd48322fe5b63b54b" network for pod "coredns-f9fd979d6-49srb": networkPlugin cni failed to set up pod "coredns-f9fd979d6-49srb_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:23 minikube kubelet[817]: E1017 08:08:23.486020     817 kuberuntime_manager.go:730] createPodSandbox for pod "coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "c218833d115fee9d5d8b707509174277cbb2d63718d17ffbd48322fe5b63b54b" network for pod "coredns-f9fd979d6-49srb": networkPlugin cni failed to set up pod "coredns-f9fd979d6-49srb_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:23 minikube kubelet[817]: E1017 08:08:23.486083     817 pod_workers.go:191] Error syncing pod fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7 ("coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)"), skipping: failed to "CreatePodSandbox" for "coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)\" failed: rpc error: code = Unknown desc = failed to set up sandbox container \"c218833d115fee9d5d8b707509174277cbb2d63718d17ffbd48322fe5b63b54b\" network for pod \"coredns-f9fd979d6-49srb\": networkPlugin cni failed to set up pod \"coredns-f9fd979d6-49srb_kube-system\" network: open /run/flannel/subnet.env: no such file or directory"
Oct 17 08:08:23 minikube kubelet[817]: W1017 08:08:23.989015     817 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-f9fd979d6-49srb_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "c218833d115fee9d5d8b707509174277cbb2d63718d17ffbd48322fe5b63b54b"
Oct 17 08:08:23 minikube kubelet[817]: W1017 08:08:23.993067     817 pod_container_deletor.go:79] Container "c218833d115fee9d5d8b707509174277cbb2d63718d17ffbd48322fe5b63b54b" not found in pod's containers
Oct 17 08:08:23 minikube kubelet[817]: W1017 08:08:23.994896     817 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "c218833d115fee9d5d8b707509174277cbb2d63718d17ffbd48322fe5b63b54b"
Oct 17 08:08:24 minikube kubelet[817]: I1017 08:08:24.002604     817 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 579a7c5dee6d80974def7cad1d7c2d46ea460f84befac005cb3b3e7ef7f6fe3d
Oct 17 08:08:24 minikube kubelet[817]: E1017 08:08:24.389976     817 cni.go:366] Error adding kube-system_coredns-f9fd979d6-49srb/c9052a85144cebb6afa0c893dd62f14f7459a97be6d62192ceb3e05bf24dbbf4 to network flannel/cbr0: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:24 minikube kubelet[817]: E1017 08:08:24.762246     817 remote_runtime.go:113] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to set up sandbox container "c9052a85144cebb6afa0c893dd62f14f7459a97be6d62192ceb3e05bf24dbbf4" network for pod "coredns-f9fd979d6-49srb": networkPlugin cni failed to set up pod "coredns-f9fd979d6-49srb_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:24 minikube kubelet[817]: E1017 08:08:24.762317     817 kuberuntime_sandbox.go:69] CreatePodSandbox for pod "coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "c9052a85144cebb6afa0c893dd62f14f7459a97be6d62192ceb3e05bf24dbbf4" network for pod "coredns-f9fd979d6-49srb": networkPlugin cni failed to set up pod "coredns-f9fd979d6-49srb_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:24 minikube kubelet[817]: E1017 08:08:24.762328     817 kuberuntime_manager.go:730] createPodSandbox for pod "coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "c9052a85144cebb6afa0c893dd62f14f7459a97be6d62192ceb3e05bf24dbbf4" network for pod "coredns-f9fd979d6-49srb": networkPlugin cni failed to set up pod "coredns-f9fd979d6-49srb_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:24 minikube kubelet[817]: E1017 08:08:24.762386     817 pod_workers.go:191] Error syncing pod fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7 ("coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)"), skipping: failed to "CreatePodSandbox" for "coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)\" failed: rpc error: code = Unknown desc = failed to set up sandbox container \"c9052a85144cebb6afa0c893dd62f14f7459a97be6d62192ceb3e05bf24dbbf4\" network for pod \"coredns-f9fd979d6-49srb\": networkPlugin cni failed to set up pod \"coredns-f9fd979d6-49srb_kube-system\" network: open /run/flannel/subnet.env: no such file or directory"
Oct 17 08:08:25 minikube kubelet[817]: W1017 08:08:25.020738     817 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-f9fd979d6-49srb_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "c9052a85144cebb6afa0c893dd62f14f7459a97be6d62192ceb3e05bf24dbbf4"
Oct 17 08:08:25 minikube kubelet[817]: W1017 08:08:25.028328     817 pod_container_deletor.go:79] Container "c9052a85144cebb6afa0c893dd62f14f7459a97be6d62192ceb3e05bf24dbbf4" not found in pod's containers
Oct 17 08:08:25 minikube kubelet[817]: W1017 08:08:25.030118     817 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "c9052a85144cebb6afa0c893dd62f14f7459a97be6d62192ceb3e05bf24dbbf4"
Oct 17 08:08:25 minikube kubelet[817]: E1017 08:08:25.407996     817 cni.go:366] Error adding kube-system_coredns-f9fd979d6-49srb/f91de396ab874c1622027846d1eb4d08bc3ef48c41f34b224d3405c6f2cf3785 to network flannel/cbr0: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:25 minikube kubelet[817]: E1017 08:08:25.581562     817 remote_runtime.go:113] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to set up sandbox container "f91de396ab874c1622027846d1eb4d08bc3ef48c41f34b224d3405c6f2cf3785" network for pod "coredns-f9fd979d6-49srb": networkPlugin cni failed to set up pod "coredns-f9fd979d6-49srb_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:25 minikube kubelet[817]: E1017 08:08:25.581641     817 kuberuntime_sandbox.go:69] CreatePodSandbox for pod "coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "f91de396ab874c1622027846d1eb4d08bc3ef48c41f34b224d3405c6f2cf3785" network for pod "coredns-f9fd979d6-49srb": networkPlugin cni failed to set up pod "coredns-f9fd979d6-49srb_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:25 minikube kubelet[817]: E1017 08:08:25.581659     817 kuberuntime_manager.go:730] createPodSandbox for pod "coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "f91de396ab874c1622027846d1eb4d08bc3ef48c41f34b224d3405c6f2cf3785" network for pod "coredns-f9fd979d6-49srb": networkPlugin cni failed to set up pod "coredns-f9fd979d6-49srb_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:25 minikube kubelet[817]: E1017 08:08:25.581742     817 pod_workers.go:191] Error syncing pod fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7 ("coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)"), skipping: failed to "CreatePodSandbox" for "coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)\" failed: rpc error: code = Unknown desc = failed to set up sandbox container \"f91de396ab874c1622027846d1eb4d08bc3ef48c41f34b224d3405c6f2cf3785\" network for pod \"coredns-f9fd979d6-49srb\": networkPlugin cni failed to set up pod \"coredns-f9fd979d6-49srb_kube-system\" network: open /run/flannel/subnet.env: no such file or directory"
Oct 17 08:08:26 minikube kubelet[817]: W1017 08:08:26.067102     817 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-f9fd979d6-49srb_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f91de396ab874c1622027846d1eb4d08bc3ef48c41f34b224d3405c6f2cf3785"
Oct 17 08:08:26 minikube kubelet[817]: W1017 08:08:26.074520     817 pod_container_deletor.go:79] Container "f91de396ab874c1622027846d1eb4d08bc3ef48c41f34b224d3405c6f2cf3785" not found in pod's containers
Oct 17 08:08:26 minikube kubelet[817]: W1017 08:08:26.076704     817 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f91de396ab874c1622027846d1eb4d08bc3ef48c41f34b224d3405c6f2cf3785"
Oct 17 08:08:26 minikube kubelet[817]: E1017 08:08:26.272380     817 cni.go:366] Error adding kube-system_coredns-f9fd979d6-49srb/facf7eb251ee5d3533a614472924f42a816c5ecb97efdd2b5f8c0ea379356878 to network flannel/cbr0: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:26 minikube kubelet[817]: E1017 08:08:26.391345     817 remote_runtime.go:113] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to set up sandbox container "facf7eb251ee5d3533a614472924f42a816c5ecb97efdd2b5f8c0ea379356878" network for pod "coredns-f9fd979d6-49srb": networkPlugin cni failed to set up pod "coredns-f9fd979d6-49srb_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:26 minikube kubelet[817]: E1017 08:08:26.391428     817 kuberuntime_sandbox.go:69] CreatePodSandbox for pod "coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "facf7eb251ee5d3533a614472924f42a816c5ecb97efdd2b5f8c0ea379356878" network for pod "coredns-f9fd979d6-49srb": networkPlugin cni failed to set up pod "coredns-f9fd979d6-49srb_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:26 minikube kubelet[817]: E1017 08:08:26.391443     817 kuberuntime_manager.go:730] createPodSandbox for pod "coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "facf7eb251ee5d3533a614472924f42a816c5ecb97efdd2b5f8c0ea379356878" network for pod "coredns-f9fd979d6-49srb": networkPlugin cni failed to set up pod "coredns-f9fd979d6-49srb_kube-system" network: open /run/flannel/subnet.env: no such file or directory
Oct 17 08:08:26 minikube kubelet[817]: E1017 08:08:26.391515     817 pod_workers.go:191] Error syncing pod fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7 ("coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)"), skipping: failed to "CreatePodSandbox" for "coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-f9fd979d6-49srb_kube-system(fd4c9d5e-3ab3-4c9e-ab9d-e19705d370e7)\" failed: rpc error: code = Unknown desc = failed to set up sandbox container \"facf7eb251ee5d3533a614472924f42a816c5ecb97efdd2b5f8c0ea379356878\" network for pod \"coredns-f9fd979d6-49srb\": networkPlugin cni failed to set up pod \"coredns-f9fd979d6-49srb_kube-system\" network: open /run/flannel/subnet.env: no such file or directory"
Oct 17 08:08:27 minikube kubelet[817]: E1017 08:08:27.075157     817 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Oct 17 08:08:27 minikube kubelet[817]: E1017 08:08:27.075204     817 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Oct 17 08:08:27 minikube kubelet[817]: W1017 08:08:27.083183     817 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-f9fd979d6-49srb_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "facf7eb251ee5d3533a614472924f42a816c5ecb97efdd2b5f8c0ea379356878"
Oct 17 08:08:27 minikube kubelet[817]: W1017 08:08:27.089768     817 pod_container_deletor.go:79] Container "facf7eb251ee5d3533a614472924f42a816c5ecb97efdd2b5f8c0ea379356878" not found in pod's containers
Oct 17 08:08:27 minikube kubelet[817]: W1017 08:08:27.091400     817 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "facf7eb251ee5d3533a614472924f42a816c5ecb97efdd2b5f8c0ea379356878"
Oct 17 08:08:37 minikube kubelet[817]: E1017 08:08:37.084189     817 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Oct 17 08:08:37 minikube kubelet[817]: E1017 08:08:37.084225     817 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Oct 17 08:08:47 minikube kubelet[817]: E1017 08:08:47.092497     817 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Oct 17 08:08:47 minikube kubelet[817]: E1017 08:08:47.092531     817 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Oct 17 08:08:57 minikube kubelet[817]: E1017 08:08:57.102781     817 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Oct 17 08:08:57 minikube kubelet[817]: E1017 08:08:57.102832     817 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Oct 17 08:09:07 minikube kubelet[817]: E1017 08:09:07.110697     817 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Oct 17 08:09:07 minikube kubelet[817]: E1017 08:09:07.110738     817 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics

==> storage-provisioner [d07c4b8b994e] <==
I1017 08:08:24.379333       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
I1017 08:08:41.775954       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1017 08:08:41.776110       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_1cacd13a-1764-4dd2-a51c-07553f9305dc!
I1017 08:08:41.776224       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1643489c-f64a-492e-8401-4c6669084789", APIVersion:"v1", ResourceVersion:"564", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_1cacd13a-1764-4dd2-a51c-07553f9305dc became leader
I1017 08:08:41.876490       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_1cacd13a-1764-4dd2-a51c-07553f9305dc!

==> storage-provisioner [e4b8ba3f7ba1] <==
I1017 08:07:41.790364       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
I1017 08:07:41.796747       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1017 08:07:41.796870       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1643489c-f64a-492e-8401-4c6669084789", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_76e51ec8-3a99-4783-b3d2-f7374a817305 became leader
I1017 08:07:41.796951       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_76e51ec8-3a99-4783-b3d2-f7374a817305!
I1017 08:07:41.897251       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_76e51ec8-3a99-4783-b3d2-f7374a817305!
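Note: the repeated CoreDNS sandbox failures in the kubelet log above ("open /run/flannel/subnet.env: no such file or directory") only mean that flannel had not yet written its subnet file when kubelet first tried to create the pod sandbox; they should clear up once the kube-flannel pod has started and written the file. A quick way to confirm flannel has recovered after a restart (assuming the docker driver and flannel's default paths) is to check for the file from inside the node:

minikube ssh -- cat /run/flannel/subnet.env

When flannel is healthy the file contains values along these lines (illustrative only, based on the 10.244.0.0/24 PodCIDR assigned above):

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true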
@tstromberg changed the title from "Bug: restarting minikube with --cni=flannel bug on Windows" to "restarting minikube with flannel CNI: cannot stat '/etc/cni/net.d/100-crio-bridge.conf': No such file or directory" on Oct 21, 2020
@tstromberg

The good news is that this is simply a warning in the output; it shouldn't impact the cluster otherwise. I'll send a PR to address it nonetheless.
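For reference, one way the conflicting-CNI-config step could be made silent on restart (a sketch only, not necessarily what the eventual PR does) is to guard the move with an existence check, since on restart /etc/cni/net.d/100-crio-bridge.conf has typically already been moved aside or was never created:

# hypothetical guard; the destination filename is illustrative
if sudo test -f /etc/cni/net.d/100-crio-bridge.conf; then
  sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.disabled
fi

With a guard like this the mv is only attempted when there is actually something to disable, so the restart no longer logs an error.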

@tstromberg added the area/cni and kind/bug labels on Oct 21, 2020
@tstromberg self-assigned this on Oct 21, 2020
@tstromberg added the priority/backlog label on Oct 21, 2020