*
* ==> Audit <==
*
|--------------|------|----------|------|---------|-------------------------------|-------------------------------|
|   Command    | Args | Profile  | User | Version |          Start Time           |           End Time            |
|--------------|------|----------|------|---------|-------------------------------|-------------------------------|
| delete       |      | minikube | aman | v1.25.2 | Thu, 23 Jun 2022 12:21:18 IST | Thu, 23 Jun 2022 12:21:22 IST |
| delete       |      | minikube | aman | v1.25.2 | Thu, 23 Jun 2022 15:30:12 IST | Thu, 23 Jun 2022 15:30:15 IST |
| update-check |      | minikube | aman | v1.25.2 | Thu, 23 Jun 2022 16:45:00 IST | Thu, 23 Jun 2022 16:45:01 IST |
|--------------|------|----------|------|---------|-------------------------------|-------------------------------|

*
* ==> Last Start <==
*
Log file created at: 2022/06/23 21:34:45
Running on machine: aman-HP-Laptop-15g-br0xx
Binary: Built with gc go1.17.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0623 21:34:45.661353 5800 out.go:297] Setting OutFile to fd 1 ...
I0623 21:34:45.661435 5800 out.go:349] isatty.IsTerminal(1) = true
I0623 21:34:45.661438 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:34:45.661443 5800 out.go:349] isatty.IsTerminal(2) = true
I0623 21:34:45.661549 5800 root.go:315] Updating PATH: /home/aman/.minikube/bin
I0623 21:34:45.661804 5800 out.go:304] Setting JSON to false
I0623 21:34:45.681480 5800 start.go:112] hostinfo: {"hostname":"aman-HP-Laptop-15g-br0xx","uptime":1637,"bootTime":1655998648,"procs":290,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-51-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"31dadff6-4f8d-4416-a89f-5afc1cd23752"}
I0623 21:34:45.681568 5800 start.go:122] virtualization: kvm host
I0623 21:34:45.683010 5800 out.go:176] 😄 minikube v1.25.2 on Ubuntu 20.04
I0623 21:34:45.683170 5800 notify.go:193] Checking for updates...
I0623 21:34:45.683539 5800 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
I0623 21:34:45.683580 5800 driver.go:344] Setting default libvirt URI to qemu:///system
I0623 21:34:45.767038 5800 docker.go:132] docker version: linux-20.10.16
I0623 21:34:45.767117 5800 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0623 21:34:45.909197 5800 info.go:263] docker info: {ID:MSU4:HBLW:WG25:CY7H:PFDS:LTXV:3DAY:VHAR:ESUT:VH7J:WUPR:LTLI Containers:28 ContainersRunning:0 ContainersPaused:0 ContainersStopped:28 Images:20 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:49 SystemTime:2022-06-23 16:04:45.826036913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:2607796224 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}}
DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/libexec/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.17.0]] Warnings:}}
I0623 21:34:45.909299 5800 docker.go:237] overlay module found
I0623 21:34:45.911032 5800 out.go:176] ✨ Using the docker driver based on existing profile
I0623 21:34:45.911052 5800 start.go:281] selected driver: docker
I0623 21:34:45.911055 5800 start.go:798] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2438 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/aman:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0623 21:34:45.911111 5800 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0623 21:34:45.911243 5800 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0623 21:34:46.045673 5800 info.go:263] docker info: {ID:MSU4:HBLW:WG25:CY7H:PFDS:LTXV:3DAY:VHAR:ESUT:VH7J:WUPR:LTLI Containers:28 ContainersRunning:0 ContainersPaused:0 ContainersStopped:28 Images:20 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:49 SystemTime:2022-06-23 16:04:45.969981928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:2607796224 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore:
ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/libexec/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.17.0]] Warnings:}}
I0623 21:34:46.066796 5800 cni.go:93] Creating CNI manager for ""
I0623 21:34:46.066840 5800 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0623 21:34:46.066858 5800 start_flags.go:302] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2438 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/aman:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0623 21:34:46.069437 5800 out.go:176] 👍 Starting control plane node minikube in cluster minikube
I0623 21:34:46.069523 5800 cache.go:120] Beginning downloading kic base image for docker with docker
I0623 21:34:46.070503 5800 out.go:176] 🚜 Pulling base image ...
I0623 21:34:46.070608 5800 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
I0623 21:34:46.070678 5800 preload.go:148] Found local preload: /home/aman/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
I0623 21:34:46.070691 5800 cache.go:57] Caching tarball of preloaded images
I0623 21:34:46.070710 5800 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon
I0623 21:34:46.071244 5800 preload.go:174] Found /home/aman/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0623 21:34:46.071274 5800 cache.go:60] Finished verifying existence of preloaded tar for v1.23.3 on docker
I0623 21:34:46.071511 5800 profile.go:148] Saving config to /home/aman/.minikube/profiles/minikube/config.json ...
I0623 21:34:46.155294 5800 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull
I0623 21:34:46.155307 5800 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load
I0623 21:34:46.155313 5800 cache.go:208] Successfully downloaded all kic artifacts
I0623 21:34:46.155339 5800 start.go:313] acquiring machines lock for minikube: {Name:mk49bb07232a6be2c1f4ab0e316cc107172e726c Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0623 21:34:46.155432 5800 start.go:317] acquired machines lock for "minikube" in 81.212µs
I0623 21:34:46.155450 5800 start.go:93] Skipping create...Using existing machine configuration
I0623 21:34:46.155452 5800 fix.go:55] fixHost starting:
I0623 21:34:46.155664 5800 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0623 21:34:46.218027 5800 fix.go:108] recreateIfNeeded on minikube: state=Stopped err=
W0623 21:34:46.218041 5800 fix.go:134] unexpected machine state, will restart:
I0623 21:34:46.219329 5800 out.go:176] 🔄 Restarting existing docker container for "minikube" ...
I0623 21:34:46.219387 5800 cli_runner.go:133] Run: docker start minikube
I0623 21:34:46.593619 5800 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0623 21:34:46.707584 5800 kic.go:420] container "minikube" state is running.
I0623 21:34:46.707871 5800 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0623 21:34:46.794062 5800 profile.go:148] Saving config to /home/aman/.minikube/profiles/minikube/config.json ...
I0623 21:34:46.794283 5800 machine.go:88] provisioning docker machine ...
I0623 21:34:46.794296 5800 ubuntu.go:169] provisioning hostname "minikube"
I0623 21:34:46.794331 5800 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 21:34:46.875344 5800 main.go:130] libmachine: Using SSH client type: native
I0623 21:34:46.875512 5800 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a12c0] 0x7a43a0 [] 0s} 127.0.0.1 39887 }
I0623 21:34:46.875522 5800 main.go:130] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0623 21:34:46.877412 5800 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0623 21:34:50.114396 5800 main.go:130] libmachine: SSH cmd err, output: : minikube
I0623 21:34:50.114483 5800 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 21:34:50.186945 5800 main.go:130] libmachine: Using SSH client type: native
I0623 21:34:50.187063 5800 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a12c0] 0x7a43a0 [] 0s} 127.0.0.1 39887 }
I0623 21:34:50.187076 5800 main.go:130] libmachine: About to run SSH command:
		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I0623 21:34:50.339909 5800 main.go:130] libmachine: SSH cmd err, output: :
I0623 21:34:50.339959 5800 ubuntu.go:175] set auth options {CertDir:/home/aman/.minikube CaCertPath:/home/aman/.minikube/certs/ca.pem CaPrivateKeyPath:/home/aman/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/aman/.minikube/machines/server.pem ServerKeyPath:/home/aman/.minikube/machines/server-key.pem ClientKeyPath:/home/aman/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/aman/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/aman/.minikube}
I0623 21:34:50.340008 5800 ubuntu.go:177] setting up certificates
I0623 21:34:50.340029 5800 provision.go:83] configureAuth start
I0623 21:34:50.340147 5800 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0623 21:34:50.429665 5800 provision.go:138] copyHostCerts
I0623 21:34:50.429703 5800 exec_runner.go:144] found /home/aman/.minikube/ca.pem, removing ...
I0623 21:34:50.429708 5800 exec_runner.go:207] rm: /home/aman/.minikube/ca.pem
I0623 21:34:50.429754 5800 exec_runner.go:151] cp: /home/aman/.minikube/certs/ca.pem --> /home/aman/.minikube/ca.pem (1070 bytes)
I0623 21:34:50.429815 5800 exec_runner.go:144] found /home/aman/.minikube/cert.pem, removing ...
I0623 21:34:50.429818 5800 exec_runner.go:207] rm: /home/aman/.minikube/cert.pem
I0623 21:34:50.429840 5800 exec_runner.go:151] cp: /home/aman/.minikube/certs/cert.pem --> /home/aman/.minikube/cert.pem (1115 bytes)
I0623 21:34:50.429880 5800 exec_runner.go:144] found /home/aman/.minikube/key.pem, removing ...
I0623 21:34:50.429882 5800 exec_runner.go:207] rm: /home/aman/.minikube/key.pem
I0623 21:34:50.429903 5800 exec_runner.go:151] cp: /home/aman/.minikube/certs/key.pem --> /home/aman/.minikube/key.pem (1675 bytes)
I0623 21:34:50.429937 5800 provision.go:112] generating server cert: /home/aman/.minikube/machines/server.pem ca-key=/home/aman/.minikube/certs/ca.pem private-key=/home/aman/.minikube/certs/ca-key.pem org=aman.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0623 21:34:50.515608 5800 provision.go:172] copyRemoteCerts
I0623 21:34:50.515661 5800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0623 21:34:50.515696 5800 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 21:34:50.579582 5800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39887 SSHKeyPath:/home/aman/.minikube/machines/minikube/id_rsa Username:docker}
I0623 21:34:50.706038 5800 ssh_runner.go:362] scp /home/aman/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes)
I0623 21:34:50.737584 5800 ssh_runner.go:362] scp /home/aman/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0623 21:34:50.761802 5800 ssh_runner.go:362] scp /home/aman/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0623 21:34:50.786210 5800 provision.go:86] duration metric: configureAuth took 446.169875ms
I0623 21:34:50.786225 5800 ubuntu.go:193] setting minikube options for container-runtime
I0623 21:34:50.786370 5800 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
I0623 21:34:50.786401 5800 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 21:34:50.852092 5800 main.go:130] libmachine: Using SSH client type: native
I0623 21:34:50.852216 5800 main.go:130] libmachine: &{{{ 0 [] [] []} docker
[0x7a12c0] 0x7a43a0 [] 0s} 127.0.0.1 39887 }
I0623 21:34:50.852222 5800 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1
I0623 21:34:51.019149 5800 main.go:130] libmachine: SSH cmd err, output: : overlay
I0623 21:34:51.019176 5800 ubuntu.go:71] root file system type: overlay
I0623 21:34:51.019654 5800 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0623 21:34:51.019771 5800 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 21:34:51.100222 5800 main.go:130] libmachine: Using SSH client type: native
I0623 21:34:51.100351 5800 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a12c0] 0x7a43a0 [] 0s} 127.0.0.1 39887 }
I0623 21:34:51.100406 5800 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0623 21:34:51.285664 5800 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0623 21:34:51.285739 5800 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 21:34:51.354099 5800 main.go:130] libmachine: Using SSH client type: native
I0623 21:34:51.354216 5800 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a12c0] 0x7a43a0 [] 0s} 127.0.0.1 39887 }
I0623 21:34:51.354228 5800 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0623 21:34:51.520712 5800 main.go:130] libmachine: SSH cmd err, output: :
I0623 21:34:51.520733 5800 machine.go:91] provisioned docker machine in 4.72644155s
I0623 21:34:51.520745 5800 start.go:267] post-start starting for "minikube" (driver="docker")
I0623 21:34:51.520755 5800 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0623 21:34:51.520821 5800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0623 21:34:51.520868 5800 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 21:34:51.590274 5800 sshutil.go:53] new ssh
client: &{IP:127.0.0.1 Port:39887 SSHKeyPath:/home/aman/.minikube/machines/minikube/id_rsa Username:docker}
I0623 21:34:51.711412 5800 ssh_runner.go:195] Run: cat /etc/os-release
I0623 21:34:51.719341 5800 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0623 21:34:51.719367 5800 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0623 21:34:51.719383 5800 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0623 21:34:51.719389 5800 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0623 21:34:51.719400 5800 filesync.go:126] Scanning /home/aman/.minikube/addons for local assets ...
I0623 21:34:51.719474 5800 filesync.go:126] Scanning /home/aman/.minikube/files for local assets ...
I0623 21:34:51.719507 5800 start.go:270] post-start completed in 198.753214ms
I0623 21:34:51.719548 5800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0623 21:34:51.719583 5800 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 21:34:51.787992 5800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39887 SSHKeyPath:/home/aman/.minikube/machines/minikube/id_rsa Username:docker}
I0623 21:34:51.894941 5800 fix.go:57] fixHost completed within 5.739472254s
I0623 21:34:51.894970 5800 start.go:80] releasing machines lock for "minikube", held for 5.739526852s
I0623 21:34:51.895242 5800 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0623 21:34:51.986656 5800 ssh_runner.go:195] Run: systemctl --version
I0623 21:34:51.986691 5800 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 21:34:51.986699 5800 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0623 21:34:51.986744 5800
cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 21:34:52.069129 5800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39887 SSHKeyPath:/home/aman/.minikube/machines/minikube/id_rsa Username:docker}
I0623 21:34:52.076515 5800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39887 SSHKeyPath:/home/aman/.minikube/machines/minikube/id_rsa Username:docker}
I0623 21:34:52.590094 5800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0623 21:34:52.627588 5800 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0623 21:34:52.645200 5800 cruntime.go:272] skipping containerd shutdown because we are bound to it
I0623 21:34:52.645257 5800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0623 21:34:52.660634 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0623 21:34:52.679136 5800 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0623 21:34:52.782710 5800 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0623 21:34:52.879067 5800 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0623 21:34:52.893878 5800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0623 21:34:52.995593 5800 ssh_runner.go:195] Run: sudo systemctl start docker
I0623 21:34:53.011067 5800 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0623 21:34:53.058504 5800 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0623 21:34:53.108118 5800 out.go:203] 🐳 Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
I0623 21:34:53.108183 5800 cli_runner.go:133] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0623 21:34:53.167646 5800 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0623 21:34:53.172785 5800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0623 21:34:53.186837 5800 out.go:176] ▪ kubelet.housekeeping-interval=5m
I0623 21:34:53.186926 5800 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
I0623 21:34:53.186971 5800 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0623 21:34:53.227664 5800 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0623 21:34:53.227682 5800 docker.go:537] Images already preloaded, skipping extraction
I0623 21:34:53.227718 5800 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0623 21:34:53.267343 5800 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0623 21:34:53.267360 5800 cache_images.go:84] Images are preloaded, skipping loading I0623 21:34:53.267401 5800 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0623 21:34:53.373395 5800 cni.go:93] Creating CNI manager for "" I0623 21:34:53.373408 5800 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0623 21:34:53.373417 5800 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0623 21:34:53.373431 5800 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0623 21:34:53.373527 5800 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "minikube" 
kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.23.3 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: systemd clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0623 21:34:53.373591 5800 kubeadm.go:936] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --housekeeping-interval=5m 
--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0623 21:34:53.373634 5800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3 I0623 21:34:53.385688 5800 binaries.go:44] Found k8s binaries, skipping transfer I0623 21:34:53.385741 5800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0623 21:34:53.395738 5800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes) I0623 21:34:53.413611 5800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0623 21:34:53.433563 5800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2029 bytes) I0623 21:34:53.452561 5800 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0623 21:34:53.457423 5800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0623 21:34:53.469999 5800 certs.go:54] Setting up /home/aman/.minikube/profiles/minikube for IP: 192.168.49.2 I0623 21:34:53.470074 5800 certs.go:182] skipping minikubeCA CA generation: /home/aman/.minikube/ca.key I0623 21:34:53.470126 5800 certs.go:182] skipping proxyClientCA CA generation: /home/aman/.minikube/proxy-client-ca.key I0623 21:34:53.470201 5800 certs.go:298] skipping minikube-user signed cert generation: 
/home/aman/.minikube/profiles/minikube/client.key I0623 21:34:53.470253 5800 certs.go:298] skipping minikube signed cert generation: /home/aman/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I0623 21:34:53.470297 5800 certs.go:298] skipping aggregator signed cert generation: /home/aman/.minikube/profiles/minikube/proxy-client.key I0623 21:34:53.470407 5800 certs.go:388] found cert: /home/aman/.minikube/certs/home/aman/.minikube/certs/ca-key.pem (1679 bytes) I0623 21:34:53.470437 5800 certs.go:388] found cert: /home/aman/.minikube/certs/home/aman/.minikube/certs/ca.pem (1070 bytes) I0623 21:34:53.470456 5800 certs.go:388] found cert: /home/aman/.minikube/certs/home/aman/.minikube/certs/cert.pem (1115 bytes) I0623 21:34:53.470473 5800 certs.go:388] found cert: /home/aman/.minikube/certs/home/aman/.minikube/certs/key.pem (1675 bytes) I0623 21:34:53.470946 5800 ssh_runner.go:362] scp /home/aman/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0623 21:34:53.495159 5800 ssh_runner.go:362] scp /home/aman/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0623 21:34:53.519866 5800 ssh_runner.go:362] scp /home/aman/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0623 21:34:53.543744 5800 ssh_runner.go:362] scp /home/aman/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0623 21:34:53.566995 5800 ssh_runner.go:362] scp /home/aman/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0623 21:34:53.593017 5800 ssh_runner.go:362] scp /home/aman/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0623 21:34:53.617574 5800 ssh_runner.go:362] scp /home/aman/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0623 21:34:53.642191 5800 ssh_runner.go:362] scp /home/aman/.minikube/proxy-client-ca.key --> 
/var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0623 21:34:53.665865 5800 ssh_runner.go:362] scp /home/aman/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0623 21:34:53.691618 5800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0623 21:34:53.710351 5800 ssh_runner.go:195] Run: openssl version I0623 21:34:53.717573 5800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0623 21:34:53.728127 5800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0623 21:34:53.733681 5800 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 22 19:58 /usr/share/ca-certificates/minikubeCA.pem I0623 21:34:53.733723 5800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0623 21:34:53.741357 5800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0623 21:34:53.752270 5800 kubeadm.go:391] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2438 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube 
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/aman:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0623 21:34:53.752383 5800 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0623 21:34:53.791109 5800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0623 21:34:53.801699 5800 kubeadm.go:402] found existing configuration files, will attempt cluster restart I0623 21:34:53.801708 5800 kubeadm.go:601] restartCluster start I0623 21:34:53.801753 5800 ssh_runner.go:195] Run: sudo test -d /data/minikube I0623 21:34:53.811223 5800 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1 stdout: stderr: I0623 21:34:53.812176 5800 kubeconfig.go:92] found "minikube" server: "https://192.168.49.2:8443" I0623 21:34:53.813839 5800 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new I0623 21:34:53.824586 5800 api_server.go:165] Checking 
apiserver status ... I0623 21:34:53.824635 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:53.845763 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:54.046241 5800 api_server.go:165] Checking apiserver status ... I0623 21:34:54.046344 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:54.093427 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:54.245904 5800 api_server.go:165] Checking apiserver status ... I0623 21:34:54.246029 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:54.296733 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:54.445987 5800 api_server.go:165] Checking apiserver status ... I0623 21:34:54.446119 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:54.501732 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:54.646067 5800 api_server.go:165] Checking apiserver status ... I0623 21:34:54.646193 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:54.696069 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:54.846410 5800 api_server.go:165] Checking apiserver status ... 
I0623 21:34:54.846552 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:54.901835 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:55.045925 5800 api_server.go:165] Checking apiserver status ... I0623 21:34:55.046079 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:55.092521 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:55.246852 5800 api_server.go:165] Checking apiserver status ... I0623 21:34:55.247001 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:55.306632 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:55.446948 5800 api_server.go:165] Checking apiserver status ... I0623 21:34:55.447088 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:55.496445 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:55.646373 5800 api_server.go:165] Checking apiserver status ... I0623 21:34:55.646503 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:55.696602 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:55.846961 5800 api_server.go:165] Checking apiserver status ... 
I0623 21:34:55.847110 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:55.897749 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:56.045958 5800 api_server.go:165] Checking apiserver status ... I0623 21:34:56.046077 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:56.095463 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:56.246018 5800 api_server.go:165] Checking apiserver status ... I0623 21:34:56.246130 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:56.296440 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:56.446799 5800 api_server.go:165] Checking apiserver status ... I0623 21:34:56.446940 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:56.494338 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:56.646758 5800 api_server.go:165] Checking apiserver status ... I0623 21:34:56.646930 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:56.702664 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:56.846068 5800 api_server.go:165] Checking apiserver status ... 
I0623 21:34:56.846226 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:56.896876 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:56.896887 5800 api_server.go:165] Checking apiserver status ... I0623 21:34:56.896923 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* W0623 21:34:56.918499 5800 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1 stdout: stderr: I0623 21:34:56.918513 5800 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition I0623 21:34:56.918517 5800 kubeadm.go:1067] stopping kube-system containers ... I0623 21:34:56.918555 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0623 21:34:56.961511 5800 docker.go:438] Stopping containers: [1cd9db27ce46 b61e539bbd5b 3954234da2d6 d335c6de28f8 1a5c5f15693b 1237cf45ecff d1ba496e720f ea9b1ae503f5 0c9aaab491c9 9ec311ba44fb a191ce07e7e5 7cdd209913af 03a43288f116 dca33a8fbec9 9401e3571ded 81614c4bfc92] I0623 21:34:56.961554 5800 ssh_runner.go:195] Run: docker stop 1cd9db27ce46 b61e539bbd5b 3954234da2d6 d335c6de28f8 1a5c5f15693b 1237cf45ecff d1ba496e720f ea9b1ae503f5 0c9aaab491c9 9ec311ba44fb a191ce07e7e5 7cdd209913af 03a43288f116 dca33a8fbec9 9401e3571ded 81614c4bfc92 I0623 21:34:57.002480 5800 ssh_runner.go:195] Run: sudo systemctl stop kubelet I0623 21:34:57.015294 5800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0623 21:34:57.025472 5800 kubeadm.go:155] found existing configuration files: -rw------- 1 root root 5639 Jun 23 15:48 /etc/kubernetes/admin.conf -rw------- 1 root root 5656 Jun 23 15:48 /etc/kubernetes/controller-manager.conf -rw------- 1 root root 1971 Jun 23 
15:48 /etc/kubernetes/kubelet.conf -rw------- 1 root root 5604 Jun 23 15:48 /etc/kubernetes/scheduler.conf I0623 21:34:57.025521 5800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf I0623 21:34:57.036760 5800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf I0623 21:34:57.046512 5800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf I0623 21:34:57.056806 5800 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1 stdout: stderr: I0623 21:34:57.056852 5800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf I0623 21:34:57.066792 5800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf I0623 21:34:57.076625 5800 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1 stdout: stderr: I0623 21:34:57.076667 5800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf I0623 21:34:57.085935 5800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0623 21:34:57.095842 5800 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml I0623 21:34:57.095856 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml" I0623 21:34:57.144882 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase 
kubeconfig all --config /var/tmp/minikube/kubeadm.yaml" I0623 21:34:57.653567 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml" I0623 21:34:57.825256 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml" I0623 21:34:57.883151 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml" I0623 21:34:57.962717 5800 api_server.go:51] waiting for apiserver process to appear ... I0623 21:34:57.962766 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0623 21:34:58.510778 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0623 21:34:59.010238 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0623 21:34:59.510218 5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0623 21:34:59.629529 5800 api_server.go:71] duration metric: took 1.666816966s to wait for apiserver process to appear ... I0623 21:34:59.629551 5800 api_server.go:87] waiting for apiserver healthz status ... I0623 21:34:59.629561 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0623 21:35:04.629811 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:35:05.130150 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0623 21:35:10.131134 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:35:10.630315 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0623 21:35:15.631394 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:35:16.130117 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0623 21:35:21.130857 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:35:21.630708 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0623 21:35:26.631672 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:35:27.130127 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0623 21:35:32.130975 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:35:32.630761 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0623 21:35:37.631231 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:35:38.130948 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0623 21:35:43.131943 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:35:43.630697 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0623 21:35:48.631528 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:35:49.130080 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0623 21:35:54.130526 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:35:54.630312 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0623 21:35:59.630994 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:35:59.631333 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0623 21:35:59.702440 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9] I0623 21:35:59.702482 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0623 21:35:59.737450 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af] I0623 21:35:59.737500 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0623 21:35:59.777859 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b] I0623 21:35:59.777912 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0623 21:35:59.813027 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb] I0623 21:35:59.813076 5800 
ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0623 21:35:59.852135 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6] I0623 21:35:59.852199 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0623 21:35:59.891303 5800 logs.go:274] 0 containers: [] W0623 21:35:59.891315 5800 logs.go:276] No container was found matching "kubernetes-dashboard" I0623 21:35:59.891352 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0623 21:35:59.929729 5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8] I0623 21:35:59.929775 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0623 21:35:59.966076 5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5] I0623 21:35:59.966096 5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ... I0623 21:35:59.966103 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e" I0623 21:36:00.014879 5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ... I0623 21:36:00.014892 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9" I0623 21:36:00.065545 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ... I0623 21:36:00.065558 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7" I0623 21:36:00.103028 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ... I0623 21:36:00.103040 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f" I0623 21:36:00.140349 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ... I0623 21:36:00.140362 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538" I0623 21:36:00.177062 5800 logs.go:123] Gathering logs for kubelet ... 
I0623 21:36:00.177077 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0623 21:36:00.223743 5800 logs.go:123] Gathering logs for describe nodes ... I0623 21:36:00.223757 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0623 21:36:00.502294 5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ... I0623 21:36:00.502309 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b" I0623 21:36:00.545795 5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ... I0623 21:36:00.545808 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8" I0623 21:36:00.584875 5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ... I0623 21:36:00.584890 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b" I0623 21:36:00.626539 5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ... I0623 21:36:00.626552 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb" I0623 21:36:00.672348 5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ... I0623 21:36:00.672360 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8" I0623 21:36:00.709674 5800 logs.go:123] Gathering logs for container status ... I0623 21:36:00.709685 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0623 21:36:00.746905 5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ... I0623 21:36:00.746916 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af" I0623 21:36:00.795642 5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ... I0623 21:36:00.795654 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8" I0623 21:36:00.833539 5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ... 
I0623 21:36:00.833551 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5" I0623 21:36:00.884279 5800 logs.go:123] Gathering logs for dmesg ... I0623 21:36:00.884293 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0623 21:36:00.898360 5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ... I0623 21:36:00.898375 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db" I0623 21:36:00.948832 5800 logs.go:123] Gathering logs for Docker ... I0623 21:36:00.948845 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0623 21:36:00.968670 5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ... I0623 21:36:00.968682 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6" I0623 21:36:01.020465 5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ... I0623 21:36:01.020476 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6" I0623 21:36:03.559495 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0623 21:36:08.560793 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:36:08.630206 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0623 21:36:08.789014 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9] I0623 21:36:08.789073 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0623 21:36:08.824512 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af] I0623 21:36:08.824560 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0623 21:36:08.865239 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b] I0623 21:36:08.865284 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0623 21:36:08.908506 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb] I0623 21:36:08.908554 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0623 21:36:08.944998 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6] I0623 21:36:08.945052 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0623 21:36:08.984794 5800 logs.go:274] 0 containers: [] W0623 21:36:08.984807 5800 logs.go:276] No container was found matching "kubernetes-dashboard" I0623 21:36:08.984839 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0623 21:36:09.020169 5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8] I0623 21:36:09.020209 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0623 21:36:09.059916 5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5] I0623 21:36:09.059932 5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ... 
I0623 21:36:09.059938 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:36:09.103199 5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:36:09.103211 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:36:09.144872 5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:36:09.144884 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:36:09.186213 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:36:09.186227 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:36:09.225138 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:36:09.225152 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:36:09.245146 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:36:09.245158 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:36:09.258214 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:36:09.258225 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:36:09.349797 5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:36:09.349810 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:36:09.405414 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:36:09.405430 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:36:09.464896 5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:36:09.464908 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:36:09.503357 5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:36:09.503369 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:36:09.558841 5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:36:09.558858 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:36:09.615116 5800 logs.go:123] Gathering logs for container status ...
I0623 21:36:09.615133 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:36:09.667741 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:36:09.667754 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:36:09.715220 5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:36:09.715234 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:36:09.759115 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:36:09.759127 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:36:09.796792 5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:36:09.796808 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:36:09.857578 5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:36:09.857591 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:36:09.907796 5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:36:09.907813 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:36:09.949567 5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:36:09.949581 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:36:09.991258 5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:36:09.991271 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:36:12.530603 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:36:17.531829 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:36:17.630428 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:36:17.700634 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:36:17.700689 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:36:17.737195 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:36:17.737244 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:36:17.776777 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:36:17.776830 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:36:17.811316 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:36:17.811363 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:36:17.846800 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:36:17.846851 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:36:17.881787 5800 logs.go:274] 0 containers: []
W0623 21:36:17.881798 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:36:17.881835 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:36:17.916947 5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:36:17.916992 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:36:17.956072 5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:36:17.956094 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:36:17.956103 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:36:17.975134 5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:36:17.975150 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:36:18.038350 5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:36:18.038375 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:36:18.097650 5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:36:18.097662 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:36:18.136737 5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:36:18.136750 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:36:18.178925 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:36:18.178937 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:36:18.198356 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:36:18.198369 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:36:18.287690 5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:36:18.287704 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:36:18.332468 5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:36:18.332481 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:36:18.369974 5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:36:18.369986 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:36:18.412682 5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:36:18.412699 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:36:18.470843 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:36:18.470861 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:36:18.517458 5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:36:18.517472 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:36:18.561711 5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:36:18.561748 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:36:18.604473 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:36:18.604488 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:36:18.642707 5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:36:18.642720 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:36:18.690123 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:36:18.690136 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:36:18.727911 5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:36:18.727923 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:36:18.771844 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:36:18.771860 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:36:18.809284 5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:36:18.809295 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:36:18.865294 5800 logs.go:123] Gathering logs for container status ...
I0623 21:36:18.865308 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:36:21.400409 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:36:26.400880 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:36:26.630608 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:36:26.695015 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:36:26.695069 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:36:26.736072 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:36:26.736125 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:36:26.771845 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:36:26.771894 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:36:26.808246 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:36:26.808305 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:36:26.843526 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:36:26.843583 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:36:26.877102 5800 logs.go:274] 0 containers: []
W0623 21:36:26.877115 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:36:26.877154 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:36:26.913903 5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:36:26.913962 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:36:26.957090 5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:36:26.957112 5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:36:26.957120 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:36:26.998829 5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:36:26.998845 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:36:27.044906 5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:36:27.044918 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:36:27.083744 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:36:27.083757 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:36:27.122563 5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:36:27.122574 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:36:27.169838 5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:36:27.169850 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:36:27.211023 5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:36:27.211035 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:36:27.260680 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:36:27.260695 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:36:27.299770 5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:36:27.299785 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:36:27.369017 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:36:27.369031 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:36:27.386471 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:36:27.386484 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:36:27.471302 5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:36:27.471315 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:36:27.512152 5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:36:27.512163 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:36:27.562930 5800 logs.go:123] Gathering logs for container status ...
I0623 21:36:27.562975 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:36:27.598696 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:36:27.598708 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:36:27.643331 5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:36:27.643347 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:36:27.682036 5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:36:27.682050 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:36:27.720748 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:36:27.720764 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:36:27.739732 5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:36:27.739745 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:36:27.783303 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:36:27.783316 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:36:27.822034 5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:36:27.822045 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:36:30.361267 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:36:35.362548 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:36:35.630860 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:36:35.694586 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:36:35.694638 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:36:35.730377 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:36:35.730423 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:36:35.766557 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:36:35.766619 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:36:35.801817 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:36:35.801871 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:36:35.836498 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:36:35.836556 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:36:35.873137 5800 logs.go:274] 0 containers: []
W0623 21:36:35.873150 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:36:35.873184 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:36:35.909962 5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:36:35.910015 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:36:35.953409 5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:36:35.953428 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:36:35.953436 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:36:35.993066 5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:36:35.993078 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:36:36.030840 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:36:36.030855 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:36:36.072627 5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:36:36.072640 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:36:36.118828 5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:36:36.118840 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:36:36.169182 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:36:36.169194 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:36:36.215674 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:36:36.215688 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:36:36.229666 5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:36:36.229682 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:36:36.272126 5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:36:36.272141 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:36:36.310941 5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:36:36.310954 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:36:36.350482 5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:36:36.350497 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:36:36.396748 5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:36:36.396762 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:36:36.448938 5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:36:36.448951 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:36:36.514162 5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:36:36.514176 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:36:36.564127 5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:36:36.564141 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:36:36.610873 5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:36:36.610890 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:36:36.658571 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:36:36.658585 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:36:36.695882 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:36:36.695896 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:36:36.714948 5800 logs.go:123] Gathering logs for container status ...
I0623 21:36:36.714963 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:36:36.749581 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:36:36.749592 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:36:36.845684 5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:36:36.845701 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:36:39.389972 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:36:44.391534 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:36:44.630054 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:36:44.689355 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:36:44.689403 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:36:44.735377 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:36:44.735425 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:36:44.776403 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:36:44.776451 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:36:44.817656 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:36:44.817707 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:36:44.853442 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:36:44.853495 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:36:44.888871 5800 logs.go:274] 0 containers: []
W0623 21:36:44.888882 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:36:44.888918 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:36:44.925113 5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:36:44.925159 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:36:44.960818 5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:36:44.960839 5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:36:44.960847 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:36:44.999170 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:36:44.999185 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:36:45.036955 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:36:45.036968 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:36:45.055882 5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:36:45.055894 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:36:45.100773 5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:36:45.100788 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:36:45.143268 5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:36:45.143282 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:36:45.183197 5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:36:45.183209 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:36:45.230353 5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:36:45.230369 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:36:45.286202 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:36:45.286214 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:36:45.299325 5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:36:45.299336 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:36:45.346762 5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:36:45.346775 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:36:45.394434 5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:36:45.394446 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:36:45.442757 5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:36:45.442769 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:36:45.492380 5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:36:45.492392 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:36:45.548046 5800 logs.go:123] Gathering logs for container status ...
I0623 21:36:45.548058 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:36:45.582272 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:36:45.582285 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:36:45.682128 5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:36:45.682142 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:36:45.727503 5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:36:45.727516 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:36:45.772322 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:36:45.772339 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:36:45.808843 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:36:45.808857 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:36:45.850305 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:36:45.850317 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:36:48.397467 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:36:53.398008 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:36:53.630766 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:36:53.694967 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:36:53.695021 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:36:53.739984 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:36:53.740026 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:36:53.775651 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:36:53.775707 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:36:53.810799 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:36:53.810850 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:36:53.849681 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:36:53.849731 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:36:53.888777 5800 logs.go:274] 0 containers: []
W0623 21:36:53.888789 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:36:53.888828 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:36:53.924896 5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:36:53.924939 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:36:53.964867 5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:36:53.964887 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:36:53.964896 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:36:54.008652 5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:36:54.008664 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:36:54.073188 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:36:54.073205 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:36:54.115365 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:36:54.115377 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:36:54.161744 5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:36:54.161758 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:36:54.206031 5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:36:54.206044 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:36:54.250971 5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:36:54.250984 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:36:54.307535 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:36:54.307548 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:36:54.327020 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:36:54.327035 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:36:54.340448 5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:36:54.340461 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:36:54.391213 5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:36:54.391226 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8" I0623 21:36:54.428668 5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ... I0623 21:36:54.428685 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db" I0623 21:36:54.483054 5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ... I0623 21:36:54.483066 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6" I0623 21:36:54.528098 5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ... I0623 21:36:54.528114 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5" I0623 21:36:54.579631 5800 logs.go:123] Gathering logs for container status ... I0623 21:36:54.579645 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0623 21:36:54.614122 5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ... I0623 21:36:54.614134 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e" I0623 21:36:54.657180 5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ... I0623 21:36:54.657192 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8" I0623 21:36:54.693791 5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ... I0623 21:36:54.693803 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8" I0623 21:36:54.735675 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ... I0623 21:36:54.735688 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f" I0623 21:36:54.776301 5800 logs.go:123] Gathering logs for describe nodes ... 
I0623 21:36:54.776316 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0623 21:36:54.861195 5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ... I0623 21:36:54.861207 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6" I0623 21:36:57.415541 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0623 21:37:02.415992 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:37:02.630605 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0623 21:37:02.701630 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9] I0623 21:37:02.701683 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0623 21:37:02.741512 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af] I0623 21:37:02.741559 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0623 21:37:02.779773 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b] I0623 21:37:02.779829 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0623 21:37:02.815317 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb] I0623 21:37:02.815367 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0623 21:37:02.859636 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6] I0623 21:37:02.859704 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0623 21:37:02.902762 5800 logs.go:274] 0 containers: [] W0623 21:37:02.902773 5800 logs.go:276] No container was found matching "kubernetes-dashboard" I0623 21:37:02.902811 5800 ssh_runner.go:195] Run: docker ps -a 
--filter=name=k8s_storage-provisioner --format={{.ID}} I0623 21:37:02.946594 5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8] I0623 21:37:02.946648 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0623 21:37:03.002154 5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5] I0623 21:37:03.002172 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ... I0623 21:37:03.002181 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7" I0623 21:37:03.047683 5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ... I0623 21:37:03.047695 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8" I0623 21:37:03.114703 5800 logs.go:123] Gathering logs for kubelet ... I0623 21:37:03.114715 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0623 21:37:03.165620 5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ... I0623 21:37:03.165637 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e" I0623 21:37:03.209491 5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ... I0623 21:37:03.209503 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8" I0623 21:37:03.247020 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ... I0623 21:37:03.247037 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f" I0623 21:37:03.287072 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ... I0623 21:37:03.287084 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538" I0623 21:37:03.325024 5800 logs.go:123] Gathering logs for container status ... I0623 21:37:03.325036 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0623 21:37:03.364470 5800 logs.go:123] Gathering logs for dmesg ... 
I0623 21:37:03.364483    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:37:03.381435    5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:37:03.381451    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:37:03.420471    5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:37:03.420492    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:37:03.466797    5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:37:03.466809    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:37:03.533184    5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:37:03.533198    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:37:03.625680    5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:37:03.625694    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:37:03.673658    5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:37:03.673670    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:37:03.716869    5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:37:03.716881    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:37:03.758345    5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:37:03.758358    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:37:03.811184    5800 logs.go:123] Gathering logs for Docker ...
I0623 21:37:03.811198    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:37:03.830683    5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:37:03.830697    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:37:03.881260    5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:37:03.881272    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:37:03.924059    5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:37:03.924072    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:37:06.463311    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:37:11.464731    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:37:11.630222    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:37:11.700725    5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:37:11.700772    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:37:11.736560    5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:37:11.736609    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:37:11.772099    5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:37:11.772162    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:37:11.807673    5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:37:11.807716    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:37:11.845889    5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:37:11.845942    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:37:11.893608    5800 logs.go:274] 0 containers: []
W0623 21:37:11.893621    5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:37:11.893657    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:37:11.937047    5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:37:11.937094    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:37:11.977968    5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:37:11.977993    5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:37:11.978002    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:37:12.032097    5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:37:12.032113    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:37:12.075835    5800 logs.go:123] Gathering logs for Docker ...
I0623 21:37:12.075847    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:37:12.094851    5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:37:12.094865    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:37:12.149678    5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:37:12.149690    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:37:12.203198    5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:37:12.203213    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:37:12.264442    5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:37:12.264456    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:37:12.312538    5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:37:12.312552    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:37:12.325979    5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:37:12.325995    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:37:12.416135    5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:37:12.416149    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:37:12.458600    5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:37:12.458611    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:37:12.501718    5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:37:12.501731    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:37:12.538551    5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:37:12.538565    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:37:12.577661    5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:37:12.577677    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:37:12.616090    5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:37:12.616104    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:37:12.658225    5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:37:12.658237    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:37:12.698115    5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:37:12.698127    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:37:12.735220    5800 logs.go:123] Gathering logs for container status ...
I0623 21:37:12.735233    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:37:12.775521    5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:37:12.775533    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:37:12.823067    5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:37:12.823079    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:37:12.861655    5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:37:12.861671    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:37:15.399236    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:37:20.400706    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:37:20.630561    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:37:20.707714    5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:37:20.707766    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:37:20.748208    5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:37:20.748331    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:37:20.787623    5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:37:20.787675    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:37:20.829101    5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:37:20.829150    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:37:20.866941    5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:37:20.866998    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:37:20.910172    5800 logs.go:274] 0 containers: []
W0623 21:37:20.910182    5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:37:20.910212    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:37:20.951184    5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:37:20.951234    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:37:20.995423    5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:37:20.995446    5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:37:20.995455    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:37:21.052203    5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:37:21.052216    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:37:21.094897    5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:37:21.094912    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:37:21.149406    5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:37:21.149419    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:37:21.194173    5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:37:21.194185    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:37:21.242502    5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:37:21.242513    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:37:21.281776    5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:37:21.281790    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:37:21.319556    5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:37:21.319570    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:37:21.362351    5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:37:21.362367    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:37:21.425012    5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:37:21.425028    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:37:21.473286    5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:37:21.473301    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:37:21.539342    5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:37:21.539356    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:37:21.607714    5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:37:21.607727    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:37:21.657003    5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:37:21.657018    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:37:21.702764    5800 logs.go:123] Gathering logs for Docker ...
I0623 21:37:21.702781    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:37:21.722638    5800 logs.go:123] Gathering logs for container status ...
I0623 21:37:21.722655    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:37:21.762072    5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:37:21.762084    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:37:21.773830    5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:37:21.773845    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:37:21.867892    5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:37:21.867906    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:37:21.918208    5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:37:21.918220    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:37:21.964101    5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:37:21.964115    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:37:24.511141    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:37:29.512156    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:37:29.630739    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:37:29.699921    5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:37:29.699968    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:37:29.737085    5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:37:29.737130    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:37:29.777720    5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:37:29.777771    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:37:29.814384    5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:37:29.814432    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:37:29.858180    5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:37:29.858233    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:37:29.894251    5800 logs.go:274] 0 containers: []
W0623 21:37:29.894264    5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:37:29.894304    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:37:29.931743    5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:37:29.931786    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:37:29.968406    5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:37:29.968426    5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:37:29.968434    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:37:30.009917    5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:37:30.009932    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:37:30.061431    5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:37:30.061443    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:37:30.105210    5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:37:30.105222    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:37:30.147501    5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:37:30.147515    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:37:30.185298    5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:37:30.185310    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:37:30.224593    5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:37:30.224605    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:37:30.268887    5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:37:30.268901    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:37:30.321317    5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:37:30.321327    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:37:30.361149    5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:37:30.361164    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:37:30.409613    5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:37:30.409630    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:37:30.508089    5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:37:30.508103    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:37:30.546728    5800 logs.go:123] Gathering logs for Docker ...
I0623 21:37:30.546740    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:37:30.565688    5800 logs.go:123] Gathering logs for container status ...
I0623 21:37:30.565702    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:37:30.603018    5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:37:30.603031    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:37:30.616650    5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:37:30.616663    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:37:30.663893    5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:37:30.663906    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:37:30.712749    5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:37:30.712762    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:37:30.754521    5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:37:30.754532    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:37:30.803649    5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:37:30.803661    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:37:30.864268    5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:37:30.864280    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:37:33.416572    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:37:38.417593    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:37:38.630126    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:37:38.707186    5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:37:38.707244    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:37:38.749994    5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:37:38.750044    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:37:38.791907    5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:37:38.791960    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:37:38.834153    5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:37:38.834208    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:37:38.877726    5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:37:38.877772    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:37:38.917448    5800 logs.go:274] 0 containers: []
W0623 21:37:38.917460    5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:37:38.917497    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:37:38.956240    5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:37:38.956300    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:37:38.998179    5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:37:38.998201    5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:37:38.998209    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:37:39.040352    5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:37:39.040369    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:37:39.093395    5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:37:39.093408    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:37:39.136079    5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:37:39.136093    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:37:39.185760    5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:37:39.185775    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:37:39.254651    5800 logs.go:123] Gathering logs for container status ...
I0623 21:37:39.254664    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:37:39.298019    5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:37:39.298032    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:37:39.347112    5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:37:39.347132    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:37:39.389330    5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:37:39.389341 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b" I0623 21:37:39.434863 5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ... I0623 21:37:39.434879 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9" I0623 21:37:39.490554 5800 logs.go:123] Gathering logs for describe nodes ... I0623 21:37:39.490572 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0623 21:37:39.595489 5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ... I0623 21:37:39.595505 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8" I0623 21:37:39.643045 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ... I0623 21:37:39.643057 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7" I0623 21:37:39.698345 5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ... I0623 21:37:39.698357 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5" I0623 21:37:39.753958 5800 logs.go:123] Gathering logs for kubelet ... I0623 21:37:39.753974 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0623 21:37:39.803896 5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ... I0623 21:37:39.803909 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6" I0623 21:37:39.861483 5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ... I0623 21:37:39.861500 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af" I0623 21:37:39.921144 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ... I0623 21:37:39.921157 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f" I0623 21:37:39.964857 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ... 
I0623 21:37:39.964872 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:37:40.008839 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:37:40.008856 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:37:40.030007 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:37:40.030021 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:37:42.544452 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:37:47.545790 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:37:47.630271 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:37:47.713612 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:37:47.713690 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:37:47.754377 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:37:47.754427 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:37:47.798393 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:37:47.798451 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:37:47.837096 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:37:47.837147 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:37:47.872419 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:37:47.872478 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:37:47.911864 5800 logs.go:274] 0 containers: []
W0623 21:37:47.911877 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:37:47.911917 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:37:47.959090 5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:37:47.959139 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:37:48.004002 5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:37:48.004017 5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:37:48.004023 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:37:48.059111 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:37:48.059124 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:37:48.078328 5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:37:48.078341 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:37:48.125736 5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:37:48.125748 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:37:48.172513 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:37:48.172526 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:37:48.216049 5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:37:48.216061 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:37:48.262402 5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:37:48.262420 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:37:48.310357 5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:37:48.310370 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:37:48.351453 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:37:48.351465 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:37:48.364500 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:37:48.364513 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:37:48.467797 5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:37:48.467815 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:37:48.513156 5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:37:48.513169 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:37:48.558477 5800 logs.go:123] Gathering logs for container status ...
I0623 21:37:48.558489 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:37:48.596058 5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:37:48.596073 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:37:48.650178 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:37:48.650191 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:37:48.701539 5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:37:48.701560 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:37:48.747810 5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:37:48.747826 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:37:48.806020 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:37:48.806033 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:37:48.856932 5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:37:48.856947 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:37:48.895640 5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:37:48.895653 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:37:48.934313 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:37:48.934328 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:37:51.485370 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:37:56.486233 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:37:56.630843 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:37:56.750498 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:37:56.750561 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:37:56.794631 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:37:56.794686 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:37:56.839585 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:37:56.839640 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:37:56.899221 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:37:56.899264 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:37:56.946253 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:37:56.946310 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:37:56.988927 5800 logs.go:274] 0 containers: []
W0623 21:37:56.988937 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:37:56.988971 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:37:57.033744 5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:37:57.033794 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:37:57.078507 5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:37:57.078527 5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:37:57.078534 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:37:57.119773 5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:37:57.119786 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:37:57.162254 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:37:57.162265 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:37:57.202575 5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:37:57.202587 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:37:57.243778 5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:37:57.243790 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:37:57.296726 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:37:57.296739 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:37:57.317383 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:37:57.317396 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:37:57.364870 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:37:57.364883 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:37:57.460938 5800 logs.go:123] Gathering logs for container status ...
I0623 21:37:57.460954 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:37:57.503311 5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:37:57.503323 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:37:57.556790 5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:37:57.556803 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:37:57.612572 5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:37:57.612587 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:37:57.664653 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:37:57.664672 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:37:57.709147 5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:37:57.709160 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:37:57.769793 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:37:57.769806 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:37:57.784149 5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:37:57.784162 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:37:57.831137 5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:37:57.831150 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:37:57.882929 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:37:57.882942 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:37:57.959846 5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:37:57.959861 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:37:58.014601 5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:37:58.014618 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:37:58.075219 5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:37:58.075234 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:38:00.625749 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:38:05.626203 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:38:05.630738 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:38:05.705129 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:38:05.705180 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:38:05.741848 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:38:05.741899 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:38:05.777726 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:38:05.777790 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:38:05.817209 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:38:05.817252 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:38:05.853414 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:38:05.853469 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:38:05.894640 5800 logs.go:274] 0 containers: []
W0623 21:38:05.894652 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:38:05.894688 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:38:05.931512 5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:38:05.931553 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:38:05.976039 5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:38:05.976062 5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:38:05.976069 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:38:06.051684 5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:38:06.051697 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:38:06.090042 5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:38:06.090055 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:38:06.141256 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:38:06.141270 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:38:06.154699 5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:38:06.154712 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:38:06.195691 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:38:06.195703 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:38:06.234634 5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:38:06.234651 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:38:06.292210 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:38:06.292225 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:38:06.333787 5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:38:06.333798 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:38:06.374123 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:38:06.374136 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:38:06.412875 5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:38:06.412888 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:38:06.450661 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:38:06.450676 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:38:06.496084 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:38:06.496097 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:38:06.581241 5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:38:06.581254 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:38:06.624344 5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:38:06.624356 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:38:06.673631 5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:38:06.673645 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:38:06.711711 5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:38:06.711727 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:38:06.770862 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:38:06.770878 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:38:06.790620 5800 logs.go:123] Gathering logs for container status ...
I0623 21:38:06.790637 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:38:06.828902 5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:38:06.828914 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:38:06.880918 5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:38:06.880931 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:38:09.424096 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:38:14.425384 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:38:14.630419 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:38:14.734689 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:38:14.734737 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:38:14.775677 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:38:14.775731 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:38:14.815436 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:38:14.815487 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:38:14.854349 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:38:14.854403 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:38:14.894552 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:38:14.894618 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:38:14.932882 5800 logs.go:274] 0 containers: []
W0623 21:38:14.932894 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:38:14.932933 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:38:14.982490 5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:38:14.982539 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:38:15.021994 5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:38:15.022015 5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:38:15.022023 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:38:15.093827 5800 logs.go:123] Gathering logs for container status ...
I0623 21:38:15.093843 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:38:15.134586 5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:38:15.134600 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:38:15.187687 5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:38:15.187706 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:38:15.238067 5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:38:15.238084 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:38:15.277570 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:38:15.277583 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:38:15.314892 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:38:15.314903 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:38:15.334822 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:38:15.334836 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:38:15.423205 5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:38:15.423217 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:38:15.464545 5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:38:15.464572 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:38:15.537129 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:38:15.537144 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:38:15.586982 5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:38:15.586994 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:38:15.637316 5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:38:15.637329 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:38:15.681035 5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:38:15.681047 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:38:15.725795 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:38:15.725809 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:38:15.765920 5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:38:15.765931 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:38:15.817183 5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:38:15.817195 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:38:15.862747 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:38:15.862759 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:38:15.902294 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:38:15.902310 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:38:15.916053 5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:38:15.916069 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:38:15.961983 5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:38:15.961999 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:38:18.508475 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:38:23.509272 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:38:23.630842 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:38:23.718733 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:38:23.718783 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:38:23.758413 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:38:23.758456 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:38:23.794629 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:38:23.794682 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:38:23.837509 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:38:23.837552 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:38:23.875544 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:38:23.875593 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:38:23.915797 5800 logs.go:274] 0 containers: []
W0623 21:38:23.915809 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:38:23.915849 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:38:23.951788 5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:38:23.951838 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:38:23.989634 5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:38:23.989650 5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:38:23.989656 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:38:24.027107 5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:38:24.027119 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:38:24.070794 5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:38:24.070807 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:38:24.108583 5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:38:24.108596 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:38:24.161229 5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:38:24.161242 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:38:24.200050 5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:38:24.200062 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:38:24.244440 5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:38:24.244452 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:38:24.312498 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:38:24.312512 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:38:24.333044 5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:38:24.333057 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:38:24.385989 5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:38:24.386001 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:38:24.434871 5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:38:24.434887 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:38:24.491187 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:38:24.491200 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:38:24.546891 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:38:24.546908 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:38:24.656043 5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:38:24.656059 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:38:24.710255 5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:38:24.710269 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:38:24.767527 5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:38:24.767540 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:38:24.807024 5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:38:24.807037 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:38:24.850558 5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:38:24.850575 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:38:24.891144 5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:38:24.891156 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:38:24.929620 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:38:24.929633 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:38:24.941936 5800 logs.go:123] Gathering logs for container status ...
I0623 21:38:24.941951 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:38:27.478965 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:38:32.479822 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:38:32.630452 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:38:32.694305 5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:38:32.694356 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:38:32.734715 5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:38:32.734762 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:38:32.774917 5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:38:32.774959 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:38:32.814167 5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:38:32.814219 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:38:32.849633 5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:38:32.849688 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:38:32.887528 5800 logs.go:274] 0 containers: []
W0623 21:38:32.887542    5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:38:32.887579    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:38:32.929757    5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:38:32.929810    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:38:32.983643    5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:38:32.983666    5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:38:32.983678    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:38:33.036442    5800 logs.go:123] Gathering logs for Docker ...
I0623 21:38:33.036454    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:38:33.058044    5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:38:33.058056    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:38:33.096194    5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:38:33.096208    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:38:33.134724    5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:38:33.134740    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:38:33.176070    5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:38:33.176086    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:38:33.222903    5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:38:33.222918    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:38:33.266061    5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:38:33.266073    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:38:33.304285    5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:38:33.304297    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:38:33.360617    5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:38:33.360630    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:38:33.413534    5800 logs.go:123] Gathering logs for container status ...
I0623 21:38:33.413547    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:38:33.451577    5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:38:33.451593    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:38:33.464060    5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:38:33.464076    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:38:33.555413    5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:38:33.555425    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:38:33.601072    5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:38:33.601084    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:38:33.644135    5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:38:33.644151    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:38:33.693140    5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:38:33.693156    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:38:33.737655    5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:38:33.737667    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:38:33.787925    5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:38:33.787940    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:38:33.845557    5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:38:33.845569    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:38:33.902714    5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:38:33.902727    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:38:36.451105    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:38:41.452430    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:38:41.631000    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:38:41.699362    5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:38:41.699408    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:38:41.741711    5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:38:41.741759    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:38:41.778386    5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:38:41.778431    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:38:41.814814    5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:38:41.814863    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:38:41.853589    5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:38:41.853639    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:38:41.890287    5800 logs.go:274] 0 containers: []
W0623 21:38:41.890298    5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:38:41.890338    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:38:41.932153    5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:38:41.932204    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:38:41.968387    5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:38:41.968405    5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:38:41.968413    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:38:42.007020    5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:38:42.007034    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:38:42.048539    5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:38:42.048551    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:38:42.101911    5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:38:42.101923    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:38:42.154091    5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:38:42.154105    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:38:42.201574    5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:38:42.201588    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:38:42.215606    5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:38:42.215619    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:38:42.266629    5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:38:42.266645    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:38:42.307658    5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:38:42.307671    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:38:42.358384    5800 logs.go:123] Gathering logs for Docker ...
I0623 21:38:42.358397    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:38:42.380167    5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:38:42.380180    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:38:42.463339    5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:38:42.463353    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:38:42.500864    5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:38:42.500876    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:38:42.547234    5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:38:42.547248    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:38:42.585692    5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:38:42.585704    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:38:42.628666    5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:38:42.628678    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:38:42.678166    5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:38:42.678177    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:38:42.734196    5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:38:42.734212    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:38:42.777618    5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:38:42.777629    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:38:42.818270    5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:38:42.818282    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:38:42.857686    5800 logs.go:123] Gathering logs for container status ...
I0623 21:38:42.857698    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:38:45.394869    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:38:50.395302    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:38:50.630151    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:38:50.685475    5800 logs.go:274] 2 containers: [095db28adc0e 0c9aaab491c9]
I0623 21:38:50.685526    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:38:50.725039    5800 logs.go:274] 2 containers: [ce7800fbbbf6 7cdd209913af]
I0623 21:38:50.725106    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:38:50.766602    5800 logs.go:274] 4 containers: [9cbb6d3e39b8 d54ef8e17c7b d335c6de28f8 1a5c5f15693b]
I0623 21:38:50.766659    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:38:50.804541    5800 logs.go:274] 2 containers: [fedc7c601ca7 9ec311ba44fb]
I0623 21:38:50.804597    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:38:50.843405    5800 logs.go:274] 2 containers: [d9a43cbd6c2f 3954234da2d6]
I0623 21:38:50.843466    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:38:50.891834    5800 logs.go:274] 0 containers: []
W0623 21:38:50.891847    5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:38:50.891879    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:38:50.930363    5800 logs.go:274] 2 containers: [37a0a473e538 1ab35c44f2a8]
I0623 21:38:50.930411    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:38:50.966763    5800 logs.go:274] 2 containers: [7e3a1765c4db a191ce07e7e5]
I0623 21:38:50.966784    5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:38:50.966793    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0623 21:38:51.010543    5800 logs.go:123] Gathering logs for kube-proxy [d9a43cbd6c2f] ...
I0623 21:38:51.010558    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a43cbd6c2f"
I0623 21:38:51.052003    5800 logs.go:123] Gathering logs for storage-provisioner [1ab35c44f2a8] ...
I0623 21:38:51.052015    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab35c44f2a8"
I0623 21:38:51.099979    5800 logs.go:123] Gathering logs for etcd [7cdd209913af] ...
I0623 21:38:51.099992    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cdd209913af"
I0623 21:38:51.147640    5800 logs.go:123] Gathering logs for coredns [d54ef8e17c7b] ...
I0623 21:38:51.147653    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54ef8e17c7b"
I0623 21:38:51.189233    5800 logs.go:123] Gathering logs for kube-proxy [3954234da2d6] ...
I0623 21:38:51.189248    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3954234da2d6"
I0623 21:38:51.232292    5800 logs.go:123] Gathering logs for storage-provisioner [37a0a473e538] ...
I0623 21:38:51.232305    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37a0a473e538"
I0623 21:38:51.276852    5800 logs.go:123] Gathering logs for kube-controller-manager [a191ce07e7e5] ...
I0623 21:38:51.276864    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a191ce07e7e5"
I0623 21:38:51.331720    5800 logs.go:123] Gathering logs for etcd [ce7800fbbbf6] ...
I0623 21:38:51.331732    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce7800fbbbf6"
I0623 21:38:51.393301    5800 logs.go:123] Gathering logs for kube-scheduler [9ec311ba44fb] ...
I0623 21:38:51.393316    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ec311ba44fb"
I0623 21:38:51.444102    5800 logs.go:123] Gathering logs for kube-controller-manager [7e3a1765c4db] ...
I0623 21:38:51.444116    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e3a1765c4db"
I0623 21:38:51.503779    5800 logs.go:123] Gathering logs for container status ...
I0623 21:38:51.503791    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:38:51.540805    5800 logs.go:123] Gathering logs for coredns [1a5c5f15693b] ...
I0623 21:38:51.540816    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a5c5f15693b"
I0623 21:38:51.585717    5800 logs.go:123] Gathering logs for kube-scheduler [fedc7c601ca7] ...
I0623 21:38:51.585733    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fedc7c601ca7"
I0623 21:38:51.633477    5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:38:51.633488    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:38:51.647808    5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:38:51.647820    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:38:51.738903    5800 logs.go:123] Gathering logs for kube-apiserver [095db28adc0e] ...
I0623 21:38:51.738915    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095db28adc0e"
I0623 21:38:51.784098    5800 logs.go:123] Gathering logs for kube-apiserver [0c9aaab491c9] ...
I0623 21:38:51.784110    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c9aaab491c9"
I0623 21:38:51.832075    5800 logs.go:123] Gathering logs for coredns [9cbb6d3e39b8] ...
I0623 21:38:51.832087    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cbb6d3e39b8"
I0623 21:38:51.875296    5800 logs.go:123] Gathering logs for coredns [d335c6de28f8] ...
I0623 21:38:51.875308    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d335c6de28f8"
I0623 21:38:51.923761    5800 logs.go:123] Gathering logs for Docker ...
I0623 21:38:51.923775    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:38:54.444608    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:38:59.445264    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:38:59.630696    5800 kubeadm.go:605] restartCluster took 4m5.828973125s
W0623 21:38:59.630936    5800 out.go:241] 🤦 Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
I0623 21:38:59.631040    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0623 21:39:33.891361    5800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (34.260302492s)
I0623 21:39:33.891414    5800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0623 21:39:33.906003    5800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0623 21:39:33.916512    5800 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0623 21:39:33.916555    5800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0623 21:39:33.927066    5800 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0623 21:39:33.927092    5800 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0623 21:39:43.485534    5800 out.go:203] ▪ Generating certificates and keys ...
I0623 21:39:43.487960    5800 out.go:203] ▪ Booting up control plane ...
I0623 21:39:43.490458    5800 out.go:203] ▪ Configuring RBAC rules ...
I0623 21:39:43.492961    5800 cni.go:93] Creating CNI manager for ""
I0623 21:39:43.492972    5800 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0623 21:39:43.492993    5800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0623 21:39:43.493134    5800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0623 21:39:43.493203    5800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2022_06_23T21_39_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0623 21:39:43.631587    5800 ops.go:34] apiserver oom_adj: -16
I0623 21:39:43.742720    5800 kubeadm.go:1020] duration metric: took 249.630367ms to wait for elevateKubeSystemPrivileges.
I0623 21:39:43.780725    5800 kubeadm.go:393] StartCluster complete in 4m50.028474322s
I0623 21:39:43.780749    5800 settings.go:142] acquiring lock: {Name:mk5204e051c6faf094374ab16f08b9d85e55f899 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0623 21:39:43.780830    5800 settings.go:150] Updating kubeconfig: /home/aman/.kube/config
I0623 21:39:43.782936    5800 lock.go:35] WriteFile acquiring /home/aman/.kube/config: {Name:mk09eced36d216b175949b6fb79e6410031a5286 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
W0623 21:40:13.785476    5800 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0623 21:40:44.287713    5800 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0623 21:41:14.787157    5800 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0623 21:41:45.286994    5800 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0623 21:42:15.787498    5800 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0623 21:42:45.788365    5800 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
I0623 21:42:45.788423    5800 kapi.go:241] timed out trying to rescale deployment "coredns" in namespace "kube-system" and context "minikube" to 1: timed out waiting for the condition
E0623 21:42:45.788445    5800 start.go:264] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: timed out waiting for the condition
I0623 21:42:45.788600    5800 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0623 21:42:45.788629    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0623 21:42:45.790334    5800 out.go:176] 🔎 Verifying Kubernetes components...
I0623 21:42:45.788886    5800 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0623 21:42:45.789230    5800 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
I0623 21:42:45.790565    5800 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I0623 21:42:45.790610    5800 addons.go:153] Setting addon storage-provisioner=true in "minikube"
W0623 21:42:45.790628    5800 addons.go:165] addon storage-provisioner should already be in state true
I0623 21:42:45.790705    5800 host.go:66] Checking if "minikube" exists ...
I0623 21:42:45.790725    5800 addons.go:65] Setting default-storageclass=true in profile "minikube"
I0623 21:42:45.790767    5800 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0623 21:42:45.791561    5800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0623 21:42:45.791747    5800 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0623 21:42:45.792390    5800 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0623 21:42:45.989063    5800 out.go:176] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0623 21:42:45.989321    5800 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0623 21:42:45.989330    5800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0623 21:42:45.989375    5800 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0623 21:42:46.028303    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0623 21:42:46.028342    5800 api_server.go:51] waiting for apiserver process to appear ...
I0623 21:42:46.028380    5800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0623 21:42:46.111559    5800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39887 SSHKeyPath:/home/aman/.minikube/machines/minikube/id_rsa Username:docker}
I0623 21:42:46.261311    5800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0623 21:42:46.910328    5800 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
I0623 21:42:46.910343    5800 api_server.go:71] duration metric: took 1.121691865s to wait for apiserver process to appear ...
I0623 21:42:46.910354    5800 api_server.go:87] waiting for apiserver healthz status ...
I0623 21:42:46.910364    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:42:51.910658    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:42:52.411617    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:42:57.412422    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:42:57.911244    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:43:02.911700    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:43:03.411054    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:43:08.411463    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:43:08.411505    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:43:13.411928    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:43:13.412017    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
W0623 21:43:16.007551    5800 out.go:241] ❗ Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8443: i/o timeout]
I0623 21:43:16.009044    5800 out.go:176] 🌟 Enabled addons: storage-provisioner
I0623 21:43:16.009111    5800 addons.go:417] enableAddons completed in 30.22031764s
I0623 21:43:18.413026    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:43:18.911813    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:43:23.912683    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:43:24.411376    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:43:29.412187    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:43:29.910932    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:43:34.912323    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:43:35.411871    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:43:40.412462    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:43:40.911548    5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:43:45.912410    5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:43:46.411205    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:43:46.479153    5800 logs.go:274] 1 containers: [bd9f617ca7ea]
I0623 21:43:46.479203    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:43:46.517431    5800 logs.go:274] 1 containers: [4a122731432b]
I0623 21:43:46.517475    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:43:46.554742    5800 logs.go:274] 2 containers: [4ee60235dcac c1766050e2f4]
I0623 21:43:46.554790    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:43:46.590444    5800 logs.go:274] 1 containers: [b86c03b9b121]
I0623 21:43:46.590489    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:43:46.627176    5800 logs.go:274] 1 containers: [4b2557fe68f4]
I0623 21:43:46.627231    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:43:46.662838    5800 logs.go:274] 0 containers: []
W0623 21:43:46.662850    5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:43:46.662892    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:43:46.707931    5800 logs.go:274] 1 containers: [24ee34e5d66e]
I0623 21:43:46.707981    5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:43:46.744535    5800 logs.go:274] 1 containers: [c5095547e6f6]
I0623 21:43:46.744555    5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:43:46.744565    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:43:46.758371    5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:43:46.758382    5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:43:46.852846    5800 logs.go:123] Gathering logs for coredns [c1766050e2f4] ...
I0623 21:43:46.852860    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1766050e2f4"
I0623 21:43:46.891926    5800 logs.go:123] Gathering logs for kube-scheduler [b86c03b9b121] ...
I0623 21:43:46.891942    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86c03b9b121"
I0623 21:43:46.945117    5800 logs.go:123] Gathering logs for kube-controller-manager [c5095547e6f6] ...
I0623 21:43:46.945130    5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5095547e6f6"
I0623 21:43:47.003148    5800 logs.go:123] Gathering logs for storage-provisioner [24ee34e5d66e] ...
I0623 21:43:47.003163 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ee34e5d66e"
I0623 21:43:47.042701 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:43:47.042713 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:43:47.066406 5800 logs.go:123] Gathering logs for container status ...
I0623 21:43:47.066419 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:43:47.103721 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:43:47.103736 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0623 21:43:47.196946 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:43:47.197100 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:43:47.199426 5800 logs.go:123] Gathering logs for kube-apiserver [bd9f617ca7ea] ...
I0623 21:43:47.199437 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f617ca7ea"
I0623 21:43:47.250637 5800 logs.go:123] Gathering logs for etcd [4a122731432b] ...
I0623 21:43:47.250650 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a122731432b"
I0623 21:43:47.299958 5800 logs.go:123] Gathering logs for coredns [4ee60235dcac] ...
I0623 21:43:47.299972 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee60235dcac"
I0623 21:43:47.340575 5800 logs.go:123] Gathering logs for kube-proxy [4b2557fe68f4] ...
I0623 21:43:47.340587 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2557fe68f4"
I0623 21:43:47.383695 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:43:47.383717 5800 out.go:349] isatty.IsTerminal(2) = true
W0623 21:43:47.383818 5800 out.go:241] ❌ Problems detected in kubelet:
W0623 21:43:47.383839 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:43:47.383850 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:43:47.384001 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:43:47.384011 5800 out.go:349] isatty.IsTerminal(2) = true
I0623 21:43:57.385288 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:44:02.386362 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:44:02.411855 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:44:02.498281 5800 logs.go:274] 1 containers: [bd9f617ca7ea]
I0623 21:44:02.498332 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:44:02.539318 5800 logs.go:274] 1 containers: [4a122731432b]
I0623 21:44:02.539366 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:44:02.580790 5800 logs.go:274] 2 containers: [4ee60235dcac c1766050e2f4]
I0623 21:44:02.580826 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:44:02.623354 5800 logs.go:274] 1 containers: [b86c03b9b121]
I0623 21:44:02.623428 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:44:02.663528 5800 logs.go:274] 1 containers: [4b2557fe68f4]
I0623 21:44:02.663590 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:44:02.702581 5800 logs.go:274] 0 containers: []
W0623 21:44:02.702591 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:44:02.702627 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:44:02.738681 5800 logs.go:274] 1 containers: [24ee34e5d66e]
I0623 21:44:02.738736 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:44:02.778607 5800 logs.go:274] 1 containers: [c5095547e6f6]
I0623 21:44:02.778632 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:44:02.778640 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:44:02.791850 5800 logs.go:123] Gathering logs for kube-apiserver [bd9f617ca7ea] ...
I0623 21:44:02.791862 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f617ca7ea"
I0623 21:44:02.843627 5800 logs.go:123] Gathering logs for kube-scheduler [b86c03b9b121] ...
I0623 21:44:02.843638 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86c03b9b121"
I0623 21:44:02.894022 5800 logs.go:123] Gathering logs for storage-provisioner [24ee34e5d66e] ...
I0623 21:44:02.894035 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ee34e5d66e"
I0623 21:44:02.934106 5800 logs.go:123] Gathering logs for kube-controller-manager [c5095547e6f6] ...
I0623 21:44:02.934118 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5095547e6f6"
I0623 21:44:03.029477 5800 logs.go:123] Gathering logs for container status ...
I0623 21:44:03.029494 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:44:03.076379 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:44:03.076394 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0623 21:44:03.157178 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:44:03.157330 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:44:03.159597 5800 logs.go:123] Gathering logs for etcd [4a122731432b] ...
I0623 21:44:03.159605 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a122731432b"
I0623 21:44:03.213041 5800 logs.go:123] Gathering logs for coredns [4ee60235dcac] ...
I0623 21:44:03.213056 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee60235dcac"
I0623 21:44:03.251554 5800 logs.go:123] Gathering logs for coredns [c1766050e2f4] ...
I0623 21:44:03.251570 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1766050e2f4"
I0623 21:44:03.291868 5800 logs.go:123] Gathering logs for kube-proxy [4b2557fe68f4] ...
I0623 21:44:03.291880 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2557fe68f4"
I0623 21:44:03.330818 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:44:03.330834 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:44:03.354625 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:44:03.354639 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:44:03.444686 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:44:03.444703 5800 out.go:349] isatty.IsTerminal(2) = true
W0623 21:44:03.444791 5800 out.go:241] ❌ Problems detected in kubelet:
W0623 21:44:03.444814 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:44:03.444825 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:44:03.444839 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:44:03.444847 5800 out.go:349] isatty.IsTerminal(2) = true
I0623 21:44:13.446212 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:44:18.446827 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:44:18.911751 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:44:18.980873 5800 logs.go:274] 1 containers: [bd9f617ca7ea]
I0623 21:44:18.980941 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:44:19.019491 5800 logs.go:274] 1 containers: [4a122731432b]
I0623 21:44:19.019542 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:44:19.060744 5800 logs.go:274] 2 containers: [4ee60235dcac c1766050e2f4]
I0623 21:44:19.060794 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:44:19.097606 5800 logs.go:274] 1 containers: [b86c03b9b121]
I0623 21:44:19.097682 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:44:19.136734 5800 logs.go:274] 1 containers: [4b2557fe68f4]
I0623 21:44:19.136784 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:44:19.174334 5800 logs.go:274] 0 containers: []
W0623 21:44:19.174346 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:44:19.174382 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:44:19.210871 5800 logs.go:274] 1 containers: [24ee34e5d66e]
I0623 21:44:19.210919 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:44:19.247389 5800 logs.go:274] 1 containers: [c5095547e6f6]
I0623 21:44:19.247408 5800 logs.go:123] Gathering logs for coredns [c1766050e2f4] ...
I0623 21:44:19.247430 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1766050e2f4"
I0623 21:44:19.288650 5800 logs.go:123] Gathering logs for kube-proxy [4b2557fe68f4] ...
I0623 21:44:19.288664 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2557fe68f4"
I0623 21:44:19.327357 5800 logs.go:123] Gathering logs for storage-provisioner [24ee34e5d66e] ...
I0623 21:44:19.327369 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ee34e5d66e"
I0623 21:44:19.376764 5800 logs.go:123] Gathering logs for kube-controller-manager [c5095547e6f6] ...
I0623 21:44:19.376778 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5095547e6f6"
I0623 21:44:19.428764 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:44:19.428778 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:44:19.444474 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:44:19.444489 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:44:19.532009 5800 logs.go:123] Gathering logs for kube-apiserver [bd9f617ca7ea] ...
I0623 21:44:19.532021 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f617ca7ea"
I0623 21:44:19.588061 5800 logs.go:123] Gathering logs for coredns [4ee60235dcac] ...
I0623 21:44:19.588074 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee60235dcac"
I0623 21:44:19.626043 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:44:19.626067 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:44:19.649142 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:44:19.649155 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0623 21:44:19.726886 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:44:19.727040 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:44:19.729434 5800 logs.go:123] Gathering logs for etcd [4a122731432b] ...
I0623 21:44:19.729445 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a122731432b"
I0623 21:44:19.787138 5800 logs.go:123] Gathering logs for kube-scheduler [b86c03b9b121] ...
I0623 21:44:19.787151 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86c03b9b121"
I0623 21:44:19.840646 5800 logs.go:123] Gathering logs for container status ...
I0623 21:44:19.840660 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:44:19.874961 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:44:19.874979 5800 out.go:349] isatty.IsTerminal(2) = true
W0623 21:44:19.875053 5800 out.go:241] ❌ Problems detected in kubelet:
W0623 21:44:19.875070 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:44:19.875081 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:44:19.875089 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:44:19.875097 5800 out.go:349] isatty.IsTerminal(2) = true
I0623 21:44:29.876298 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:44:34.877567 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:44:34.911044 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:44:34.995280 5800 logs.go:274] 1 containers: [bd9f617ca7ea]
I0623 21:44:34.995325 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:44:35.036523 5800 logs.go:274] 1 containers: [4a122731432b]
I0623 21:44:35.036577 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:44:35.073301 5800 logs.go:274] 2 containers: [4ee60235dcac c1766050e2f4]
I0623 21:44:35.073345 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:44:35.113531 5800 logs.go:274] 1 containers: [b86c03b9b121]
I0623 21:44:35.113575 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:44:35.150054 5800 logs.go:274] 1 containers: [4b2557fe68f4]
I0623 21:44:35.150097 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:44:35.188235 5800 logs.go:274] 0 containers: []
W0623 21:44:35.188245 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:44:35.188302 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:44:35.224634 5800 logs.go:274] 1 containers: [24ee34e5d66e]
I0623 21:44:35.224680 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:44:35.263695 5800 logs.go:274] 1 containers: [c5095547e6f6]
I0623 21:44:35.263718 5800 logs.go:123] Gathering logs for coredns [c1766050e2f4] ...
I0623 21:44:35.263724 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1766050e2f4"
I0623 21:44:35.307297 5800 logs.go:123] Gathering logs for kube-proxy [4b2557fe68f4] ...
I0623 21:44:35.307312 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2557fe68f4"
I0623 21:44:35.347831 5800 logs.go:123] Gathering logs for storage-provisioner [24ee34e5d66e] ...
I0623 21:44:35.347847 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ee34e5d66e"
I0623 21:44:35.389080 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:44:35.389092 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:44:35.402269 5800 logs.go:123] Gathering logs for etcd [4a122731432b] ...
I0623 21:44:35.402292 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a122731432b"
I0623 21:44:35.462379 5800 logs.go:123] Gathering logs for coredns [4ee60235dcac] ...
I0623 21:44:35.462392 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee60235dcac"
I0623 21:44:35.502591 5800 logs.go:123] Gathering logs for kube-scheduler [b86c03b9b121] ...
I0623 21:44:35.502604 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86c03b9b121"
I0623 21:44:35.557498 5800 logs.go:123] Gathering logs for kube-controller-manager [c5095547e6f6] ...
I0623 21:44:35.557512 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5095547e6f6"
I0623 21:44:35.616761 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:44:35.616776 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:44:35.640560 5800 logs.go:123] Gathering logs for container status ...
I0623 21:44:35.640572 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:44:35.678527 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:44:35.678540 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0623 21:44:35.755831 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:44:35.755984 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:44:35.758256 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:44:35.758264 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:44:35.849184 5800 logs.go:123] Gathering logs for kube-apiserver [bd9f617ca7ea] ...
I0623 21:44:35.849196 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f617ca7ea"
I0623 21:44:35.900418 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:44:35.900436 5800 out.go:349] isatty.IsTerminal(2) = true
W0623 21:44:35.900513 5800 out.go:241] ❌ Problems detected in kubelet:
W0623 21:44:35.900530 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:44:35.900542 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:44:35.900565 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:44:35.900574 5800 out.go:349] isatty.IsTerminal(2) = true
I0623 21:44:45.901619 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:44:50.903128 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:44:50.911690 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:44:50.965383 5800 logs.go:274] 1 containers: [bd9f617ca7ea]
I0623 21:44:50.965431 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:44:51.012302 5800 logs.go:274] 1 containers: [4a122731432b]
I0623 21:44:51.012353 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:44:51.054788 5800 logs.go:274] 2 containers: [4ee60235dcac c1766050e2f4]
I0623 21:44:51.054827 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:44:51.091024 5800 logs.go:274] 1 containers: [b86c03b9b121]
I0623 21:44:51.091066 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:44:51.126820 5800 logs.go:274] 1 containers: [4b2557fe68f4]
I0623 21:44:51.126879 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:44:51.163458 5800 logs.go:274] 0 containers: []
W0623 21:44:51.163471 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:44:51.163512 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:44:51.204033 5800 logs.go:274] 1 containers: [24ee34e5d66e]
I0623 21:44:51.204072 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:44:51.240597 5800 logs.go:274] 1 containers: [c5095547e6f6]
I0623 21:44:51.240617 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:44:51.240628 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:44:51.337443 5800 logs.go:123] Gathering logs for coredns [c1766050e2f4] ...
I0623 21:44:51.337455 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1766050e2f4"
I0623 21:44:51.376152 5800 logs.go:123] Gathering logs for kube-proxy [4b2557fe68f4] ...
I0623 21:44:51.376166 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2557fe68f4"
I0623 21:44:51.417204 5800 logs.go:123] Gathering logs for kube-controller-manager [c5095547e6f6] ...
I0623 21:44:51.417217 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5095547e6f6"
I0623 21:44:51.475081 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:44:51.475095 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:44:51.499830 5800 logs.go:123] Gathering logs for container status ...
I0623 21:44:51.499845 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:44:51.536264 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:44:51.536279 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0623 21:44:51.619798 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:44:51.619953 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:44:51.622356 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:44:51.622366 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:44:51.637113 5800 logs.go:123] Gathering logs for coredns [4ee60235dcac] ...
I0623 21:44:51.637124 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee60235dcac"
I0623 21:44:51.677526 5800 logs.go:123] Gathering logs for kube-scheduler [b86c03b9b121] ...
I0623 21:44:51.677538 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86c03b9b121"
I0623 21:44:51.741437 5800 logs.go:123] Gathering logs for storage-provisioner [24ee34e5d66e] ...
I0623 21:44:51.741450 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ee34e5d66e"
I0623 21:44:51.784178 5800 logs.go:123] Gathering logs for kube-apiserver [bd9f617ca7ea] ...
I0623 21:44:51.784190 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f617ca7ea"
I0623 21:44:51.866181 5800 logs.go:123] Gathering logs for etcd [4a122731432b] ...
I0623 21:44:51.866193 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a122731432b"
I0623 21:44:51.915965 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:44:51.915984 5800 out.go:349] isatty.IsTerminal(2) = true
W0623 21:44:51.916061 5800 out.go:241] ❌ Problems detected in kubelet:
W0623 21:44:51.916079 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:44:51.916093 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:44:51.916103 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:44:51.916111 5800 out.go:349] isatty.IsTerminal(2) = true
I0623 21:45:01.917843 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:45:06.918125 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:45:07.412010 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:45:07.487337 5800 logs.go:274] 1 containers: [bd9f617ca7ea]
I0623 21:45:07.487387 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:45:07.529026 5800 logs.go:274] 1 containers: [4a122731432b]
I0623 21:45:07.529071 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:45:07.565775 5800 logs.go:274] 2 containers: [4ee60235dcac c1766050e2f4]
I0623 21:45:07.565827 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:45:07.602238 5800 logs.go:274] 1 containers: [b86c03b9b121]
I0623 21:45:07.602300 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:45:07.639011 5800 logs.go:274] 1 containers: [4b2557fe68f4]
I0623 21:45:07.639063 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:45:07.681321 5800 logs.go:274] 0 containers: []
W0623 21:45:07.681332 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:45:07.681370 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:45:07.718517 5800 logs.go:274] 1 containers: [24ee34e5d66e]
I0623 21:45:07.718577 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:45:07.755247 5800 logs.go:274] 1 containers: [c5095547e6f6]
I0623 21:45:07.755268 5800 logs.go:123] Gathering logs for kube-apiserver [bd9f617ca7ea] ...
I0623 21:45:07.755276 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f617ca7ea"
I0623 21:45:07.799576 5800 logs.go:123] Gathering logs for coredns [c1766050e2f4] ...
I0623 21:45:07.799587 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1766050e2f4"
I0623 21:45:07.850949 5800 logs.go:123] Gathering logs for storage-provisioner [24ee34e5d66e] ...
I0623 21:45:07.850962 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ee34e5d66e"
I0623 21:45:07.894190 5800 logs.go:123] Gathering logs for container status ...
I0623 21:45:07.894203 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:45:07.930884 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:45:07.930898 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0623 21:45:08.029511 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:45:08.029708 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:45:08.032215 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:45:08.032227 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:45:08.051240 5800 logs.go:123] Gathering logs for coredns [4ee60235dcac] ...
I0623 21:45:08.051253 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee60235dcac"
I0623 21:45:08.095967 5800 logs.go:123] Gathering logs for kube-scheduler [b86c03b9b121] ...
I0623 21:45:08.095979 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86c03b9b121"
I0623 21:45:08.154319 5800 logs.go:123] Gathering logs for kube-proxy [4b2557fe68f4] ...
I0623 21:45:08.154344 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2557fe68f4"
I0623 21:45:08.198223 5800 logs.go:123] Gathering logs for kube-controller-manager [c5095547e6f6] ...
I0623 21:45:08.198238 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5095547e6f6"
I0623 21:45:08.255371 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:45:08.255386 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:45:08.281078 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:45:08.281090 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:45:08.368558 5800 logs.go:123] Gathering logs for etcd [4a122731432b] ...
I0623 21:45:08.368572 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a122731432b"
I0623 21:45:08.426053 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:45:08.426074 5800 out.go:349] isatty.IsTerminal(2) = true
W0623 21:45:08.426149 5800 out.go:241] ❌ Problems detected in kubelet:
W0623 21:45:08.426166 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:45:08.426188 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:45:08.426225 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:45:08.426234 5800 out.go:349] isatty.IsTerminal(2) = true
I0623 21:45:18.427376 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:45:23.428279 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:45:23.911424 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:45:24.030881 5800 logs.go:274] 1 containers: [bd9f617ca7ea]
I0623 21:45:24.030979 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:45:24.091104 5800 logs.go:274] 1 containers: [4a122731432b]
I0623 21:45:24.091149 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:45:24.135427 5800 logs.go:274] 2 containers: [4ee60235dcac c1766050e2f4]
I0623 21:45:24.135478 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:45:24.174857 5800 logs.go:274] 1 containers: [b86c03b9b121]
I0623 21:45:24.174919 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:45:24.218351 5800 logs.go:274] 1 containers: [4b2557fe68f4]
I0623 21:45:24.218404 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:45:24.257485 5800 logs.go:274] 0 containers: []
W0623 21:45:24.257497 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:45:24.257552 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:45:24.294565 5800 logs.go:274] 1 containers: [24ee34e5d66e]
I0623 21:45:24.294608 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:45:24.331234 5800 logs.go:274] 1 containers: [c5095547e6f6]
I0623 21:45:24.331254 5800 logs.go:123] Gathering logs for coredns [4ee60235dcac] ...
I0623 21:45:24.331261 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee60235dcac"
I0623 21:45:24.370057 5800 logs.go:123] Gathering logs for kube-scheduler [b86c03b9b121] ...
I0623 21:45:24.370071 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86c03b9b121"
I0623 21:45:24.426746 5800 logs.go:123] Gathering logs for storage-provisioner [24ee34e5d66e] ...
I0623 21:45:24.426759 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ee34e5d66e"
I0623 21:45:24.468897 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:45:24.468912 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:45:24.491197 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:45:24.491209 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0623 21:45:24.567461 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:45:24.567615 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:45:24.569959 5800 logs.go:123] Gathering logs for kube-apiserver [bd9f617ca7ea] ...
I0623 21:45:24.569967 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f617ca7ea"
I0623 21:45:24.614198 5800 logs.go:123] Gathering logs for etcd [4a122731432b] ...
I0623 21:45:24.614217 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a122731432b"
I0623 21:45:24.660402 5800 logs.go:123] Gathering logs for kube-proxy [4b2557fe68f4] ...
I0623 21:45:24.660416 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2557fe68f4"
I0623 21:45:24.699733 5800 logs.go:123] Gathering logs for kube-controller-manager [c5095547e6f6] ...
I0623 21:45:24.699747 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5095547e6f6"
I0623 21:45:24.752214 5800 logs.go:123] Gathering logs for container status ...
I0623 21:45:24.752230 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:45:24.785425 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:45:24.785437 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:45:24.798464 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:45:24.798478 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:45:24.890701 5800 logs.go:123] Gathering logs for coredns [c1766050e2f4] ...
I0623 21:45:24.890724 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1766050e2f4"
I0623 21:45:24.943424 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:45:24.943449 5800 out.go:349] isatty.IsTerminal(2) = true
W0623 21:45:24.943545 5800 out.go:241] ❌ Problems detected in kubelet:
W0623 21:45:24.943569 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:45:24.943583 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:45:24.943593 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:45:24.943601 5800 out.go:349] isatty.IsTerminal(2) = true
I0623 21:45:34.944212 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:45:39.945361 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:45:40.410990 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:45:40.467494 5800 logs.go:274] 1 containers: [bd9f617ca7ea]
I0623 21:45:40.467551 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:45:40.510691 5800 logs.go:274] 1 containers: [4a122731432b]
I0623 21:45:40.510743 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:45:40.547896 5800 logs.go:274] 2 containers: [4ee60235dcac c1766050e2f4]
I0623 21:45:40.547946 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:45:40.589804 5800 logs.go:274] 1 containers: [b86c03b9b121]
I0623 21:45:40.589850 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:45:40.629990 5800 logs.go:274] 1 containers: [4b2557fe68f4]
I0623 21:45:40.630037 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:45:40.667590 5800 logs.go:274] 0 containers: []
W0623 21:45:40.667602 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:45:40.667641 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:45:40.705358 5800 logs.go:274] 1 containers: [24ee34e5d66e]
I0623 21:45:40.705407 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:45:40.746236 5800 logs.go:274] 1 containers: [c5095547e6f6]
I0623 21:45:40.746254 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:45:40.746263 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0623 21:45:40.825463 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:45:40.825615 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:45:40.827928 5800 logs.go:123] Gathering logs for coredns [4ee60235dcac] ...
I0623 21:45:40.827936 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee60235dcac"
I0623 21:45:40.871116 5800 logs.go:123] Gathering logs for kube-proxy [4b2557fe68f4] ...
I0623 21:45:40.871131 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2557fe68f4"
I0623 21:45:40.922834 5800 logs.go:123] Gathering logs for storage-provisioner [24ee34e5d66e] ...
I0623 21:45:40.922847 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ee34e5d66e"
I0623 21:45:40.967060 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:45:40.967075 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:45:40.996282 5800 logs.go:123] Gathering logs for container status ...
I0623 21:45:40.996295 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:45:41.060513 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:45:41.060525 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:45:41.075716 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:45:41.075731 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:45:41.169492 5800 logs.go:123] Gathering logs for kube-apiserver [bd9f617ca7ea] ...
I0623 21:45:41.169511 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f617ca7ea"
I0623 21:45:41.224428 5800 logs.go:123] Gathering logs for etcd [4a122731432b] ...
I0623 21:45:41.224441 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a122731432b"
I0623 21:45:41.276432 5800 logs.go:123] Gathering logs for coredns [c1766050e2f4] ...
I0623 21:45:41.276444 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1766050e2f4"
I0623 21:45:41.330192 5800 logs.go:123] Gathering logs for kube-scheduler [b86c03b9b121] ...
I0623 21:45:41.330209 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86c03b9b121"
I0623 21:45:41.385526 5800 logs.go:123] Gathering logs for kube-controller-manager [c5095547e6f6] ...
I0623 21:45:41.385540 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5095547e6f6"
I0623 21:45:41.443824 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:45:41.443841 5800 out.go:349] isatty.IsTerminal(2) = true
W0623 21:45:41.443912 5800 out.go:241] ❌ Problems detected in kubelet:
W0623 21:45:41.443930 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:45:41.443941 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:45:41.443949 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:45:41.443956 5800 out.go:349] isatty.IsTerminal(2) = true
I0623 21:45:51.445095 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:45:56.446410 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:45:56.911173 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:45:56.989854 5800 logs.go:274] 1 containers: [bd9f617ca7ea]
I0623 21:45:56.989906 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:45:57.038520 5800 logs.go:274] 1 containers: [4a122731432b]
I0623 21:45:57.038557 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:45:57.077944 5800 logs.go:274] 2 containers: [4ee60235dcac c1766050e2f4]
I0623 21:45:57.077995 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:45:57.119164 5800 logs.go:274] 1 containers: [b86c03b9b121]
I0623 21:45:57.119224 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:45:57.161067 5800 logs.go:274] 1 containers: [4b2557fe68f4]
I0623 21:45:57.161122 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:45:57.199325 5800 logs.go:274] 0 containers: []
W0623 21:45:57.199338 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:45:57.199379 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:45:57.239092 5800 logs.go:274] 1 containers: [24ee34e5d66e]
I0623 21:45:57.239143 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:45:57.277166 5800 logs.go:274] 1 containers: [c5095547e6f6]
I0623 21:45:57.277188 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:45:57.277197 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0623 21:45:57.363195 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:45:57.363363 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:45:57.365914 5800 logs.go:123] Gathering logs for kube-apiserver [bd9f617ca7ea] ...
I0623 21:45:57.365928 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f617ca7ea"
I0623 21:45:57.418587 5800 logs.go:123] Gathering logs for kube-scheduler [b86c03b9b121] ...
I0623 21:45:57.418601 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86c03b9b121"
I0623 21:45:57.490718 5800 logs.go:123] Gathering logs for container status ...
I0623 21:45:57.490732 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:45:57.530384 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:45:57.530396 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:45:57.543900 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:45:57.543914 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:45:57.630070 5800 logs.go:123] Gathering logs for etcd [4a122731432b] ...
I0623 21:45:57.630090 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a122731432b"
I0623 21:45:57.681194 5800 logs.go:123] Gathering logs for coredns [4ee60235dcac] ...
I0623 21:45:57.681207 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee60235dcac"
I0623 21:45:57.724937 5800 logs.go:123] Gathering logs for coredns [c1766050e2f4] ...
I0623 21:45:57.724950 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1766050e2f4"
I0623 21:45:57.771893 5800 logs.go:123] Gathering logs for kube-proxy [4b2557fe68f4] ...
I0623 21:45:57.771905 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2557fe68f4"
I0623 21:45:57.822438 5800 logs.go:123] Gathering logs for storage-provisioner [24ee34e5d66e] ...
I0623 21:45:57.822450 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ee34e5d66e"
I0623 21:45:57.863070 5800 logs.go:123] Gathering logs for kube-controller-manager [c5095547e6f6] ...
I0623 21:45:57.863084 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5095547e6f6"
I0623 21:45:57.918892 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:45:57.918908 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:45:57.949626 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:45:57.949645 5800 out.go:349] isatty.IsTerminal(2) = true
W0623 21:45:57.949722 5800 out.go:241] ❌ Problems detected in kubelet:
W0623 21:45:57.949742 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:45:57.949769 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:45:57.949788 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:45:57.949796 5800 out.go:349] isatty.IsTerminal(2) = true
I0623 21:46:07.951629 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:46:12.953009 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:46:13.411032 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:46:13.479947 5800 logs.go:274] 1 containers: [bd9f617ca7ea]
I0623 21:46:13.479999 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:46:13.523758 5800 logs.go:274] 1 containers: [4a122731432b]
I0623 21:46:13.523801 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:46:13.563980 5800 logs.go:274] 2 containers: [4ee60235dcac c1766050e2f4]
I0623 21:46:13.564028 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:46:13.602914 5800 logs.go:274] 1 containers: [b86c03b9b121]
I0623 21:46:13.602979 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:46:13.645856 5800 logs.go:274] 1 containers: [4b2557fe68f4]
I0623 21:46:13.645897 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:46:13.686118 5800 logs.go:274] 0 containers: []
W0623 21:46:13.686132 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:46:13.686172 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:46:13.724804 5800 logs.go:274] 1 containers: [24ee34e5d66e]
I0623 21:46:13.724852 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:46:13.783521 5800 logs.go:274] 1 containers: [c5095547e6f6]
I0623 21:46:13.783548 5800 logs.go:123] Gathering logs for dmesg ...
I0623 21:46:13.783558 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0623 21:46:13.798901 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:46:13.798916 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:46:14.042411 5800 logs.go:123] Gathering logs for etcd [4a122731432b] ...
I0623 21:46:14.042423 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a122731432b"
I0623 21:46:14.089932 5800 logs.go:123] Gathering logs for coredns [4ee60235dcac] ...
I0623 21:46:14.089945 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee60235dcac"
I0623 21:46:14.141448 5800 logs.go:123] Gathering logs for coredns [c1766050e2f4] ...
I0623 21:46:14.141464 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1766050e2f4"
I0623 21:46:14.186630 5800 logs.go:123] Gathering logs for kube-scheduler [b86c03b9b121] ...
I0623 21:46:14.186646 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86c03b9b121"
I0623 21:46:14.247596 5800 logs.go:123] Gathering logs for kube-proxy [4b2557fe68f4] ...
I0623 21:46:14.247609 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2557fe68f4"
I0623 21:46:14.290311 5800 logs.go:123] Gathering logs for storage-provisioner [24ee34e5d66e] ...
I0623 21:46:14.290326 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ee34e5d66e"
I0623 21:46:14.334153 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:46:14.334169 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:46:14.360812 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:46:14.360824 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0623 21:46:14.448332 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:46:14.448486 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:46:14.450933 5800 logs.go:123] Gathering logs for kube-apiserver [bd9f617ca7ea] ...
I0623 21:46:14.450942 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f617ca7ea"
I0623 21:46:14.500557 5800 logs.go:123] Gathering logs for kube-controller-manager [c5095547e6f6] ...
I0623 21:46:14.500569 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5095547e6f6"
I0623 21:46:14.575430 5800 logs.go:123] Gathering logs for container status ...
I0623 21:46:14.575443 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:46:14.614331 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:46:14.614352 5800 out.go:349] isatty.IsTerminal(2) = true
W0623 21:46:14.614462 5800 out.go:241] ❌ Problems detected in kubelet:
W0623 21:46:14.614488 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
W0623 21:46:14.614502 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
I0623 21:46:14.614513 5800 out.go:310] Setting ErrFile to fd 2...
I0623 21:46:14.614523 5800 out.go:349] isatty.IsTerminal(2) = true
I0623 21:46:24.615792 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0623 21:46:29.616373 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0623 21:46:29.911022 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0623 21:46:30.038412 5800 logs.go:274] 1 containers: [bd9f617ca7ea]
I0623 21:46:30.038469 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0623 21:46:30.088322 5800 logs.go:274] 1 containers: [4a122731432b]
I0623 21:46:30.088369 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0623 21:46:30.126094 5800 logs.go:274] 2 containers: [4ee60235dcac c1766050e2f4]
I0623 21:46:30.126147 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0623 21:46:30.175991 5800 logs.go:274] 1 containers: [b86c03b9b121]
I0623 21:46:30.176042 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0623 21:46:30.215672 5800 logs.go:274] 1 containers: [4b2557fe68f4]
I0623 21:46:30.215724 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0623 21:46:30.264183 5800 logs.go:274] 0 containers: []
W0623 21:46:30.264195 5800 logs.go:276] No container was found matching "kubernetes-dashboard"
I0623 21:46:30.264230 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0623 21:46:30.307208 5800 logs.go:274] 1 containers: [24ee34e5d66e]
I0623 21:46:30.307256 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0623 21:46:30.352473 5800 logs.go:274] 1 containers: [c5095547e6f6]
I0623 21:46:30.352492 5800 logs.go:123] Gathering logs for kube-controller-manager [c5095547e6f6] ...
I0623 21:46:30.352502 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5095547e6f6"
I0623 21:46:30.425159 5800 logs.go:123] Gathering logs for Docker ...
I0623 21:46:30.425173 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0623 21:46:30.450385 5800 logs.go:123] Gathering logs for describe nodes ...
I0623 21:46:30.450397 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0623 21:46:30.543971 5800 logs.go:123] Gathering logs for kube-apiserver [bd9f617ca7ea] ...
I0623 21:46:30.543986 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f617ca7ea"
I0623 21:46:30.607569 5800 logs.go:123] Gathering logs for coredns [4ee60235dcac] ...
I0623 21:46:30.607585 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee60235dcac"
I0623 21:46:30.652029 5800 logs.go:123] Gathering logs for coredns [c1766050e2f4] ...
I0623 21:46:30.652044 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1766050e2f4"
I0623 21:46:30.703664 5800 logs.go:123] Gathering logs for kube-scheduler [b86c03b9b121] ...
I0623 21:46:30.703678 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86c03b9b121"
I0623 21:46:30.766546 5800 logs.go:123] Gathering logs for storage-provisioner [24ee34e5d66e] ...
I0623 21:46:30.766562 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ee34e5d66e"
I0623 21:46:30.813907 5800 logs.go:123] Gathering logs for container status ...
I0623 21:46:30.813935 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0623 21:46:30.853277 5800 logs.go:123] Gathering logs for kubelet ...
I0623 21:46:30.853291 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" W0623 21:46:30.940237 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object W0623 21:46:30.940425 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object I0623 21:46:30.942776 5800 logs.go:123] Gathering logs for dmesg ... I0623 21:46:30.942783 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0623 21:46:30.958950 5800 logs.go:123] Gathering logs for etcd [4a122731432b] ... I0623 21:46:30.958963 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a122731432b" I0623 21:46:31.020459 5800 logs.go:123] Gathering logs for kube-proxy [4b2557fe68f4] ... I0623 21:46:31.020475 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2557fe68f4" I0623 21:46:31.065890 5800 out.go:310] Setting ErrFile to fd 2... 
I0623 21:46:31.065921 5800 out.go:349] isatty.IsTerminal(2) = true W0623 21:46:31.066070 5800 out.go:241] โŒ Problems detected in kubelet: W0623 21:46:31.066552 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object W0623 21:46:31.066576 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object I0623 21:46:31.066589 5800 out.go:310] Setting ErrFile to fd 2... I0623 21:46:31.066602 5800 out.go:349] isatty.IsTerminal(2) = true I0623 21:46:41.067445 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0623 21:46:46.069095 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:46:46.411854 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0623 21:46:46.551381 5800 logs.go:274] 1 containers: [bd9f617ca7ea] I0623 21:46:46.551431 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0623 21:46:46.602077 5800 logs.go:274] 1 containers: [4a122731432b] I0623 21:46:46.602146 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0623 21:46:46.643605 5800 logs.go:274] 2 containers: [4ee60235dcac c1766050e2f4] I0623 21:46:46.643660 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0623 21:46:46.682042 5800 logs.go:274] 1 containers: [b86c03b9b121] I0623 21:46:46.682090 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0623 21:46:46.723585 5800 logs.go:274] 1 containers: [4b2557fe68f4] I0623 21:46:46.723636 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0623 21:46:46.762008 5800 logs.go:274] 0 containers: [] W0623 21:46:46.762020 5800 logs.go:276] No container was found matching "kubernetes-dashboard" I0623 21:46:46.762052 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0623 21:46:46.799173 5800 logs.go:274] 1 containers: [24ee34e5d66e] I0623 21:46:46.799215 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0623 21:46:46.839581 5800 logs.go:274] 1 containers: [c5095547e6f6] I0623 21:46:46.839600 5800 logs.go:123] Gathering logs for coredns [c1766050e2f4] ... 
I0623 21:46:46.839614 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1766050e2f4" I0623 21:46:46.878841 5800 logs.go:123] Gathering logs for kube-scheduler [b86c03b9b121] ... I0623 21:46:46.878857 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86c03b9b121" I0623 21:46:46.932178 5800 logs.go:123] Gathering logs for storage-provisioner [24ee34e5d66e] ... I0623 21:46:46.932192 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ee34e5d66e" I0623 21:46:46.982273 5800 logs.go:123] Gathering logs for kubelet ... I0623 21:46:46.982286 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" W0623 21:46:47.068232 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object W0623 21:46:47.068421 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object I0623 21:46:47.070837 5800 logs.go:123] Gathering logs for describe nodes ... I0623 21:46:47.070845 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0623 21:46:47.214714 5800 logs.go:123] Gathering logs for etcd [4a122731432b] ... 
I0623 21:46:47.214732 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a122731432b" I0623 21:46:47.278888 5800 logs.go:123] Gathering logs for coredns [4ee60235dcac] ... I0623 21:46:47.278900 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee60235dcac" I0623 21:46:47.329643 5800 logs.go:123] Gathering logs for kube-proxy [4b2557fe68f4] ... I0623 21:46:47.329660 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2557fe68f4" I0623 21:46:47.378244 5800 logs.go:123] Gathering logs for kube-controller-manager [c5095547e6f6] ... I0623 21:46:47.378258 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5095547e6f6" I0623 21:46:47.454367 5800 logs.go:123] Gathering logs for Docker ... I0623 21:46:47.454382 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0623 21:46:47.480373 5800 logs.go:123] Gathering logs for container status ... I0623 21:46:47.480390 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0623 21:46:47.525079 5800 logs.go:123] Gathering logs for dmesg ... I0623 21:46:47.525092 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0623 21:46:47.541142 5800 logs.go:123] Gathering logs for kube-apiserver [bd9f617ca7ea] ... I0623 21:46:47.541158 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f617ca7ea" I0623 21:46:47.599464 5800 out.go:310] Setting ErrFile to fd 2... 
I0623 21:46:47.599485 5800 out.go:349] isatty.IsTerminal(2) = true W0623 21:46:47.599583 5800 out.go:241] โŒ Problems detected in kubelet: W0623 21:46:47.599672 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object W0623 21:46:47.599724 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object I0623 21:46:47.599745 5800 out.go:310] Setting ErrFile to fd 2... I0623 21:46:47.599756 5800 out.go:349] isatty.IsTerminal(2) = true I0623 21:46:57.602442 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0623 21:47:02.604532 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:47:02.604765 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0623 21:47:02.694727 5800 logs.go:274] 1 containers: [bd9f617ca7ea] I0623 21:47:02.694783 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0623 21:47:02.732404 5800 logs.go:274] 1 containers: [4a122731432b] I0623 21:47:02.732443 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0623 21:47:02.775175 5800 logs.go:274] 2 containers: [4ee60235dcac c1766050e2f4] I0623 21:47:02.775223 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0623 21:47:02.813521 5800 logs.go:274] 1 containers: [b86c03b9b121] I0623 21:47:02.813566 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0623 21:47:02.851348 5800 logs.go:274] 1 containers: [4b2557fe68f4] I0623 21:47:02.851396 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0623 21:47:02.895031 5800 logs.go:274] 0 containers: [] W0623 21:47:02.895044 5800 logs.go:276] No container was found matching "kubernetes-dashboard" I0623 21:47:02.895086 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0623 21:47:02.934724 5800 logs.go:274] 1 containers: [24ee34e5d66e] I0623 21:47:02.934771 5800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0623 21:47:03.009175 5800 logs.go:274] 1 containers: [c5095547e6f6] I0623 21:47:03.009195 5800 logs.go:123] Gathering logs for kube-controller-manager [c5095547e6f6] ... 
I0623 21:47:03.009204 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5095547e6f6" I0623 21:47:03.076013 5800 logs.go:123] Gathering logs for container status ... I0623 21:47:03.076026 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0623 21:47:03.116114 5800 logs.go:123] Gathering logs for describe nodes ... I0623 21:47:03.116132 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0623 21:47:03.211830 5800 logs.go:123] Gathering logs for kube-apiserver [bd9f617ca7ea] ... I0623 21:47:03.211842 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f617ca7ea" I0623 21:47:03.265078 5800 logs.go:123] Gathering logs for etcd [4a122731432b] ... I0623 21:47:03.265091 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a122731432b" I0623 21:47:03.318721 5800 logs.go:123] Gathering logs for coredns [4ee60235dcac] ... I0623 21:47:03.318736 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee60235dcac" I0623 21:47:03.360553 5800 logs.go:123] Gathering logs for kube-scheduler [b86c03b9b121] ... I0623 21:47:03.360568 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86c03b9b121" I0623 21:47:03.417767 5800 logs.go:123] Gathering logs for kube-proxy [4b2557fe68f4] ... I0623 21:47:03.417781 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2557fe68f4" I0623 21:47:03.462640 5800 logs.go:123] Gathering logs for kubelet ... 
I0623 21:47:03.462654 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" W0623 21:47:03.546111 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object W0623 21:47:03.546360 5800 logs.go:138] Found kubelet problem: Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object I0623 21:47:03.549438 5800 logs.go:123] Gathering logs for dmesg ... I0623 21:47:03.549461 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0623 21:47:03.569545 5800 logs.go:123] Gathering logs for coredns [c1766050e2f4] ... I0623 21:47:03.569566 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1766050e2f4" I0623 21:47:03.623771 5800 logs.go:123] Gathering logs for storage-provisioner [24ee34e5d66e] ... I0623 21:47:03.623785 5800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ee34e5d66e" I0623 21:47:03.665930 5800 logs.go:123] Gathering logs for Docker ... I0623 21:47:03.665944 5800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0623 21:47:03.690616 5800 out.go:310] Setting ErrFile to fd 2... 
I0623 21:47:03.690639 5800 out.go:349] isatty.IsTerminal(2) = true W0623 21:47:03.690738 5800 out.go:241] โŒ Problems detected in kubelet: W0623 21:47:03.690757 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774 7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object W0623 21:47:03.690770 5800 out.go:241] Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110 7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object I0623 21:47:03.690792 5800 out.go:310] Setting ErrFile to fd 2... I0623 21:47:03.690801 5800 out.go:349] isatty.IsTerminal(2) = true I0623 21:47:13.692394 5800 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0623 21:47:18.694813 5800 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0623 21:47:18.696826 5800 out.go:176] W0623 21:47:18.697234 5800 out.go:241] โŒ Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: timed out waiting for the condition W0623 21:47:18.697317 5800 out.go:241] W0623 21:47:18.701275 5800 out.go:241] โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ โ”‚ โ”‚ ๐Ÿ˜ฟ If the above advice does not help, please let us know: โ”‚ โ”‚ ๐Ÿ‘‰ https://github.com/kubernetes/minikube/issues/new/choose โ”‚ โ”‚ โ”‚ โ”‚ Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. โ”‚ โ”‚ โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ * * ==> Docker <== * -- Logs begin at Thu 2022-06-23 16:04:46 UTC, end at Thu 2022-06-23 16:26:10 UTC. -- Jun 23 16:04:46 minikube systemd[1]: Starting Docker Application Container Engine... 
Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.025150831Z" level=info msg="Starting up" Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.027018231Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.027147012Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.027211356Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.027261902Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.035325129Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.035362462Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.035383298Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.035397230Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.043389379Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.050486323Z" level=info msg="Loading containers: start." 
Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.385113182Z" level=info msg="Removing stale sandbox 4c85cb74b831fe7c6d08c846978deb76c8264f86153379c0bf3544d3e9487016 (d1ba496e720f947aa6356f252f2e7340c690e249db1337783a331caa3d490a47)" Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.389484406Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint eae9cd3816b6e3171d2d3859d55ed816b118adabaff8c00f9399693a16c95d0e 5d256c6cd65ab75f8c1deb4a284650c894710b51cd14e3890a1e814b668def51], retrying...." Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.497966537Z" level=info msg="Removing stale sandbox 70f1e397cd5b3bf834b79a77339633b4769b2bf80d8c743f772a0a6359df4989 (9401e3571ded36600b6531f21e1ef0b70220c65b7d7178c44ec9c2521b1615f6)" Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.499973989Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 639f9f4639b24138c427d4566350af9b9b900e90b29961da74722cea21767403 ef0b9f0a44791f2e8d9843d73f01ae548bd6db6b6aa3fac0fc2fce383d2dd09e], retrying...." Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.594038612Z" level=info msg="Removing stale sandbox 8dd1f74c1117efd61827c74aaf4e65b7b3de22c9f85277a9f8708bb6a99b746c (03a43288f116ebe10b622d36cba8624a254133afff2e80bc7a0466b4b70b4701)" Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.595715216Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 639f9f4639b24138c427d4566350af9b9b900e90b29961da74722cea21767403 afb52e045cc54a22806dcb99ac042a9790f578921c41aafad2dd2955289593f5], retrying...." 
Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.691919530Z" level=info msg="Removing stale sandbox 9209a8aa770e1d1d824f0db5600095315cf754763bd8a49792d6a8f5a9560f90 (b61e539bbd5ba441844d15214c73890b3e8d408b5c5846f93250c90eb2f7bc66)" Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.693470045Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 639f9f4639b24138c427d4566350af9b9b900e90b29961da74722cea21767403 da923cad9e8e038afb371ab851750cb28084623f289a1621c9c17045e258aa66], retrying...." Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.788129271Z" level=info msg="Removing stale sandbox bb6f58cd506696f522b63c203b4a9324b7e4e07fb4626d91b78246cb14da77e2 (81614c4bfc92fcbf1fc4fa5b1d4230b9db20a3945e972b93959c51659a978520)" Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.789851885Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 639f9f4639b24138c427d4566350af9b9b900e90b29961da74722cea21767403 a393a892052160f2f92a66698b552c4576649bb435ff457c5a6cbfd85a4025c4], retrying...." Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.882429598Z" level=info msg="Removing stale sandbox e510f34cac2d6f6026883a14837d73a8cea5bab045d3c19a02f75c4315b04730 (1237cf45ecff548932f2e9b8e39b46c5e0c3079c57c681c7831050d6cecc9fc6)" Jun 23 16:04:47 minikube dockerd[125]: time="2022-06-23T16:04:47.884832787Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 639f9f4639b24138c427d4566350af9b9b900e90b29961da74722cea21767403 998fb01877f382c60310b43f1c2704db313670e27eeb9d5ff11e2a75328eb704], retrying...." 
Jun 23 16:04:48 minikube dockerd[125]: time="2022-06-23T16:04:48.010943418Z" level=info msg="Removing stale sandbox 2d5c4c4f944e2668c8971053189ee7064e561ff11ff18f23dc9267f05035e6b0 (ea9b1ae503f5194597f3bd4e245af1b6efb9bf471dfe5568e974a0162d263361)" Jun 23 16:04:48 minikube dockerd[125]: time="2022-06-23T16:04:48.019887656Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint eae9cd3816b6e3171d2d3859d55ed816b118adabaff8c00f9399693a16c95d0e de9120e12bfba62e94a4c3b24d59c153541323caed2332e69d170bbb317f8b05], retrying...." Jun 23 16:04:48 minikube dockerd[125]: time="2022-06-23T16:04:48.117225222Z" level=info msg="Removing stale sandbox 3dcbf6a0bfab5fd847de5fa1c86ff5e242387a24ef1d492b9f668cf1e01ac92b (dca33a8fbec924ae565f947fba1f889cdb5e1aafd980179fee626f6d7724f43c)" Jun 23 16:04:48 minikube dockerd[125]: time="2022-06-23T16:04:48.119744098Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 639f9f4639b24138c427d4566350af9b9b900e90b29961da74722cea21767403 9a9e4bf47fc7b0aba0288687d11dca53ef0a41efb5aef754c10f7df04c26e7c0], retrying...." Jun 23 16:04:48 minikube dockerd[125]: time="2022-06-23T16:04:48.150435799Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jun 23 16:04:48 minikube dockerd[125]: time="2022-06-23T16:04:48.213546401Z" level=info msg="Loading containers: done." Jun 23 16:04:48 minikube dockerd[125]: time="2022-06-23T16:04:48.247813404Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12 Jun 23 16:04:48 minikube dockerd[125]: time="2022-06-23T16:04:48.248265425Z" level=info msg="Daemon has completed initialization" Jun 23 16:04:48 minikube systemd[1]: Started Docker Application Container Engine. 
Jun 23 16:04:48 minikube dockerd[125]: time="2022-06-23T16:04:48.340344181Z" level=info msg="API listen on [::]:2376" Jun 23 16:04:48 minikube dockerd[125]: time="2022-06-23T16:04:48.343371998Z" level=info msg="API listen on /var/run/docker.sock" Jun 23 16:05:34 minikube dockerd[125]: time="2022-06-23T16:05:34.701402924Z" level=info msg="ignoring event" container=1ab35c44f2a89cdc5531a8c230714df98160805c2278da9e34f70d5c60ee1b11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jun 23 16:09:00 minikube dockerd[125]: time="2022-06-23T16:09:00.266990037Z" level=info msg="ignoring event" container=37a0a473e538a03724c34fb87f3b6fd66459c1b70508c1d236ef6074dfd32fdf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jun 23 16:09:05 minikube dockerd[125]: time="2022-06-23T16:09:05.411237767Z" level=info msg="ignoring event" container=9cbb6d3e39b8367c553452ffe5a98f3c8c84502e4f08538f53bba764458476e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jun 23 16:09:05 minikube dockerd[125]: time="2022-06-23T16:09:05.678165060Z" level=info msg="ignoring event" container=355c4e26db6dc2d93747b377eb09a5f2a42dad41fe1fe98cd69748c7dde08ac6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jun 23 16:09:10 minikube dockerd[125]: time="2022-06-23T16:09:10.858701461Z" level=info msg="ignoring event" container=d54ef8e17c7b34056d4f9dddcd81d7c82593a98f1098e2983738431a4f551eab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jun 23 16:09:10 minikube dockerd[125]: time="2022-06-23T16:09:10.977701325Z" level=info msg="ignoring event" container=ebe5520a341ad573c613a0647c755617a06069f16ba6692ec2fa694a77b2e472 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jun 23 16:09:11 minikube dockerd[125]: time="2022-06-23T16:09:11.112125807Z" level=info msg="ignoring event" 
container=d9a43cbd6c2f59143bebc7505a6f828bf8f1377810711b2c5346a03382b54861 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jun 23 16:09:11 minikube dockerd[125]: time="2022-06-23T16:09:11.235114429Z" level=info msg="ignoring event" container=df6c7ae24e353d0540b87999da7ba55dd1ab9bbea025e4f50bd4b550ebc0e20d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jun 23 16:09:11 minikube dockerd[125]: time="2022-06-23T16:09:11.361998826Z" level=info msg="ignoring event" container=27158cd79fdf39ec5de3e2c750d716bdb0cde1ed0c0e1deeb6a1f82c860cc941 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jun 23 16:09:11 minikube dockerd[125]: time="2022-06-23T16:09:11.540753382Z" level=info msg="ignoring event" container=ce7800fbbbf6dd0c34cc279d4b22802a15da42672aa6b50c4691fd15542e45f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jun 23 16:09:21 minikube dockerd[125]: time="2022-06-23T16:09:21.647891254Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=fedc7c601ca7efc5e696402393f276da86c294425059eaf490bb66a3cd49b1f3 Jun 23 16:09:21 minikube dockerd[125]: time="2022-06-23T16:09:21.716011617Z" level=info msg="ignoring event" container=fedc7c601ca7efc5e696402393f276da86c294425059eaf490bb66a3cd49b1f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jun 23 16:09:31 minikube dockerd[125]: time="2022-06-23T16:09:31.829408458Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=095db28adc0ebee1b5d97b94c472a86bd70f42869535e92953bd615265543c72 Jun 23 16:09:31 minikube dockerd[125]: time="2022-06-23T16:09:31.985207610Z" level=info msg="ignoring event" container=095db28adc0ebee1b5d97b94c472a86bd70f42869535e92953bd615265543c72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jun 23 16:09:32 minikube dockerd[125]: 
time="2022-06-23T16:09:32.133258152Z" level=info msg="ignoring event" container=7e3a1765c4db9b08b595dc65e4aee7a3bcf9d32a0199e1bd356178979d9ad525 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 23 16:09:32 minikube dockerd[125]: time="2022-06-23T16:09:32.285481198Z" level=info msg="ignoring event" container=503f1187c315ef153c7aea863091e8a66d4a87f47e87838f1114e9d146ca82ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 23 16:09:32 minikube dockerd[125]: time="2022-06-23T16:09:32.424673599Z" level=info msg="ignoring event" container=e6d555016975df174df2a20871078be613f540f959c2e30af766c5f314a2b049 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 23 16:09:32 minikube dockerd[125]: time="2022-06-23T16:09:32.561972283Z" level=info msg="ignoring event" container=af26589e30bfeee0e48391c9e98f61b95db0ec09c9687ed20c6039716421fe10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 23 16:09:32 minikube dockerd[125]: time="2022-06-23T16:09:32.737729511Z" level=info msg="ignoring event" container=d5c4419dac9ac89f67baa445bf07b9cc9f305cac56683ed124da6a0b471cfaac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
*
CONTAINER       IMAGE           CREATED          STATE    NAME                      ATTEMPT  POD ID
24ee34e5d66e6   6e38f40d628db   13 minutes ago   Running  storage-provisioner       0        7df391b00a6dd
4ee60235dcacc   a4ca41631cc7a   16 minutes ago   Running  coredns                   0        6a104980232c7
c1766050e2f4f   a4ca41631cc7a   16 minutes ago   Running  coredns                   0        273cf9d9bded1
4b2557fe68f4b   9b7cc99821098   16 minutes ago   Running  kube-proxy                0        2872fcadb8847
bd9f617ca7ea9   f40be0088a83e   16 minutes ago   Running  kube-apiserver            4        b703e6c3585f5
b86c03b9b1214   99a3486be4f28   16 minutes ago   Running  kube-scheduler            4        0a80ef48a0136
4a122731432b5   25f8c7f3da61c   16 minutes ago   Running  etcd                      4        d871e0f732235
c5095547e6f6e   b07520cd7ab76   16 minutes ago   Running  kube-controller-manager   4        a42176a48d065
*
* ==> coredns [4ee60235dcac] <==
*
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
[INFO] Reloading complete
*
* ==> coredns [c1766050e2f4] <==
*
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
[INFO] Reloading complete
*
* ==> describe nodes <==
*
Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_06_23T21_39_43_0700
                    minikube.k8s.io/version=v1.25.2
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 23 Jun 2022 16:09:40 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Thu, 23 Jun 2022 16:26:04 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 23 Jun 2022 16:25:14 +0000   Thu, 23 Jun 2022 16:09:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 23 Jun 2022 16:25:14 +0000   Thu, 23 Jun 2022 16:09:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 23 Jun 2022 16:25:14 +0000   Thu, 23 Jun 2022 16:09:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 23 Jun 2022 16:25:14 +0000   Thu, 23 Jun 2022 16:09:54 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                2
  ephemeral-storage:  65792556Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             2546676Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  65792556Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             2546676Ki
  pods:               110
System Info:
  Machine ID:                 b6a262faae404a5db719705fd34b5c8b
  System UUID:                b6a262faae404a5db719705fd34b5c8b
  Boot ID:                    f5a1ba13-2a75-48e8-a283-73968668ceee
  Kernel Version:             5.10.104-linuxkit
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.12
  Kubelet Version:            v1.23.3
  Kube-Proxy Version:         v1.23.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace    Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                              ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-64897985d-b9cvb           100m (5%)     0 (0%)      70Mi (2%)        170Mi (6%)     16m
  kube-system  coredns-64897985d-pfvsw           100m (5%)     0 (0%)      70Mi (2%)        170Mi (6%)     16m
  kube-system  etcd-minikube                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
  kube-system  kube-apiserver-minikube           250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
  kube-system  kube-controller-manager-minikube  200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
  kube-system  kube-proxy-c47vr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
  kube-system  kube-scheduler-minikube           100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
  kube-system  storage-provisioner               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (42%)  0 (0%)
  memory             240Mi (9%)  340Mi (13%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age   From        Message
  ----    ------                   ----  ----        -------
  Normal  Starting                 16m   kube-proxy
  Normal  NodeHasSufficientMemory  16m   kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    16m   kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     16m   kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  16m   kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 16m   kubelet     Starting kubelet.
  Normal  NodeReady                16m   kubelet     Node minikube status is now: NodeReady
*
* ==> dmesg <==
*
[Jun23 15:43] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[ +0.007504] the cryptoloop driver has been deprecated and will be removed in in Linux 5.16
[ +7.662941] grpcfuse: loading out-of-tree module taints kernel.
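The `describe nodes` section above carries the same information that `kubectl describe node minikube` prints; `minikube logs` simply captures it into the dump. As a minimal sketch of pulling the node-condition rows (Type and Status) back out of a saved copy of such a dump, the snippet below greps for the condition names seen above. The `/tmp/minikube.log` path and its two inline sample lines are illustrative stand-ins, not part of this dump:

```shell
# Minimal sketch: extract node-condition Type/Status pairs from a saved
# `minikube logs` dump. The file written here is a two-line stand-in
# (hypothetical path and contents) so the snippet is self-contained.
cat > /tmp/minikube.log <<'EOF'
MemoryPressure False Thu, 23 Jun 2022 16:25:14 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
Ready True Thu, 23 Jun 2022 16:25:14 +0000 KubeletReady kubelet is posting ready status
EOF

# -o prints only the matched text; -E enables the alternation pattern.
grep -oE '(MemoryPressure|DiskPressure|PIDPressure|Ready)[[:space:]]+(True|False)' /tmp/minikube.log
```

Against a live cluster, `kubectl get node minikube -o wide` or `kubectl describe node minikube` gives the current values directly, without going through a saved dump.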
* * ==> etcd [4a122731432b] <== * {"level":"info","ts":"2022-06-23T16:09:37.228Z","caller":"etcdmain/etcd.go:72","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]} {"level":"info","ts":"2022-06-23T16:09:37.228Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]} {"level":"info","ts":"2022-06-23T16:09:37.229Z","caller":"embed/etcd.go:478","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2022-06-23T16:09:37.231Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]} {"level":"info","ts":"2022-06-23T16:09:37.233Z","caller":"embed/etcd.go:307","msg":"starting an etcd 
server","etcd-version":"3.5.1","git-sha":"e8732fb5f","go-version":"go1.16.3","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2022-06-23T16:09:37.237Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"2.670707ms"} {"level":"info","ts":"2022-06-23T16:09:37.273Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"} {"level":"info","ts":"2022-06-23T16:09:37.273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"} {"level":"info","ts":"2022-06-23T16:09:37.273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"} 
{"level":"info","ts":"2022-06-23T16:09:37.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2022-06-23T16:09:37.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"} {"level":"info","ts":"2022-06-23T16:09:37.276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} {"level":"warn","ts":"2022-06-23T16:09:37.283Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2022-06-23T16:09:37.295Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2022-06-23T16:09:37.305Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2022-06-23T16:09:37.313Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.1","cluster-version":"to_be_decided"} {"level":"info","ts":"2022-06-23T16:09:37.321Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2022-06-23T16:09:37.322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} {"level":"info","ts":"2022-06-23T16:09:37.322Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]} 
{"level":"info","ts":"2022-06-23T16:09:37.327Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2022-06-23T16:09:37.328Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2022-06-23T16:09:37.328Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2022-06-23T16:09:37.328Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"} {"level":"info","ts":"2022-06-23T16:09:37.328Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"} {"level":"info","ts":"2022-06-23T16:09:37.977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"} {"level":"info","ts":"2022-06-23T16:09:37.977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"} {"level":"info","ts":"2022-06-23T16:09:37.977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"} {"level":"info","ts":"2022-06-23T16:09:37.978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"} {"level":"info","ts":"2022-06-23T16:09:37.978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp 
from aec36adc501070cc at term 2"} {"level":"info","ts":"2022-06-23T16:09:37.978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"} {"level":"info","ts":"2022-06-23T16:09:37.978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"} {"level":"info","ts":"2022-06-23T16:09:37.978Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"} {"level":"info","ts":"2022-06-23T16:09:37.979Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-06-23T16:09:37.980Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"} {"level":"info","ts":"2022-06-23T16:09:37.981Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"} {"level":"info","ts":"2022-06-23T16:09:37.981Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"} {"level":"info","ts":"2022-06-23T16:09:37.981Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2022-06-23T16:09:37.979Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-06-23T16:09:37.989Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"info","ts":"2022-06-23T16:09:37.994Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"} {"level":"info","ts":"2022-06-23T16:09:37.998Z","caller":"api/capability.go:75","msg":"enabled capabilities for 
version","cluster-version":"3.5"} {"level":"info","ts":"2022-06-23T16:09:37.998Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2022-06-23T16:19:38.007Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":583} {"level":"info","ts":"2022-06-23T16:19:38.010Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":583,"took":"2.7444ms"} {"level":"info","ts":"2022-06-23T16:24:37.991Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":792} {"level":"info","ts":"2022-06-23T16:24:37.995Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":792,"took":"4.074479ms"} * * ==> kernel <== * 16:26:10 up 42 min, 0 users, load average: 0.10, 0.33, 0.40 Linux minikube 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [bd9f617ca7ea] <== * W0623 16:09:38.854110 1 genericapiserver.go:538] Skipping API apps/v1beta2 because it has no resources. W0623 16:09:38.854248 1 genericapiserver.go:538] Skipping API apps/v1beta1 because it has no resources. W0623 16:09:38.861158 1 genericapiserver.go:538] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources. I0623 16:09:38.873288 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. 
I0623 16:09:38.873314 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. W0623 16:09:38.897693 1 genericapiserver.go:538] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources. I0623 16:09:39.884526 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0623 16:09:39.884909 1 secure_serving.go:266] Serving securely on [::]:8443 I0623 16:09:39.884960 1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key" I0623 16:09:39.896191 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0623 16:09:39.896214 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0623 16:09:39.899072 1 autoregister_controller.go:141] Starting autoregister controller I0623 16:09:39.899109 1 cache.go:32] Waiting for caches to sync for autoregister controller I0623 16:09:39.899357 1 controller.go:83] Starting OpenAPI AggregationController I0623 16:09:39.904308 1 dynamic_serving_content.go:131] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key" I0623 16:09:39.907172 1 customresource_discovery_controller.go:209] Starting DiscoveryController I0623 16:09:39.908576 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I0623 16:09:39.908796 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0623 16:09:39.909009 1 available_controller.go:491] Starting AvailableConditionController I0623 16:09:39.909137 1 cache.go:32] Waiting for caches to sync for 
AvailableConditionController controller I0623 16:09:39.911073 1 apf_controller.go:317] Starting API Priority and Fairness config controller I0623 16:09:39.911606 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0623 16:09:39.911624 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller I0623 16:09:39.911671 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0623 16:09:39.911681 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister I0623 16:09:39.920964 1 controller.go:85] Starting OpenAPI controller I0623 16:09:39.921165 1 naming_controller.go:291] Starting NamingConditionController I0623 16:09:39.921296 1 establishing_controller.go:76] Starting EstablishingController I0623 16:09:39.921448 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0623 16:09:39.921628 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0623 16:09:39.921777 1 crd_finalizer.go:266] Starting CRDFinalizer I0623 16:09:39.924079 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0623 16:09:39.935864 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0623 16:09:40.011175 1 apf_controller.go:322] Running API Priority and Fairness config worker I0623 16:09:40.012398 1 shared_informer.go:247] Caches are synced for crd-autoregister I0623 16:09:40.013232 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0623 16:09:40.013762 1 cache.go:39] Caches are synced for AvailableConditionController controller I0623 16:09:40.024033 1 shared_informer.go:247] Caches are synced for node_authorizer I0623 16:09:40.028367 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0623 
16:09:40.037302 1 controller.go:611] quota admission added evaluator for: namespaces I0623 16:09:40.102420 1 cache.go:39] Caches are synced for autoregister controller I0623 16:09:40.884839 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0623 16:09:40.887046 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0623 16:09:40.952210 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0623 16:09:40.978744 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0623 16:09:40.978804 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. I0623 16:09:41.581111 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0623 16:09:41.630061 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0623 16:09:41.700217 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0623 16:09:41.707117 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2] I0623 16:09:41.708576 1 controller.go:611] quota admission added evaluator for: endpoints I0623 16:09:41.714296 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0623 16:09:42.144623 1 controller.go:611] quota admission added evaluator for: serviceaccounts I0623 16:09:43.299600 1 controller.go:611] quota admission added evaluator for: deployments.apps I0623 16:09:43.317131 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0623 16:09:43.340129 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0623 16:09:43.623088 1 controller.go:611] quota admission added evaluator for: 
leases.coordination.k8s.io I0623 16:09:55.693853 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0623 16:09:55.811914 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0623 16:09:56.500788 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io * * ==> kube-controller-manager [c5095547e6f6] <== * I0623 16:09:54.755550 1 serviceaccounts_controller.go:117] Starting service account controller I0623 16:09:54.756161 1 shared_informer.go:240] Waiting for caches to sync for service account I0623 16:09:54.901027 1 controllermanager.go:605] Started "ttl" I0623 16:09:54.903202 1 ttl_controller.go:121] Starting TTL controller I0623 16:09:54.903558 1 shared_informer.go:240] Waiting for caches to sync for TTL I0623 16:09:54.952129 1 shared_informer.go:240] Waiting for caches to sync for resource quota I0623 16:09:54.983027 1 shared_informer.go:247] Caches are synced for crt configmap I0623 16:09:55.007334 1 shared_informer.go:247] Caches are synced for ReplicaSet I0623 16:09:55.012258 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I0623 16:09:55.012296 1 shared_informer.go:247] Caches are synced for job I0623 16:09:55.021094 1 shared_informer.go:247] Caches are synced for bootstrap_signer I0623 16:09:55.024372 1 shared_informer.go:247] Caches are synced for HPA I0623 16:09:55.025668 1 shared_informer.go:247] Caches are synced for ephemeral I0623 16:09:55.026913 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I0623 16:09:55.028498 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0623 16:09:55.042512 1 shared_informer.go:247] Caches are synced for expand I0623 16:09:55.044698 1 shared_informer.go:247] Caches are synced for ReplicationController I0623 16:09:55.050864 1 shared_informer.go:247] Caches are synced for namespace I0623 16:09:55.057087 1 shared_informer.go:247] Caches are synced for disruption I0623 
16:09:55.057114 1 disruption.go:371] Sending events to api server. I0623 16:09:55.057247 1 shared_informer.go:247] Caches are synced for service account I0623 16:09:55.061022 1 shared_informer.go:247] Caches are synced for stateful set I0623 16:09:55.067310 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I0623 16:09:55.067363 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I0623 16:09:55.068879 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I0623 16:09:55.069178 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown W0623 16:09:55.070702 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I0623 16:09:55.077200 1 shared_informer.go:247] Caches are synced for PVC protection I0623 16:09:55.085161 1 shared_informer.go:247] Caches are synced for cronjob I0623 16:09:55.086670 1 shared_informer.go:247] Caches are synced for deployment I0623 16:09:55.086833 1 shared_informer.go:247] Caches are synced for GC I0623 16:09:55.089752 1 shared_informer.go:247] Caches are synced for persistent volume I0623 16:09:55.095329 1 shared_informer.go:247] Caches are synced for TTL after finished I0623 16:09:55.098621 1 shared_informer.go:247] Caches are synced for daemon sets I0623 16:09:55.104391 1 shared_informer.go:247] Caches are synced for TTL I0623 16:09:55.124647 1 shared_informer.go:247] Caches are synced for PV protection I0623 16:09:55.136031 1 shared_informer.go:247] Caches are synced for attach detach I0623 16:09:55.154195 1 shared_informer.go:247] Caches are synced for node I0623 16:09:55.155747 1 range_allocator.go:173] Starting range CIDR allocator I0623 16:09:55.156395 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I0623 16:09:55.156749 1 
shared_informer.go:247] Caches are synced for cidrallocator I0623 16:09:55.199135 1 shared_informer.go:247] Caches are synced for endpoint I0623 16:09:55.199771 1 range_allocator.go:374] Set node minikube PodCIDR to [10.244.0.0/24] I0623 16:09:55.227367 1 shared_informer.go:247] Caches are synced for taint I0623 16:09:55.228205 1 taint_manager.go:187] "Starting NoExecuteTaintManager" I0623 16:09:55.229323 1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: W0623 16:09:55.229679 1 node_lifecycle_controller.go:1012] Missing timestamp for Node minikube. Assuming now as a timestamp. I0623 16:09:55.230017 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal. I0623 16:09:55.230335 1 event.go:294] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I0623 16:09:55.234287 1 shared_informer.go:247] Caches are synced for endpoint_slice I0623 16:09:55.249843 1 shared_informer.go:247] Caches are synced for resource quota I0623 16:09:55.253808 1 shared_informer.go:247] Caches are synced for resource quota I0623 16:09:55.260939 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0623 16:09:55.629819 1 shared_informer.go:247] Caches are synced for garbage collector I0623 16:09:55.659538 1 shared_informer.go:247] Caches are synced for garbage collector I0623 16:09:55.660189 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I0623 16:09:55.696838 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2" I0623 16:09:55.856255 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-c47vr" I0623 16:09:56.048300 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-pfvsw" I0623 16:09:56.086587 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-b9cvb" * * ==> kube-proxy [4b2557fe68f4] <== * I0623 16:09:56.470976 1 node.go:163] Successfully retrieved node IP: 192.168.49.2 I0623 16:09:56.471031 1 server_others.go:138] "Detected node IP" address="192.168.49.2" I0623 16:09:56.471054 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0623 16:09:56.495552 1 server_others.go:206] "Using iptables Proxier" I0623 16:09:56.495584 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0623 16:09:56.495714 1 server_others.go:214] "Creating dualStackProxier for iptables" I0623 16:09:56.495783 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0623 16:09:56.496107 1 server.go:656] "Version info" version="v1.23.3" I0623 16:09:56.497747 1 config.go:317] "Starting service config controller" I0623 16:09:56.497778 1 shared_informer.go:240] Waiting for caches to sync for service config I0623 16:09:56.497795 1 config.go:226] "Starting endpoint slice config controller" I0623 16:09:56.497798 1 
shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0623 16:09:56.598384       1 shared_informer.go:247] Caches are synced for service config
I0623 16:09:56.598927       1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [b86c03b9b121] <==
*
I0623 16:09:40.031912       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I0623 16:09:40.032156       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0623 16:09:40.032116       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0623 16:09:40.039942       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
W0623 16:09:40.043002       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0623 16:09:40.043064       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0623 16:09:40.043446       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0623 16:09:40.045557       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0623 16:09:40.044770       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0623 16:09:40.045597       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0623 16:09:40.044815       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0623 16:09:40.045965       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0623 16:09:40.044847       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0623 16:09:40.046084       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0623 16:09:40.044884       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0623 16:09:40.046150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0623 16:09:40.045083       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0623 16:09:40.046161       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0623 16:09:40.045133       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0623 16:09:40.046242       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0623 16:09:40.045192       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0623 16:09:40.046252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0623 16:09:40.045279       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0623 16:09:40.046339       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0623 16:09:40.045340       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0623 16:09:40.046348       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0623 16:09:40.045393       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0623 16:09:40.046412       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0623 16:09:40.045487       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0623 16:09:40.046481       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0623 16:09:40.045542       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0623 16:09:40.046532       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0623 16:09:40.046807       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0623 16:09:40.046827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0623 16:09:40.927277       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0623 16:09:40.927353       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0623 16:09:40.931032       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0623 16:09:40.931122       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0623 16:09:40.964058       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0623 16:09:40.964441       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0623 16:09:40.992238       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0623 16:09:40.992517       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0623 16:09:41.028829       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0623 16:09:41.028892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0623 16:09:41.092292       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0623 16:09:41.092316       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0623 16:09:41.117134       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0623 16:09:41.117274       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0623 16:09:41.215803       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0623 16:09:41.215832       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0623 16:09:41.277317       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0623 16:09:41.277421       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0623 16:09:41.277597       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0623 16:09:41.277664       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0623 16:09:41.361787       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0623 16:09:41.363476       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0623 16:09:41.388206       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0623 16:09:41.388602       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0623 16:09:41.440823       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
I0623 16:09:43.340933       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
*
-- Logs begin at Thu 2022-06-23 16:04:46 UTC, end at Thu 2022-06-23 16:26:10 UTC. --
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.817976    7626 kubelet.go:1977] "Starting kubelet main sync loop"
Jun 23 16:09:43 minikube kubelet[7626]: E0623 16:09:43.818012    7626 kubelet.go:2001] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.829313    7626 manager.go:610] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.832327    7626 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.918794    7626 topology_manager.go:200] "Topology Admit Handler"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.919601    7626 topology_manager.go:200] "Topology Admit Handler"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.920629    7626 topology_manager.go:200] "Topology Admit Handler"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.920953    7626 topology_manager.go:200] "Topology Admit Handler"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.926168    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd6e47233d36a9715b0ab9632f871843-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"cd6e47233d36a9715b0ab9632f871843\") " pod="kube-system/kube-apiserver-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.926292    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.926426    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd6e47233d36a9715b0ab9632f871843-etc-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"cd6e47233d36a9715b0ab9632f871843\") " pod="kube-system/kube-apiserver-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.926576    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.926689    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-etc-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.926802    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.926985    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.927064    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/9d3d310935e5fabe942511eec3e2cd0c-etcd-data\") pod \"etcd-minikube\" (UID: \"9d3d310935e5fabe942511eec3e2cd0c\") " pod="kube-system/etcd-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.927193    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-usr-local-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.927320    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.927433    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be132fe5c6572cb34d93f5e05ce2a540-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"be132fe5c6572cb34d93f5e05ce2a540\") " pod="kube-system/kube-scheduler-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.927634    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/9d3d310935e5fabe942511eec3e2cd0c-etcd-certs\") pod \"etcd-minikube\" (UID: \"9d3d310935e5fabe942511eec3e2cd0c\") " pod="kube-system/etcd-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.927715    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd6e47233d36a9715b0ab9632f871843-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"cd6e47233d36a9715b0ab9632f871843\") " pod="kube-system/kube-apiserver-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.927825    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd6e47233d36a9715b0ab9632f871843-usr-local-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"cd6e47233d36a9715b0ab9632f871843\") " pod="kube-system/kube-apiserver-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: I0623 16:09:43.928017    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd6e47233d36a9715b0ab9632f871843-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"cd6e47233d36a9715b0ab9632f871843\") " pod="kube-system/kube-apiserver-minikube"
Jun 23 16:09:43 minikube kubelet[7626]: E0623 16:09:43.948540    7626 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Jun 23 16:09:44 minikube kubelet[7626]: E0623 16:09:44.171351    7626 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
Jun 23 16:09:44 minikube kubelet[7626]: I0623 16:09:44.552308    7626 apiserver.go:52] "Watching apiserver"
Jun 23 16:09:44 minikube kubelet[7626]: I0623 16:09:44.835692    7626 reconciler.go:157] "Reconciler: start to sync state"
Jun 23 16:09:45 minikube kubelet[7626]: E0623 16:09:45.177397    7626 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube"
Jun 23 16:09:45 minikube kubelet[7626]: E0623 16:09:45.370966    7626 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
Jun 23 16:09:45 minikube kubelet[7626]: E0623 16:09:45.572826    7626 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube"
Jun 23 16:09:45 minikube kubelet[7626]: E0623 16:09:45.778145    7626 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Jun 23 16:09:55 minikube kubelet[7626]: I0623 16:09:55.290970    7626 kuberuntime_manager.go:1098] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jun 23 16:09:55 minikube kubelet[7626]: I0623 16:09:55.293084    7626 docker_service.go:364] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Jun 23 16:09:55 minikube kubelet[7626]: I0623 16:09:55.294290    7626 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jun 23 16:09:55 minikube kubelet[7626]: I0623 16:09:55.872158    7626 topology_manager.go:200] "Topology Admit Handler"
Jun 23 16:09:56 minikube kubelet[7626]: I0623 16:09:56.018062    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56e01583-d173-48d5-8de6-0ed2962676af-xtables-lock\") pod \"kube-proxy-c47vr\" (UID: \"56e01583-d173-48d5-8de6-0ed2962676af\") " pod="kube-system/kube-proxy-c47vr"
Jun 23 16:09:56 minikube kubelet[7626]: I0623 16:09:56.018283    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq6h9\" (UniqueName: \"kubernetes.io/projected/56e01583-d173-48d5-8de6-0ed2962676af-kube-api-access-vq6h9\") pod \"kube-proxy-c47vr\" (UID: \"56e01583-d173-48d5-8de6-0ed2962676af\") " pod="kube-system/kube-proxy-c47vr"
Jun 23 16:09:56 minikube kubelet[7626]: I0623 16:09:56.018447    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/56e01583-d173-48d5-8de6-0ed2962676af-kube-proxy\") pod \"kube-proxy-c47vr\" (UID: \"56e01583-d173-48d5-8de6-0ed2962676af\") " pod="kube-system/kube-proxy-c47vr"
Jun 23 16:09:56 minikube kubelet[7626]: I0623 16:09:56.018641    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56e01583-d173-48d5-8de6-0ed2962676af-lib-modules\") pod \"kube-proxy-c47vr\" (UID: \"56e01583-d173-48d5-8de6-0ed2962676af\") " pod="kube-system/kube-proxy-c47vr"
Jun 23 16:09:56 minikube kubelet[7626]: I0623 16:09:56.066416    7626 topology_manager.go:200] "Topology Admit Handler"
Jun 23 16:09:56 minikube kubelet[7626]: W0623 16:09:56.074774    7626 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
Jun 23 16:09:56 minikube kubelet[7626]: E0623 16:09:56.075110    7626 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
Jun 23 16:09:56 minikube kubelet[7626]: I0623 16:09:56.100097    7626 topology_manager.go:200] "Topology Admit Handler"
Jun 23 16:09:56 minikube kubelet[7626]: I0623 16:09:56.219688    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whp7c\" (UniqueName: \"kubernetes.io/projected/7fd90227-c725-4252-9ce8-e1fc9acb9df6-kube-api-access-whp7c\") pod \"coredns-64897985d-pfvsw\" (UID: \"7fd90227-c725-4252-9ce8-e1fc9acb9df6\") " pod="kube-system/coredns-64897985d-pfvsw"
Jun 23 16:09:56 minikube kubelet[7626]: I0623 16:09:56.219920    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fd90227-c725-4252-9ce8-e1fc9acb9df6-config-volume\") pod \"coredns-64897985d-pfvsw\" (UID: \"7fd90227-c725-4252-9ce8-e1fc9acb9df6\") " pod="kube-system/coredns-64897985d-pfvsw"
Jun 23 16:09:56 minikube kubelet[7626]: I0623 16:09:56.220024    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx6w5\" (UniqueName: \"kubernetes.io/projected/7f5ae605-ea97-4998-8867-a3a091475c64-kube-api-access-kx6w5\") pod \"coredns-64897985d-b9cvb\" (UID: \"7f5ae605-ea97-4998-8867-a3a091475c64\") " pod="kube-system/coredns-64897985d-b9cvb"
Jun 23 16:09:56 minikube kubelet[7626]: I0623 16:09:56.220111    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f5ae605-ea97-4998-8867-a3a091475c64-config-volume\") pod \"coredns-64897985d-b9cvb\" (UID: \"7f5ae605-ea97-4998-8867-a3a091475c64\") " pod="kube-system/coredns-64897985d-b9cvb"
Jun 23 16:09:57 minikube kubelet[7626]: I0623 16:09:57.993449    7626 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-b9cvb through plugin: invalid network status for"
Jun 23 16:09:58 minikube kubelet[7626]: I0623 16:09:58.058309    7626 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-b9cvb through plugin: invalid network status for"
Jun 23 16:09:58 minikube kubelet[7626]: I0623 16:09:58.076344    7626 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-pfvsw through plugin: invalid network status for"
Jun 23 16:09:58 minikube kubelet[7626]: I0623 16:09:58.179582    7626 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-pfvsw through plugin: invalid network status for"
Jun 23 16:09:58 minikube kubelet[7626]: I0623 16:09:58.239259    7626 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="6a104980232c75caf7742bd4f35883116f999bd82cd86eec9f2d081bd9d0375a"
Jun 23 16:09:59 minikube kubelet[7626]: I0623 16:09:59.259823    7626 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-b9cvb through plugin: invalid network status for"
Jun 23 16:09:59 minikube kubelet[7626]: I0623 16:09:59.274302    7626 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-pfvsw through plugin: invalid network status for"
Jun 23 16:12:46 minikube kubelet[7626]: I0623 16:12:46.964450    7626 topology_manager.go:200] "Topology Admit Handler"
Jun 23 16:12:47 minikube kubelet[7626]: I0623 16:12:47.012969    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d491c6a1-e187-40c4-8682-df7fa5e9312e-tmp\") pod \"storage-provisioner\" (UID: \"d491c6a1-e187-40c4-8682-df7fa5e9312e\") " pod="kube-system/storage-provisioner"
Jun 23 16:12:47 minikube kubelet[7626]: I0623 16:12:47.013564    7626 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drx7t\" (UniqueName: \"kubernetes.io/projected/d491c6a1-e187-40c4-8682-df7fa5e9312e-kube-api-access-drx7t\") pod \"storage-provisioner\" (UID: \"d491c6a1-e187-40c4-8682-df7fa5e9312e\") " pod="kube-system/storage-provisioner"
Jun 23 16:14:43 minikube kubelet[7626]: W0623 16:14:43.810966    7626 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 23 16:19:43 minikube kubelet[7626]: W0623 16:19:43.816162    7626 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 23 16:24:43 minikube kubelet[7626]: W0623 16:24:43.778698    7626 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [24ee34e5d66e] <==
*
I0623 16:12:47.866522       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0623 16:12:47.889989       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0623 16:12:47.890283       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0623 16:12:47.911828       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0623 16:12:47.914671       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_d9fd1549-3004-4c27-8e1f-eaed9adb4c50!
I0623 16:12:47.928650       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7ac4bfeb-7ce6-4742-b03a-5e879a5c9905", APIVersion:"v1", ResourceVersion:"505", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_d9fd1549-3004-4c27-8e1f-eaed9adb4c50 became leader
I0623 16:12:48.015475       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_d9fd1549-3004-4c27-8e1f-eaed9adb4c50!