*
* ==> Audit <==
*
|---------|-----------------|----------|------|---------|---------------------|----------|
| Command |      Args       | Profile  | User | Version |     Start Time      | End Time |
|---------|-----------------|----------|------|---------|---------------------|----------|
| start   |                 | minikube | dphy | v1.26.0 | 01 Aug 22 09:43 IST |          |
| start   |                 | minikube | dphy | v1.26.0 | 01 Aug 22 10:02 IST |          |
| start   |                 | minikube | dphy | v1.26.0 | 01 Aug 22 10:27 IST |          |
| start   |                 | minikube | dphy | v1.26.0 | 01 Aug 22 10:32 IST |          |
| start   |                 | minikube | dphy | v1.26.0 | 01 Aug 22 10:55 IST |          |
| start   |                 | minikube | dphy | v1.26.0 | 01 Aug 22 11:19 IST |          |
| start   |                 | minikube | dphy | v1.26.0 | 01 Aug 22 11:51 IST |          |
| start   | --driver=docker | minikube | dphy | v1.26.0 | 01 Aug 22 12:09 IST |          |
|---------|-----------------|----------|------|---------|---------------------|----------|
*
* ==> Last Start <==
*
Log file created at: 2022/08/01 12:09:40
Running on machine: dphy-OptiPlex-7050
Binary: Built with gc go1.18.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0801 12:09:40.678395 13610 out.go:296] Setting OutFile to fd 1 ...
I0801 12:09:40.678483 13610 out.go:348] isatty.IsTerminal(1) = true
I0801 12:09:40.678486 13610 out.go:309] Setting ErrFile to fd 2...
I0801 12:09:40.678495 13610 out.go:348] isatty.IsTerminal(2) = true
I0801 12:09:40.678877 13610 root.go:329] Updating PATH: /home/dphy/.minikube/bin
W0801 12:09:40.678973 13610 root.go:307] Error reading config file at /home/dphy/.minikube/config/config.json: open /home/dphy/.minikube/config/config.json: no such file or directory
I0801 12:09:40.679074 13610 out.go:303] Setting JSON to false
I0801 12:09:40.697425 13610 start.go:115] hostinfo: {"hostname":"dphy-OptiPlex-7050","uptime":5165,"bootTime":1659330816,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-41-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"724c7e2f-d7a5-49c0-b33d-4b1a43addee7"}
I0801 12:09:40.697475 13610 start.go:125] virtualization: kvm host
I0801 12:09:40.740698 13610 out.go:177] 😄  minikube v1.26.0 on Ubuntu 20.04
I0801 12:09:40.790849 13610 notify.go:193] Checking for updates...
I0801 12:09:40.791864 13610 config.go:178] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
I0801 12:09:40.793107 13610 driver.go:360] Setting default libvirt URI to qemu:///system
I0801 12:09:40.879277 13610 docker.go:137] docker version: linux-20.10.17
I0801 12:09:40.879326 13610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0801 12:09:40.986041 13610 info.go:265] docker info: {ID:MQ5Q:3FAT:WH4P:QJOG:NZ5V:DU2R:LQXU:THPG:VNLX:W6DA:XTNZ:U4FM Containers:11 ContainersRunning:2 ContainersPaused:0 ContainersStopped:9 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:68 SystemTime:2022-08-01 06:39:40.921811403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:3993137152 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/libexec/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/libexec/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I0801 12:09:40.986109 13610 docker.go:254] overlay module found
I0801 12:09:41.049367 13610 out.go:177] ✨  Using the docker driver based on existing profile
I0801 12:09:41.091017 13610 start.go:284] selected driver: docker
I0801 12:09:41.091048 13610 start.go:805] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:docker.io/kicbase/stable:v0.0.32 Memory:3760 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/dphy:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0801 12:09:41.091203 13610 start.go:816] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0801 12:09:41.091331 13610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0801 12:09:41.277450 13610 info.go:265] docker info: {ID:MQ5Q:3FAT:WH4P:QJOG:NZ5V:DU2R:LQXU:THPG:VNLX:W6DA:XTNZ:U4FM Containers:11 ContainersRunning:2 ContainersPaused:0 ContainersStopped:9 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:68 SystemTime:2022-08-01 06:39:41.180837534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:3993137152 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/libexec/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/libexec/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I0801 12:09:41.297352 13610 cni.go:95] Creating CNI manager for ""
I0801 12:09:41.297364 13610 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0801 12:09:41.297377 13610 start_flags.go:310] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:docker.io/kicbase/stable:v0.0.32 Memory:3760 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/dphy:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0801 12:09:41.374485 13610 out.go:177] 👍  Starting control plane node minikube in cluster minikube
I0801 12:09:41.416278 13610 cache.go:120] Beginning downloading kic base image for docker with docker
I0801 12:09:41.458035 13610 out.go:177] 🚜  Pulling base image ...
I0801 12:09:41.499809 13610 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
I0801 12:09:41.499857 13610 image.go:75] Checking for docker.io/kicbase/stable:v0.0.32 in local docker daemon
I0801 12:09:41.499911 13610 preload.go:148] Found local preload: /home/dphy/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4
I0801 12:09:41.499925 13610 cache.go:57] Caching tarball of preloaded images
I0801 12:09:41.500255 13610 preload.go:174] Found /home/dphy/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0801 12:09:41.500274 13610 cache.go:60] Finished verifying existence of preloaded tar for v1.24.1 on docker
I0801 12:09:41.500429 13610 profile.go:148] Saving config to /home/dphy/.minikube/profiles/minikube/config.json ...
I0801 12:09:41.577189 13610 cache.go:147] Downloading docker.io/kicbase/stable:v0.0.32 to local cache
I0801 12:09:41.577268 13610 image.go:59] Checking for docker.io/kicbase/stable:v0.0.32 in local cache directory
I0801 12:09:41.577277 13610 image.go:62] Found docker.io/kicbase/stable:v0.0.32 in local cache directory, skipping pull
I0801 12:09:41.577279 13610 image.go:103] docker.io/kicbase/stable:v0.0.32 exists in cache, skipping pull
I0801 12:09:41.577288 13610 cache.go:150] successfully saved docker.io/kicbase/stable:v0.0.32 as a tarball
I0801 12:09:41.577291 13610 cache.go:161] Loading docker.io/kicbase/stable:v0.0.32 from local cache
I0801 12:09:41.577937 13610 cache.go:170] Downloading docker.io/kicbase/stable:v0.0.32 to local daemon
I0801 12:09:41.577974 13610 image.go:75] Checking for docker.io/kicbase/stable:v0.0.32 in local docker daemon
I0801 12:09:41.629226 13610 image.go:243] Writing docker.io/kicbase/stable:v0.0.32 to local daemon
I0801 12:09:46.365878 13610 cache.go:182] failed to download docker.io/kicbase/stable:v0.0.32, will try fallback image if available: writing daemon image: error loading image: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.24/images/load?quiet=0": dial unix /var/run/docker.sock: connect: permission denied
I0801 12:09:46.365911 13610 image.go:75] Checking for docker.io/kicbase/stable:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local docker daemon
I0801 12:09:46.437068 13610 cache.go:147] Downloading docker.io/kicbase/stable:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 to local cache
I0801 12:09:46.437141 13610 image.go:59] Checking for docker.io/kicbase/stable:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local cache directory
I0801 12:09:46.437149 13610 image.go:62] Found docker.io/kicbase/stable:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local cache directory, skipping pull
I0801 12:09:46.437151 13610 image.go:103] docker.io/kicbase/stable:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 exists in cache, skipping pull
I0801 12:09:46.437160 13610 cache.go:150] successfully saved docker.io/kicbase/stable:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 as a tarball
I0801 12:09:46.437163 13610 cache.go:161] Loading docker.io/kicbase/stable:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 from local cache
I0801 12:09:46.437657 13610 cache.go:170] Downloading docker.io/kicbase/stable:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 to local daemon
I0801 12:09:46.437698 13610 image.go:75] Checking for docker.io/kicbase/stable:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local docker daemon
I0801 12:09:46.487473 13610 image.go:243] Writing docker.io/kicbase/stable:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 to local daemon
I0801 12:09:48.478546 13610 cache.go:182] failed to download docker.io/kicbase/stable:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95, will try fallback image if available: writing daemon image: error loading image: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.24/images/load?quiet=0": dial unix /var/run/docker.sock: connect: permission denied
I0801 12:09:48.478568 13610 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.32 in local docker daemon
I0801 12:09:48.574356 13610 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.32 to local cache
I0801 12:09:48.574488 13610 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.32 in local cache directory
I0801 12:09:48.574500 13610 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.32 in local cache directory, skipping pull
I0801 12:09:48.574503 13610 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.32 exists in cache, skipping pull
I0801 12:09:48.574518 13610 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.32 as a tarball
I0801 12:09:48.574522 13610 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.32 from local cache
I0801 12:09:48.575278 13610 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.32 to local daemon
I0801 12:09:48.575355 13610 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.32 in local docker daemon
I0801 12:09:48.632461 13610 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.32 to local daemon
I0801 12:09:51.243033 13610 cache.go:182] failed to download gcr.io/k8s-minikube/kicbase:v0.0.32, will try fallback image if available: writing daemon image: error loading image: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.24/images/load?quiet=0": dial unix /var/run/docker.sock: connect: permission denied
I0801 12:09:51.243063 13610 image.go:75] Checking for docker.io/kicbase/stable:v0.0.32 in local docker daemon
I0801 12:09:51.318314 13610 cache.go:147] Downloading docker.io/kicbase/stable:v0.0.32 to local cache
I0801 12:09:51.318393 13610 image.go:59] Checking for docker.io/kicbase/stable:v0.0.32 in local cache directory
I0801 12:09:51.318401 13610 image.go:62] Found docker.io/kicbase/stable:v0.0.32 in local cache directory, skipping pull
I0801 12:09:51.318404 13610 image.go:103] docker.io/kicbase/stable:v0.0.32 exists in cache, skipping pull
I0801 12:09:51.318412 13610 cache.go:150] successfully saved docker.io/kicbase/stable:v0.0.32 as a tarball
I0801 12:09:51.318415 13610 cache.go:161] Loading docker.io/kicbase/stable:v0.0.32 from local cache
I0801 12:09:51.318896 13610 cache.go:170] Downloading docker.io/kicbase/stable:v0.0.32 to local daemon
I0801 12:09:51.318928 13610 image.go:75] Checking for docker.io/kicbase/stable:v0.0.32 in local docker daemon
I0801 12:09:51.365503 13610 image.go:243] Writing docker.io/kicbase/stable:v0.0.32 to local daemon
I0801 12:09:53.101440 13610 cache.go:182] failed to download docker.io/kicbase/stable:v0.0.32, will try fallback image if available: writing daemon image: error loading image: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.24/images/load?quiet=0": dial unix /var/run/docker.sock: connect: permission denied
E0801 12:09:53.101484 13610 cache.go:203] Error downloading kic artifacts: failed to download kic base image or any fallback image
I0801 12:09:53.101509 13610 cache.go:208] Successfully downloaded all kic artifacts
I0801 12:09:53.101566 13610 start.go:352] acquiring machines lock for minikube: {Name:mk090c1fd5b1967ac6a60f2587685e638cb79bfe Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0801 12:09:53.101707 13610 start.go:356] acquired machines lock for "minikube" in 94.153µs
I0801 12:09:53.101734 13610 start.go:94] Skipping create...Using existing machine configuration
I0801 12:09:53.101739 13610 fix.go:55] fixHost starting:
I0801 12:09:53.102129 13610 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0801 12:09:53.182199 13610 fix.go:103] recreateIfNeeded on minikube: state=Running err=
W0801 12:09:53.182223 13610 fix.go:129] unexpected machine state, will restart:
I0801 12:09:53.488544 13610 out.go:177] 🏃  Updating the running docker "minikube" container ...
I0801 12:09:53.530246 13610 machine.go:88] provisioning docker machine ...
I0801 12:09:53.530274 13610 ubuntu.go:169] provisioning hostname "minikube"
I0801 12:09:53.530352 13610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0801 12:09:53.613378 13610 main.go:134] libmachine: Using SSH client type: native
I0801 12:09:53.613494 13610 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7dae00] 0x7dde60 [] 0s} 127.0.0.1 37731 }
I0801 12:09:53.613501 13610 main.go:134] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0801 12:09:53.790524 13610 main.go:134] libmachine: SSH cmd err, output: : minikube
I0801 12:09:53.790628 13610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0801 12:09:53.866196 13610 main.go:134] libmachine: Using SSH client type: native
I0801 12:09:53.866321 13610 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7dae00] 0x7dde60 [] 0s} 127.0.0.1 37731 }
I0801 12:09:53.866330 13610 main.go:134] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I0801 12:09:53.993385 13610 main.go:134] libmachine: SSH cmd err, output: :
I0801 12:09:53.993408 13610 ubuntu.go:175] set auth options {CertDir:/home/dphy/.minikube CaCertPath:/home/dphy/.minikube/certs/ca.pem CaPrivateKeyPath:/home/dphy/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/dphy/.minikube/machines/server.pem ServerKeyPath:/home/dphy/.minikube/machines/server-key.pem ClientKeyPath:/home/dphy/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/dphy/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/dphy/.minikube}
I0801 12:09:53.993439 13610 ubuntu.go:177] setting up certificates
I0801 12:09:53.993453 13610 provision.go:83] configureAuth start
I0801 12:09:53.993535 13610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0801 12:09:54.079589 13610 provision.go:138] copyHostCerts
I0801 12:09:54.079628 13610 exec_runner.go:144] found /home/dphy/.minikube/ca.pem, removing ...
I0801 12:09:54.079632 13610 exec_runner.go:207] rm: /home/dphy/.minikube/ca.pem
I0801 12:09:54.079679 13610 exec_runner.go:151] cp: /home/dphy/.minikube/certs/ca.pem --> /home/dphy/.minikube/ca.pem (1074 bytes)
I0801 12:09:54.079732 13610 exec_runner.go:144] found /home/dphy/.minikube/cert.pem, removing ...
I0801 12:09:54.079736 13610 exec_runner.go:207] rm: /home/dphy/.minikube/cert.pem
I0801 12:09:54.079763 13610 exec_runner.go:151] cp: /home/dphy/.minikube/certs/cert.pem --> /home/dphy/.minikube/cert.pem (1115 bytes)
I0801 12:09:54.079800 13610 exec_runner.go:144] found /home/dphy/.minikube/key.pem, removing ...
I0801 12:09:54.079803 13610 exec_runner.go:207] rm: /home/dphy/.minikube/key.pem
I0801 12:09:54.079824 13610 exec_runner.go:151] cp: /home/dphy/.minikube/certs/key.pem --> /home/dphy/.minikube/key.pem (1679 bytes)
I0801 12:09:54.079858 13610 provision.go:112] generating server cert: /home/dphy/.minikube/machines/server.pem ca-key=/home/dphy/.minikube/certs/ca.pem private-key=/home/dphy/.minikube/certs/ca-key.pem org=dphy.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0801 12:09:54.172551 13610 provision.go:172] copyRemoteCerts
I0801 12:09:54.172584 13610 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0801 12:09:54.172621 13610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0801 12:09:54.222754 13610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37731 SSHKeyPath:/home/dphy/.minikube/machines/minikube/id_rsa Username:docker}
I0801 12:09:54.323228 13610 ssh_runner.go:362] scp /home/dphy/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I0801 12:09:54.363530 13610 ssh_runner.go:362] scp /home/dphy/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0801 12:09:54.402604 13610 ssh_runner.go:362] scp /home/dphy/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0801 12:09:54.435414 13610 provision.go:86] duration metric: configureAuth took 441.95275ms
I0801 12:09:54.435427 13610 ubuntu.go:193] setting minikube options for container-runtime
I0801 12:09:54.435565 13610 config.go:178] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
I0801 12:09:54.435603 13610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0801 12:09:54.483778 13610 main.go:134] libmachine: Using SSH client type: native
I0801 12:09:54.483871 13610 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7dae00] 0x7dde60 [] 0s} 127.0.0.1 37731 }
I0801 12:09:54.483878 13610 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0801 12:09:54.633081 13610 main.go:134] libmachine: SSH cmd err, output: : overlay
I0801 12:09:54.633099 13610 ubuntu.go:71] root file system type: overlay
I0801 12:09:54.633347 13610 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0801 12:09:54.633451 13610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0801 12:09:54.715187 13610 main.go:134] libmachine: Using SSH client type: native
I0801 12:09:54.715300 13610 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7dae00] 0x7dde60 [] 0s} 127.0.0.1 37731 }
I0801 12:09:54.715356 13610 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=localhost,127.0.0.0/8,::1"

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0801 12:09:54.886149 13610 main.go:134] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=localhost,127.0.0.0/8,::1

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0801 12:09:54.886235 13610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0801 12:09:54.951651 13610 main.go:134] libmachine: Using SSH client type: native
I0801 12:09:54.951739 13610 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7dae00] 0x7dde60 [] 0s} 127.0.0.1 37731 }
I0801 12:09:54.951751 13610 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0801 12:09:55.090841 13610 main.go:134] libmachine: SSH cmd err, output: :
I0801 12:09:55.090859 13610 machine.go:91] provisioned docker machine in 1.560600785s
I0801 12:09:55.090868 13610 start.go:306] post-start starting for "minikube" (driver="docker")
I0801 12:09:55.090875 13610 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0801 12:09:55.090961 13610 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0801 12:09:55.091022 13610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0801 12:09:55.168168 13610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37731 SSHKeyPath:/home/dphy/.minikube/machines/minikube/id_rsa Username:docker}
I0801 12:09:55.264674 13610 ssh_runner.go:195] Run: cat /etc/os-release
I0801 12:09:55.272318 13610 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0801 12:09:55.272337 13610 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0801 12:09:55.272351 13610 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0801 12:09:55.272358 13610 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0801 12:09:55.272368 13610 filesync.go:126] Scanning /home/dphy/.minikube/addons for local assets ...
I0801 12:09:55.272435 13610 filesync.go:126] Scanning /home/dphy/.minikube/files for local assets ...
I0801 12:09:55.272461 13610 start.go:309] post-start completed in 181.585285ms
I0801 12:09:55.272547 13610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0801 12:09:55.272596 13610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0801 12:09:55.340979 13610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37731 SSHKeyPath:/home/dphy/.minikube/machines/minikube/id_rsa Username:docker}
I0801 12:09:55.430302 13610 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0801 12:09:55.440909 13610 fix.go:57] fixHost completed within 2.33916542s
I0801 12:09:55.440924 13610 start.go:81] releasing machines lock for "minikube", held for 2.339204438s
I0801 12:09:55.441037 13610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0801 12:09:55.573716 13610 out.go:177] 🌐  Found network options:
I0801 12:09:55.615765 13610 out.go:177]   ▪ NO_PROXY=localhost,127.0.0.0/8,::1
W0801 12:09:55.657332 13610 proxy.go:118] fail to check proxy env: Error ip
not in block W0801 12:09:55.657385 13610 proxy.go:118] fail to check proxy env: Error ip not in block W0801 12:09:55.657401 13610 proxy.go:118] fail to check proxy env: Error ip not in block I0801 12:09:55.698925 13610 out.go:177] โ–ช no_proxy=localhost,127.0.0.0/8,::1 W0801 12:09:55.740877 13610 proxy.go:118] fail to check proxy env: Error ip not in block W0801 12:09:55.741019 13610 proxy.go:118] fail to check proxy env: Error ip not in block W0801 12:09:55.741042 13610 proxy.go:118] fail to check proxy env: Error ip not in block W0801 12:09:55.741079 13610 proxy.go:118] fail to check proxy env: Error ip not in block W0801 12:09:55.741096 13610 proxy.go:118] fail to check proxy env: Error ip not in block W0801 12:09:55.741110 13610 proxy.go:118] fail to check proxy env: Error ip not in block I0801 12:09:55.741229 13610 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0801 12:09:55.741244 13610 ssh_runner.go:195] Run: systemctl --version I0801 12:09:55.741303 13610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0801 12:09:55.741306 13610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0801 12:09:55.823275 13610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37731 SSHKeyPath:/home/dphy/.minikube/machines/minikube/id_rsa Username:docker} I0801 12:09:55.826065 13610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37731 SSHKeyPath:/home/dphy/.minikube/machines/minikube/id_rsa Username:docker} I0801 12:09:56.571592 13610 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0801 12:09:56.596566 13610 cruntime.go:273] skipping containerd shutdown because we are bound to it I0801 12:09:56.596643 13610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0801 12:09:56.618422 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) 
"runtime-endpoint: unix:///var/run/cri-dockerd.sock image-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml" I0801 12:09:56.648569 13610 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0801 12:09:56.775665 13610 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0801 12:09:56.868553 13610 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0801 12:09:56.959204 13610 ssh_runner.go:195] Run: sudo systemctl restart docker I0801 12:10:11.986275 13610 ssh_runner.go:235] Completed: sudo systemctl restart docker: (15.027035577s) I0801 12:10:11.986395 13610 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0801 12:10:12.077763 13610 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0801 12:10:12.148535 13610 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket I0801 12:10:12.157843 13610 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock I0801 12:10:12.157889 13610 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock I0801 12:10:12.161373 13610 start.go:468] Will wait 60s for crictl version I0801 12:10:12.161411 13610 ssh_runner.go:195] Run: sudo crictl version I0801 12:10:12.186586 13610 start.go:477] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 20.10.17 RuntimeApiVersion: 1.41.0 I0801 12:10:12.186656 13610 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0801 12:10:12.217096 13610 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0801 12:10:12.295292 13610 out.go:204] ๐Ÿณ Preparing Kubernetes v1.24.1 on Docker 20.10.17 ... 
I0801 12:10:12.336815 13610 out.go:177] ▪ env NO_PROXY=localhost,127.0.0.0/8,::1
I0801 12:10:12.378759 13610 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0801 12:10:12.460702 13610 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0801 12:10:12.464664 13610 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
I0801 12:10:12.464704 13610 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0801 12:10:12.493754 13610 docker.go:602] Got preloaded images:
-- stdout --
k8s.gcr.io/kube-apiserver:v1.24.1
k8s.gcr.io/kube-scheduler:v1.24.1
k8s.gcr.io/kube-controller-manager:v1.24.1
k8s.gcr.io/kube-proxy:v1.24.1
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0801 12:10:12.493770 13610 docker.go:533] Images already preloaded, skipping extraction
I0801 12:10:12.493808 13610 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0801 12:10:12.528853 13610 docker.go:602] Got preloaded images:
-- stdout --
k8s.gcr.io/kube-apiserver:v1.24.1
k8s.gcr.io/kube-proxy:v1.24.1
k8s.gcr.io/kube-controller-manager:v1.24.1
k8s.gcr.io/kube-scheduler:v1.24.1
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0801 12:10:12.528876 13610 cache_images.go:84] Images are preloaded, skipping loading
I0801 12:10:12.528921 13610 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0801 12:10:12.613641 13610 cni.go:95] Creating CNI manager for ""
I0801 12:10:12.613651 13610 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0801 12:10:12.613668 13610 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0801 12:10:12.613688 13610 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0801 12:10:12.613765 13610 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0801 12:10:12.613813 13610 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=minikube --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 --runtime-request-timeout=15m

[Install]
config: {KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0801 12:10:12.613856 13610 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
I0801 12:10:12.620947 13610 binaries.go:44] Found k8s binaries, skipping transfer
I0801 12:10:12.620989 13610 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0801 12:10:12.627863 13610 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (470 bytes)
I0801 12:10:12.639890 13610 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0801 12:10:12.652020 13610 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes)
I0801 12:10:12.663877 13610 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0801 12:10:12.667493 13610 certs.go:54] Setting up /home/dphy/.minikube/profiles/minikube for IP: 192.168.49.2
I0801 12:10:12.667566 13610 certs.go:182] skipping minikubeCA CA generation: /home/dphy/.minikube/ca.key
I0801 12:10:12.667589 13610 certs.go:182] skipping proxyClientCA CA generation: /home/dphy/.minikube/proxy-client-ca.key
I0801 12:10:12.667635 13610 certs.go:298] skipping minikube-user signed cert generation: /home/dphy/.minikube/profiles/minikube/client.key
I0801 12:10:12.667662 13610 certs.go:298] skipping minikube signed cert generation: /home/dphy/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0801 12:10:12.667682 13610 certs.go:298] skipping aggregator signed cert generation: /home/dphy/.minikube/profiles/minikube/proxy-client.key
I0801 12:10:12.667745 13610 certs.go:388] found cert: /home/dphy/.minikube/certs/home/dphy/.minikube/certs/ca-key.pem (1675 bytes)
I0801 12:10:12.667762 13610 certs.go:388] found cert: /home/dphy/.minikube/certs/home/dphy/.minikube/certs/ca.pem (1074 bytes)
I0801 12:10:12.667780 13610 certs.go:388] found cert: /home/dphy/.minikube/certs/home/dphy/.minikube/certs/cert.pem (1115 bytes)
I0801 12:10:12.667794 13610 certs.go:388] found cert: /home/dphy/.minikube/certs/home/dphy/.minikube/certs/key.pem (1679 bytes)
I0801 12:10:12.668144 13610 ssh_runner.go:362] scp /home/dphy/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0801 12:10:12.685517 13610 ssh_runner.go:362] scp /home/dphy/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0801 12:10:12.701485 13610 ssh_runner.go:362] scp /home/dphy/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0801 12:10:12.717826 13610 ssh_runner.go:362] scp /home/dphy/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0801 12:10:12.734644 13610 ssh_runner.go:362] scp /home/dphy/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0801 12:10:12.751804 13610 ssh_runner.go:362] scp /home/dphy/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0801 12:10:12.768123 13610 ssh_runner.go:362] scp /home/dphy/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0801 12:10:12.784262 13610 ssh_runner.go:362] scp /home/dphy/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0801 12:10:12.800608 13610 ssh_runner.go:362] scp /home/dphy/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0801 12:10:12.817904 13610 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0801 12:10:12.830246 13610 ssh_runner.go:195] Run: openssl version
I0801 12:10:12.834896 13610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0801 12:10:12.842099 13610 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0801 12:10:12.845742 13610 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug 1 04:21 /usr/share/ca-certificates/minikubeCA.pem
I0801 12:10:12.845766 13610 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0801 12:10:12.850440 13610 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0801 12:10:12.857098 13610 kubeadm.go:395] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:docker.io/kicbase/stable:v0.0.32 Memory:3760 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/dphy:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0801 12:10:12.857170 13610 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0801 12:10:12.882384 13610 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0801 12:10:12.889238 13610 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0801 12:10:12.889246 13610 kubeadm.go:626] restartCluster start
I0801 12:10:12.889280 13610 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0801 12:10:12.895708 13610 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:
I0801 12:10:12.896032 13610 kubeconfig.go:92] found "minikube" server: "https://192.168.49.2:8443"
I0801 12:10:12.896574 13610 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0801 12:10:12.902952 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:12.902990 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:12.910922 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:13.111378 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:13.111467 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:13.132875 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:13.311223 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:13.311354 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:13.326744 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:13.511005 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:13.511122 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:13.536043 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:13.711269 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:13.711374 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:13.735687 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:13.911023 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:13.911149 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:13.938520 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:14.111787 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:14.111908 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:14.126641 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:14.311052 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:14.311182 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:14.338211 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:14.511483 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:14.511609 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:14.536556 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:14.711866 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:14.711968 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:14.727132 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:14.911663 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:14.911777 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:14.926700 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:15.112034 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:15.112168 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:15.139101 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:15.311624 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:15.311752 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:15.326653 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:15.512041 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:15.512168 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:15.538176 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:15.710990 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:15.721276 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:15.736656 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:15.911709 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:15.911769 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:15.921529 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:15.921538 13610 api_server.go:165] Checking apiserver status ...
I0801 12:10:15.921580 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0801 12:10:15.931681 13610 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0801 12:10:15.931694 13610 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
I0801 12:10:15.931698 13610 kubeadm.go:1092] stopping kube-system containers ...
I0801 12:10:15.931749 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0801 12:10:15.961684 13610 docker.go:434] Stopping containers: [d34ef40886ea c72aaf4d122f 0886bb3d3899 a7755f2015c8 b951b5998cd1 667ccaea3e5c 667e92042879 e8792c88b249 8242dc5f1a05 0605bc754d2d ab29b7b5861e e79229c6fdc6 e532d8496e0a 126145f9e140 c97af5d8e11d 1df6f6450fe8 823fac3665a9 b8dd4a8f5ad6 dba3cb21ce26 a82600d38fd8]
I0801 12:10:15.961736 13610 ssh_runner.go:195] Run: docker stop d34ef40886ea c72aaf4d122f 0886bb3d3899 a7755f2015c8 b951b5998cd1 667ccaea3e5c 667e92042879 e8792c88b249 8242dc5f1a05 0605bc754d2d ab29b7b5861e e79229c6fdc6 e532d8496e0a 126145f9e140 c97af5d8e11d 1df6f6450fe8 823fac3665a9 b8dd4a8f5ad6 dba3cb21ce26 a82600d38fd8
I0801 12:10:15.987874 13610 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0801 12:10:16.023524 13610 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0801 12:10:16.030503 13610 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Aug 1 06:27 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Aug 1 06:27 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1971 Aug 1 06:27 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Aug 1 06:27 /etc/kubernetes/scheduler.conf

I0801 12:10:16.030549 13610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0801 12:10:16.037362 13610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0801 12:10:16.043885 13610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0801 12:10:16.050362 13610 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:

stderr:
I0801 12:10:16.050401 13610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0801 12:10:16.056997 13610 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0801 12:10:16.063684 13610 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:

stderr:
I0801 12:10:16.063725 13610 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0801 12:10:16.070161 13610 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0801 12:10:16.077098 13610 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0801 12:10:16.077106 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0801 12:10:16.115070 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0801 12:10:16.600573 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0801 12:10:16.754880 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0801 12:10:16.808859 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0801 12:10:16.853625 13610 api_server.go:51] waiting for apiserver process to appear ...
I0801 12:10:16.853663 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:17.364112 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:17.863740 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:18.364029 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:18.863945 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:19.363994 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:19.864134 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:20.363738 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:20.863614 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:21.364180 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:21.864261 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:22.363698 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:22.863804 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:23.364297 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:23.864459 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:24.363635 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:24.864360 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:25.364024 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:10:25.374781 13610 api_server.go:71] duration metric: took 8.521154614s to wait for apiserver process to appear ...
I0801 12:10:25.374792 13610 api_server.go:87] waiting for apiserver healthz status ...
I0801 12:10:25.374799 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:10:30.376603 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:10:30.876905 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:10:35.877630 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:10:36.376809 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:10:41.377623 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:10:41.377670 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:10:46.378914 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:10:46.877615 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:10:51.878832 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:10:52.377814 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:10:57.379129 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:10:57.877697 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:11:02.878762 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:11:03.376728 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:11:08.377613 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:11:08.377667 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:11:13.378706 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:11:13.877039 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:11:18.877532 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:11:19.377173 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:11:24.377760 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:11:24.377844 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:11:29.378887 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:11:29.877744 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:11:29.935189 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:11:29.935230 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:11:29.963748 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:11:29.963791 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:11:29.992292 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:11:29.992339 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:11:30.020325 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:11:30.020367 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:11:30.048918 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:11:30.048969 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:11:30.075589 13610 logs.go:274] 0 containers: []
W0801 12:11:30.075598 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:11:30.075635 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:11:30.105566 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:11:30.105644 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:11:30.134513 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:11:30.134529 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:11:30.134538 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:11:30.171350 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:11:30.171360 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:11:30.210951 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:11:30.210963 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:11:30.279010 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:11:30.279024 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:11:30.358042 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:11:30.358052 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:11:30.385908 13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:11:30.385918 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:11:30.422345 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:11:30.422355 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:11:30.457879 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:11:30.457891 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:11:30.485790 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:11:30.485801 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:11:30.514455 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:11:30.514464 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:11:30.543852 13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:11:30.543864 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:11:30.570137 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:11:30.570150 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:11:30.597150 13610 logs.go:123] Gathering logs for container status ...
I0801 12:11:30.597160 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:11:30.636245 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:11:30.636255 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:11:30.645392 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:11:30.645401 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:11:30.715449 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:11:30.717881 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:11:30.785152 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:11:30.785161 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:11:30.811820 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:11:30.811830 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:11:30.838943 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:11:30.838954 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:11:30.865295 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:11:30.865305 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:11:30.926070 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:11:30.926080 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:11:33.460552 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:11:38.461267 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:11:38.876826 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:11:38.905968 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:11:38.906025 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:11:38.931742 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:11:38.931788 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:11:38.959332 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:11:38.959377 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:11:38.985473 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:11:38.985544 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:11:39.012441 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:11:39.012496 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:11:39.037043 13610 logs.go:274] 0 containers: []
W0801 12:11:39.037052 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:11:39.037092 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:11:39.062277 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:11:39.062340 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:11:39.088144 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:11:39.088162 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:11:39.088168 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:11:39.116180 13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:11:39.116190 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:11:39.162748 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:11:39.162758 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:11:39.201986 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:11:39.201995 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:11:39.262242 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:11:39.262252 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:11:39.331233 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:11:39.331244 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:11:39.359737 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:11:39.359749 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:11:39.368508 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:11:39.368518 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:11:39.434636 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:11:39.434647 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:11:39.465195 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:11:39.465206 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:11:39.492726 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:11:39.492736 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:11:39.528643 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:11:39.528652 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:11:39.557381 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:11:39.557391 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:11:39.586203 13610 logs.go:123] Gathering logs for container status ...
I0801 12:11:39.586214 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:11:39.612094 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:11:39.612106 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:11:39.678524 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:11:39.678533 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:11:39.705159 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:11:39.705185 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:11:39.731190 13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:11:39.731200 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:11:39.767019 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:11:39.767031 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:11:39.805267 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:11:39.805277 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:11:39.838054 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:11:39.838064 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:11:42.406911 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:11:47.407884 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:11:47.877849 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:11:47.939121 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:11:47.939176 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:11:47.965190 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:11:47.965254 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:11:47.992052 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:11:47.992096 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:11:48.018973 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:11:48.019019 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:11:48.043860 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:11:48.043907 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:11:48.068981 13610 logs.go:274] 0 containers: []
W0801 12:11:48.068989 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:11:48.069024 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:11:48.098604 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:11:48.098650 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:11:48.124427 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:11:48.124443 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:11:48.124449 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:11:48.162884 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:11:48.162893 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:11:48.192873 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:11:48.192882 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:11:48.251770 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:11:48.251780 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:11:48.260789 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:11:48.260802 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:11:48.325720 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:11:48.325730 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:11:48.407096 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:11:48.407106 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:11:48.479984 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:11:48.479994 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:11:48.508816 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:11:48.508826 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:11:48.540221 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:11:48.540231 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:11:48.577534 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:11:48.577544 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:11:48.641173 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:11:48.641183 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:11:48.668264 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:11:48.668277 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:11:48.703765 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:11:48.703777 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:11:48.730970 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:11:48.730981 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:11:48.756626 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:11:48.756636 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:11:48.782899 13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:11:48.782926 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:11:48.810154 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:11:48.810165 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:11:48.837915 13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:11:48.837928 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:11:48.874322 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:11:48.874333 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:11:48.911620 13610 logs.go:123] Gathering logs for container status ...
I0801 12:11:48.911630 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:11:51.437310 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:11:56.438273 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:11:56.876951 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:11:56.934825 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:11:56.934877 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:11:56.961483 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:11:56.961530 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:11:56.989521 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:11:56.989569 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:11:57.017692 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:11:57.017779 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:11:57.043300 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:11:57.043343 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:11:57.068592 13610 logs.go:274] 0 containers: []
W0801 12:11:57.068606 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:11:57.068650 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:11:57.093864 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:11:57.093908 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:11:57.120175 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:11:57.120190 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:11:57.120195 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:11:57.148247 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:11:57.148256 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:11:57.181243 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:11:57.181253 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:11:57.219178 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:11:57.219188 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:11:57.283165 13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:11:57.283185 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:11:57.310921 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:11:57.310935 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:11:57.351098 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:11:57.351108 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:11:57.416765 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:11:57.416776 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:11:57.505360 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:11:57.505375 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:11:57.534377 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:11:57.534394 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:11:57.571717 13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:11:57.571728 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:11:57.610085 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:11:57.610094 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:11:57.647651 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:11:57.647663 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:11:57.715402 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:11:57.715412 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:11:57.785991 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:11:57.786001 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:11:57.813565 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:11:57.813580 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:11:57.841721 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:11:57.841729 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:11:57.875011 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:11:57.875021 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:11:57.902946 13610 logs.go:123] Gathering logs for container status ...
I0801 12:11:57.902956 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:11:57.942686 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:11:57.942699 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:11:57.952265 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:11:57.952275 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:12:00.485147 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:12:05.485428 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:12:05.877812 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:12:05.934529 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:12:05.934610 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:12:05.961485 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:12:05.961563 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:12:05.988280 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:12:05.988324 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:12:06.014493 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:12:06.014541 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:12:06.041575 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:12:06.041621 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:12:06.067504 13610 logs.go:274] 0 containers: []
W0801 12:12:06.067512 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:12:06.067548 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:12:06.094544 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:12:06.094592 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:12:06.121055 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:12:06.121088 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:12:06.121096 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:12:06.185233 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:12:06.185243 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:12:06.212746 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:12:06.212771 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:12:06.252230 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:12:06.252241 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:12:06.278839 13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:12:06.278849 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:12:06.321696 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:12:06.321705 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:12:06.385049 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:12:06.385058 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:12:06.452765 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:12:06.452774 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:12:06.522793 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:12:06.522802 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d" I0801 12:12:06.559757 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ... I0801 12:12:06.559767 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249" I0801 12:12:06.589159 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ... I0801 12:12:06.589170 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f" I0801 12:12:06.617918 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ... I0801 12:12:06.617928 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed" I0801 12:12:06.645932 13610 logs.go:123] Gathering logs for kubelet ... I0801 12:12:06.645944 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0801 12:12:06.704571 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ... I0801 12:12:06.704581 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf" I0801 12:12:06.732762 13610 logs.go:123] Gathering logs for coredns [667e92042879] ... I0801 12:12:06.732773 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879" I0801 12:12:06.761439 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ... I0801 12:12:06.761449 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f" I0801 12:12:06.790201 13610 logs.go:123] Gathering logs for Docker ... I0801 12:12:06.790212 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0801 12:12:06.829629 13610 logs.go:123] Gathering logs for container status ... I0801 12:12:06.829640 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0801 12:12:06.855855 13610 logs.go:123] Gathering logs for dmesg ... 
I0801 12:12:06.855867 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0801 12:12:06.865030 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ... I0801 12:12:06.865043 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01" I0801 12:12:06.897900 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ... I0801 12:12:06.897911 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a" I0801 12:12:09.427340 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0801 12:12:14.428749 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0801 12:12:14.877773 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0801 12:12:14.936034 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8] I0801 12:12:14.936081 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0801 12:12:14.962333 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a] I0801 12:12:14.962375 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0801 12:12:14.988028 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249] I0801 12:12:14.988072 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0801 12:12:15.016423 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140] I0801 12:12:15.016480 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0801 12:12:15.042209 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f] I0801 12:12:15.042265 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0801 12:12:15.067721 13610 
logs.go:274] 0 containers: [] W0801 12:12:15.067730 13610 logs.go:276] No container was found matching "kubernetes-dashboard" I0801 12:12:15.067762 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0801 12:12:15.093795 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea] I0801 12:12:15.093854 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0801 12:12:15.119390 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d] I0801 12:12:15.119407 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ... I0801 12:12:15.119412 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea" I0801 12:12:15.147712 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ... I0801 12:12:15.147723 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf" I0801 12:12:15.184230 13610 logs.go:123] Gathering logs for coredns [667e92042879] ... I0801 12:12:15.184241 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879" I0801 12:12:15.217293 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ... I0801 12:12:15.217306 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8" I0801 12:12:15.287164 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ... I0801 12:12:15.287174 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a" I0801 12:12:15.352856 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ... I0801 12:12:15.352866 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495" I0801 12:12:15.380439 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ... I0801 12:12:15.380449 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249" I0801 12:12:15.408556 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ... 
I0801 12:12:15.408568 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a" I0801 12:12:15.437759 13610 logs.go:123] Gathering logs for dmesg ... I0801 12:12:15.437769 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0801 12:12:15.447054 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ... I0801 12:12:15.447066 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01" I0801 12:12:15.482576 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ... I0801 12:12:15.482586 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f" I0801 12:12:15.509756 13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ... I0801 12:12:15.509770 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874" I0801 12:12:15.546926 13610 logs.go:123] Gathering logs for container status ... I0801 12:12:15.546935 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0801 12:12:15.572220 13610 logs.go:123] Gathering logs for kubelet ... I0801 12:12:15.572232 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0801 12:12:15.633391 13610 logs.go:123] Gathering logs for describe nodes ... I0801 12:12:15.633402 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0801 12:12:15.697712 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ... I0801 12:12:15.722267 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140" I0801 12:12:15.781648 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ... 
I0801 12:12:15.781657 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed" I0801 12:12:15.809546 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ... I0801 12:12:15.809556 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d" I0801 12:12:15.845397 13610 logs.go:123] Gathering logs for Docker ... I0801 12:12:15.845408 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0801 12:12:15.883335 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ... I0801 12:12:15.883345 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689" I0801 12:12:15.957414 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ... I0801 12:12:15.957423 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f" I0801 12:12:18.488040 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0801 12:12:23.489310 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0801 12:12:23.877132 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0801 12:12:23.934814 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8] I0801 12:12:23.934863 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0801 12:12:23.962496 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a] I0801 12:12:23.962543 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0801 12:12:23.989333 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249] I0801 12:12:23.989384 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0801 12:12:24.016011 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140] I0801 
12:12:24.016071 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0801 12:12:24.041226 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f] I0801 12:12:24.041276 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0801 12:12:24.066560 13610 logs.go:274] 0 containers: [] W0801 12:12:24.066569 13610 logs.go:276] No container was found matching "kubernetes-dashboard" I0801 12:12:24.066602 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0801 12:12:24.095007 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea] I0801 12:12:24.095059 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0801 12:12:24.121178 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d] I0801 12:12:24.121194 13610 logs.go:123] Gathering logs for describe nodes ... I0801 12:12:24.121200 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0801 12:12:24.185273 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ... I0801 12:12:24.185283 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a" I0801 12:12:24.251013 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ... I0801 12:12:24.251023 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140" I0801 12:12:24.289828 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ... I0801 12:12:24.289837 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed" I0801 12:12:24.319832 13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ... I0801 12:12:24.319841 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874" I0801 12:12:24.356240 13610 logs.go:123] Gathering logs for Docker ... 
I0801 12:12:24.356249 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0801 12:12:24.394753 13610 logs.go:123] Gathering logs for kubelet ... I0801 12:12:24.394763 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0801 12:12:24.454145 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ... I0801 12:12:24.454156 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01" I0801 12:12:24.489031 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ... I0801 12:12:24.489043 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8" I0801 12:12:24.555635 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ... I0801 12:12:24.555647 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689" I0801 12:12:24.628018 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ... I0801 12:12:24.628043 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf" I0801 12:12:24.655505 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ... I0801 12:12:24.655515 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249" I0801 12:12:24.682967 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ... I0801 12:12:24.682979 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d" I0801 12:12:24.724853 13610 logs.go:123] Gathering logs for coredns [667e92042879] ... I0801 12:12:24.724863 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879" I0801 12:12:24.752608 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ... I0801 12:12:24.752625 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f" I0801 12:12:24.782103 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ... 
I0801 12:12:24.782113 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a" I0801 12:12:24.812098 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ... I0801 12:12:24.812108 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f" I0801 12:12:24.840439 13610 logs.go:123] Gathering logs for container status ... I0801 12:12:24.840449 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0801 12:12:24.869154 13610 logs.go:123] Gathering logs for dmesg ... I0801 12:12:24.869164 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0801 12:12:24.877978 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ... I0801 12:12:24.877990 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495" I0801 12:12:24.905515 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ... I0801 12:12:24.905527 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea" I0801 12:12:27.435336 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0801 12:12:32.435840 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0801 12:12:32.877792 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0801 12:12:32.929495 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8] I0801 12:12:32.929544 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0801 12:12:32.954475 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a] I0801 12:12:32.954523 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0801 12:12:32.980294 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249] I0801 12:12:32.980340 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0801 12:12:33.010815 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140] I0801 12:12:33.010859 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0801 12:12:33.037877 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f] I0801 12:12:33.037924 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0801 12:12:33.062975 13610 logs.go:274] 0 containers: [] W0801 12:12:33.062995 13610 logs.go:276] No container was found matching "kubernetes-dashboard" I0801 12:12:33.063042 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0801 12:12:33.089492 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea] I0801 12:12:33.089539 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0801 12:12:33.114831 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d] I0801 12:12:33.114845 13610 logs.go:123] Gathering logs for dmesg ... 
I0801 12:12:33.114851 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0801 12:12:33.124024 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ... I0801 12:12:33.124036 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f" I0801 12:12:33.151337 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ... I0801 12:12:33.151348 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d" I0801 12:12:33.188363 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ... I0801 12:12:33.188373 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249" I0801 12:12:33.215737 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ... I0801 12:12:33.215750 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140" I0801 12:12:33.251641 13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ... I0801 12:12:33.251650 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874" I0801 12:12:33.287862 13610 logs.go:123] Gathering logs for Docker ... I0801 12:12:33.287894 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0801 12:12:33.324966 13610 logs.go:123] Gathering logs for kubelet ... I0801 12:12:33.324976 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0801 12:12:33.382933 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ... I0801 12:12:33.382943 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8" I0801 12:12:33.448954 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ... I0801 12:12:33.448963 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689" I0801 12:12:33.526300 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ... 
I0801 12:12:33.526313 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a" I0801 12:12:33.595290 13610 logs.go:123] Gathering logs for describe nodes ... I0801 12:12:33.595300 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0801 12:12:33.660420 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ... I0801 12:12:33.660430 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f" I0801 12:12:33.688167 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ... I0801 12:12:33.688176 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea" I0801 12:12:33.717440 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ... I0801 12:12:33.717450 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a" I0801 12:12:33.745673 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ... I0801 12:12:33.745682 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed" I0801 12:12:33.772636 13610 logs.go:123] Gathering logs for container status ... I0801 12:12:33.772646 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0801 12:12:33.801046 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ... I0801 12:12:33.801056 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01" I0801 12:12:33.834538 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ... I0801 12:12:33.834547 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf" I0801 12:12:33.861297 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ... 
I0801 12:12:33.861308 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495" I0801 12:12:33.888188 13610 logs.go:123] Gathering logs for coredns [667e92042879] ... I0801 12:12:33.888199 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879" I0801 12:12:36.415913 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0801 12:12:41.417152 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0801 12:12:41.876866 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0801 12:12:41.909867 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8] I0801 12:12:41.909907 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0801 12:12:41.937455 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a] I0801 12:12:41.937495 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0801 12:12:41.965434 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249] I0801 12:12:41.965486 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0801 12:12:41.993340 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140] I0801 12:12:41.993410 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0801 12:12:42.018551 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f] I0801 12:12:42.018598 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0801 12:12:42.045650 13610 logs.go:274] 0 containers: [] W0801 12:12:42.045661 13610 logs.go:276] No container was found matching "kubernetes-dashboard" I0801 12:12:42.045710 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner 
--format={{.ID}} I0801 12:12:42.075597 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea] I0801 12:12:42.075640 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0801 12:12:42.106002 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d] I0801 12:12:42.106016 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ... I0801 12:12:42.106022 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed" I0801 12:12:42.134419 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ... I0801 12:12:42.134429 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea" I0801 12:12:42.164946 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ... I0801 12:12:42.164957 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d" I0801 12:12:42.204249 13610 logs.go:123] Gathering logs for dmesg ... I0801 12:12:42.204259 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0801 12:12:42.214103 13610 logs.go:123] Gathering logs for coredns [667e92042879] ... I0801 12:12:42.214115 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879" I0801 12:12:42.242280 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ... I0801 12:12:42.242291 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f" I0801 12:12:42.270418 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ... I0801 12:12:42.270427 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a" I0801 12:12:42.336321 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ... I0801 12:12:42.336333 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495" I0801 12:12:42.363709 13610 logs.go:123] Gathering logs for describe nodes ... 
I0801 12:12:42.363719 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0801 12:12:42.441312 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ... I0801 12:12:42.441324 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01" I0801 12:12:42.486923 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ... I0801 12:12:42.486935 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689" I0801 12:12:42.566940 13610 logs.go:123] Gathering logs for container status ... I0801 12:12:42.566951 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0801 12:12:42.596643 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ... I0801 12:12:42.596654 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249" I0801 12:12:42.626070 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ... I0801 12:12:42.626083 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a" I0801 12:12:42.654140 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ... I0801 12:12:42.654149 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f" I0801 12:12:42.683288 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ... I0801 12:12:42.683301 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140" I0801 12:12:42.719854 13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ... I0801 12:12:42.719863 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874" I0801 12:12:42.755183 13610 logs.go:123] Gathering logs for Docker ... I0801 12:12:42.755193 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0801 12:12:42.792187 13610 logs.go:123] Gathering logs for kubelet ... 
I0801 12:12:42.792196 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0801 12:12:42.856927 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ... I0801 12:12:42.856938 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8" I0801 12:12:42.927084 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ... I0801 12:12:42.927094 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf" I0801 12:12:45.455092 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0801 12:12:50.455835 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0801 12:12:50.877356 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0801 12:12:50.931471 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8] I0801 12:12:50.931513 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0801 12:12:50.957123 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a] I0801 12:12:50.957164 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0801 12:12:50.985611 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249] I0801 12:12:50.985655 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0801 12:12:51.012322 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140] I0801 12:12:51.012369 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0801 12:12:51.037658 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f] I0801 12:12:51.037702 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0801 12:12:51.062768 13610 logs.go:274] 0 containers: [] W0801 
12:12:51.062777 13610 logs.go:276] No container was found matching "kubernetes-dashboard" I0801 12:12:51.062822 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0801 12:12:51.088453 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea] I0801 12:12:51.088520 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0801 12:12:51.115130 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d] I0801 12:12:51.115160 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ... I0801 12:12:51.115179 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01" I0801 12:12:51.147278 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ... I0801 12:12:51.147288 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf" I0801 12:12:51.174525 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ... I0801 12:12:51.174537 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed" I0801 12:12:51.207791 13610 logs.go:123] Gathering logs for container status ... I0801 12:12:51.207800 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0801 12:12:51.237197 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ... I0801 12:12:51.237210 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a" I0801 12:12:51.267864 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ... I0801 12:12:51.267874 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea" I0801 12:12:51.296735 13610 logs.go:123] Gathering logs for Docker ... I0801 12:12:51.296744 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0801 12:12:51.332501 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ... 
I0801 12:12:51.332512 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689" I0801 12:12:51.404341 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ... I0801 12:12:51.404351 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495" I0801 12:12:51.432375 13610 logs.go:123] Gathering logs for coredns [667e92042879] ... I0801 12:12:51.432385 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879" I0801 12:12:51.460061 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ... I0801 12:12:51.460075 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249" I0801 12:12:51.486623 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ... I0801 12:12:51.486635 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140" I0801 12:12:51.522862 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ... I0801 12:12:51.522871 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f" I0801 12:12:51.555663 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ... I0801 12:12:51.555673 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d" I0801 12:12:51.593990 13610 logs.go:123] Gathering logs for kubelet ... I0801 12:12:51.593999 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0801 12:12:51.653192 13610 logs.go:123] Gathering logs for dmesg ... I0801 12:12:51.653202 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0801 12:12:51.662497 13610 logs.go:123] Gathering logs for describe nodes ... I0801 12:12:51.662511 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0801 12:12:51.726931 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ... 
I0801 12:12:51.726943   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:12:51.793070   13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:12:51.793080   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:12:51.859533   13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:12:51.859545   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:12:51.887537   13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:12:51.887547   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:12:54.424994   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:12:59.425445   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:12:59.876870   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:12:59.905773   13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:12:59.905829   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:12:59.934699   13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:12:59.934740   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:12:59.962733   13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:12:59.962779   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:12:59.988340   13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:12:59.988392   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:13:00.016386   13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:13:00.016434   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:13:00.042508   13610 logs.go:274] 0 containers: []
W0801 12:13:00.042517   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:13:00.042550   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:13:00.068869   13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:13:00.068925   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:13:00.098960   13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:13:00.098980   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:13:00.098989   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:13:00.164749   13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:13:00.164759   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:13:00.192844   13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:13:00.192854   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:13:00.229157   13610 logs.go:123] Gathering logs for Docker ...
I0801 12:13:00.229168   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:13:00.267077   13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:13:00.267088   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:13:00.310135   13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:13:00.310146   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:13:00.382707   13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:13:00.382717   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:13:00.411297   13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:13:00.411310   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:13:00.441278   13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:13:00.441288   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:13:00.468110   13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:13:00.468120   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:13:00.526770   13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:13:00.526780   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:13:00.554558   13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:13:00.554572   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:13:00.585165   13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:13:00.585175   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:13:00.611682   13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:13:00.611694   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:13:00.650639   13610 logs.go:123] Gathering logs for container status ...
I0801 12:13:00.650648   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:13:00.682332   13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:13:00.722965   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:13:00.747944   13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:13:00.747961   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:13:00.835125   13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:13:00.835135   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:13:00.900431   13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:13:00.900440   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:13:00.927852   13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:13:00.927862   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:13:00.955244   13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:13:00.955255   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:13:03.493978   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:13:08.494956   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:13:08.877894   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:13:08.935687   13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:13:08.935731   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:13:08.962829   13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:13:08.962877   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:13:08.989871   13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:13:08.989954   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:13:09.018548   13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:13:09.018594   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:13:09.045829   13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:13:09.045884   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:13:09.071109   13610 logs.go:274] 0 containers: []
W0801 12:13:09.071121   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:13:09.071162   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:13:09.097014   13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:13:09.097056   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:13:09.124258   13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:13:09.124276   13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:13:09.124282   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:13:09.185560   13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:13:09.185569   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:13:09.224469   13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:13:09.224480   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:13:09.296757   13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:13:09.296771   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:13:09.369660   13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:13:09.369671   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:13:09.399229   13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:13:09.399239   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:13:09.426829   13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:13:09.426841   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:13:09.458082   13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:13:09.458093   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:13:09.496913   13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:13:09.496924   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:13:09.580094   13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:13:09.580121   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:13:09.618475   13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:13:09.618486   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:13:09.647638   13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:13:09.647651   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:13:09.677557   13610 logs.go:123] Gathering logs for container status ...
I0801 12:13:09.677567   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:13:09.704187   13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:13:09.704201   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:13:09.713595   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:13:09.713609   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:13:09.785537   13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:13:09.785547   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:13:09.815780   13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:13:09.815790   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:13:09.854820   13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:13:09.854829   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:13:09.884976   13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:13:09.884985   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:13:09.913109   13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:13:09.913119   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:13:09.940592   13610 logs.go:123] Gathering logs for Docker ...
I0801 12:13:09.940603   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:13:12.481123   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:13:17.481918   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:13:17.877652   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:13:17.938386   13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:13:17.938458   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:13:17.964582   13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:13:17.964630   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:13:17.992416   13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:13:17.992476   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:13:18.019623   13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:13:18.019682   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:13:18.046353   13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:13:18.046400   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:13:18.073409   13610 logs.go:274] 0 containers: []
W0801 12:13:18.073432   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:13:18.073478   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:13:18.101099   13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:13:18.101165   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:13:18.129735   13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:13:18.129752   13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:13:18.129758   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:13:18.160590   13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:13:18.160599   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:13:18.189332   13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:13:18.189346   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:13:18.198971   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:13:18.198984   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:13:18.264113   13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:13:18.264122   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:13:18.336624   13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:13:18.336634   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:13:18.366081   13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:13:18.366110   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:13:18.395259   13610 logs.go:123] Gathering logs for container status ...
I0801 12:13:18.395269   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:13:18.421290   13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:13:18.421299   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:13:18.487485   13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:13:18.487497   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:13:18.516032   13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:13:18.516045   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:13:18.560186   13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:13:18.560197   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:13:18.591681   13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:13:18.591706   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:13:18.619883   13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:13:18.619896   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:13:18.648131   13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:13:18.648140   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:13:18.684223   13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:13:18.684232   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:13:18.721400   13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:13:18.721410   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:13:18.784774   13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:13:18.784785   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:13:18.819383   13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:13:18.819394   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:13:18.886307   13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:13:18.886318   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:13:18.913528   13610 logs.go:123] Gathering logs for Docker ...
I0801 12:13:18.913542   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:13:21.450573   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:13:26.451099   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:13:26.877815   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:13:26.947325   13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:13:26.947380   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:13:27.031424   13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:13:27.031465   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:13:27.057463   13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:13:27.057509   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:13:27.083482   13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:13:27.083525   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:13:27.109701   13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:13:27.109756   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:13:27.134862   13610 logs.go:274] 0 containers: []
W0801 12:13:27.134884   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:13:27.134953   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:13:27.166855   13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:13:27.166910   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:13:27.192485   13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:13:27.192504   13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:13:27.192512   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:13:27.220077   13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:13:27.220088   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:13:27.248576   13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:13:27.248586   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:13:27.284385   13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:13:27.284395   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:13:27.297499   13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:13:27.297510   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:13:27.365980   13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:13:27.365991   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:13:27.451464   13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:13:27.451476   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:13:27.490004   13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:13:27.490016   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:13:27.530728   13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:13:27.530738   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:13:27.581911   13610 logs.go:123] Gathering logs for container status ...
I0801 12:13:27.581923   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:13:27.619967   13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:13:27.619977   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:13:27.682112   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:13:27.682123   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:13:27.747366   13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:13:27.747375   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:13:27.781202   13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:13:27.781212   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:13:27.811853   13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:13:27.811864   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:13:27.840570   13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:13:27.840580   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:13:27.870938   13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:13:27.870947   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:13:27.900706   13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:13:27.900720   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:13:27.985499   13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:13:27.985511   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:13:28.016583   13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:13:28.016597   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:13:28.054421   13610 logs.go:123] Gathering logs for Docker ...
I0801 12:13:28.054430   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:13:30.594114   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:13:35.594396   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:13:35.877535   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:13:35.937222   13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:13:35.937285   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:13:35.963944   13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:13:35.963985   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:13:35.991809   13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:13:35.991853   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:13:36.017552   13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:13:36.017600   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:13:36.043557   13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:13:36.043604   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:13:36.071671   13610 logs.go:274] 0 containers: []
W0801 12:13:36.071681   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:13:36.071716   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:13:36.097570   13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:13:36.097618   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:13:36.123568   13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:13:36.123588   13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:13:36.123593   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:13:36.132655   13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:13:36.132666   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:13:36.203403   13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:13:36.203412   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:13:36.235851   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:13:36.235861   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:13:36.313324   13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:13:36.313335   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:13:36.342305   13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:13:36.342314   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:13:36.380439   13610 logs.go:123] Gathering logs for container status ...
I0801 12:13:36.380449   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:13:36.409290   13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:13:36.409300   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:13:36.488174   13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:13:36.488185   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:13:36.556172   13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:13:36.556186   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:13:36.588004   13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:13:36.588020   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:13:36.620295   13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:13:36.620304   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:13:36.663759   13610 logs.go:123] Gathering logs for Docker ...
I0801 12:13:36.663770   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:13:36.703921   13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:13:36.703932   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:13:36.734067   13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:13:36.734077   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:13:36.798576   13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:13:36.798587   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:13:36.834505   13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:13:36.834516   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:13:36.863975   13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:13:36.863986   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:13:36.892020   13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:13:36.892033   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:13:36.923021   13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:13:36.923031   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:13:36.952098   13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:13:36.952107   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:13:39.524923   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:13:44.525877   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:13:44.877324   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:13:44.920968   13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:13:44.921053   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:13:44.947351   13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:13:44.947396   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:13:44.975519   13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:13:44.975575   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:13:45.005404   13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:13:45.005453   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:13:45.033463   13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:13:45.033516   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:13:45.063188   13610 logs.go:274] 0 containers: []
W0801 12:13:45.063203   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:13:45.063245   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:13:45.094552   13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:13:45.094609   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:13:45.121353   13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:13:45.121374   13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:13:45.121424   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:13:45.149649   13610 logs.go:123] Gathering logs for container status ...
I0801 12:13:45.149660   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:13:45.177946   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:13:45.177956   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:13:45.242975   13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:13:45.242986   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:13:45.285179   13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:13:45.285190   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:13:45.315572   13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:13:45.315584   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:13:45.353080   13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:13:45.353090   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:13:45.394017   13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:13:45.394029 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:13:45.403376 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:13:45.403389 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:13:45.439537 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:13:45.439546 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:13:45.469683 13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:13:45.469693 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:13:45.501478 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:13:45.501490 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:13:45.532685 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:13:45.532697 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:13:45.571919 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:13:45.571932 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:13:45.638405 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:13:45.638416 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:13:45.723751 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:13:45.850988 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:13:45.906586 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:13:45.906599 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:13:45.938617 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:13:45.938631 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:13:45.972393 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:13:45.972408 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:13:46.051509 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:13:46.051520 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:13:46.129109 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:13:46.129121 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:13:48.659756 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:13:53.660191 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:13:53.877941 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:13:53.938296 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:13:53.938342 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:13:53.966182 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:13:53.966228 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:13:53.993769 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:13:53.993818 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:13:54.019638 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:13:54.019700 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:13:54.045253 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:13:54.045314 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:13:54.070957 13610 logs.go:274] 0 containers: []
W0801 12:13:54.070979 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:13:54.071014 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:13:54.097199 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:13:54.097237 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:13:54.123467 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:13:54.123482 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:13:54.123487 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:13:54.188613 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:13:54.188623 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:13:54.226812 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:13:54.226823 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:13:54.254188 13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:13:54.254198 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:13:54.289737 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:13:54.289746 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:13:54.328145 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:13:54.328155 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:13:54.391199 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:13:54.391209 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:13:54.471771 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:13:54.471783 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:13:54.509112 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:13:54.509121 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:13:54.537136 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:13:54.537147 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:13:54.563767 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:13:54.563778 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:13:54.638212 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:13:54.638221 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:13:54.665099 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:13:54.665111 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:13:54.691412 13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:13:54.691423 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:13:54.718889 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:13:54.718900 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:13:54.748122 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:13:54.748131 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:13:54.776276 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:13:54.776285 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:13:54.836956 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:13:54.836966 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:13:54.881352 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:13:54.881362 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:13:54.911353 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:13:54.911362 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:13:54.921259 13610 logs.go:123] Gathering logs for container status ...
I0801 12:13:54.921270 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:13:57.452003 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:14:02.452985 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:14:02.877727 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:14:02.929545 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:14:02.929594 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:14:02.956202 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:14:02.956244 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:14:02.985803 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:14:02.985850 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:14:03.013363 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:14:03.013405 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:14:03.039527 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:14:03.039580 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:14:03.065585 13610 logs.go:274] 0 containers: []
W0801 12:14:03.065607 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:14:03.065640 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:14:03.096969 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:14:03.097015 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:14:03.123628 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:14:03.123647 13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:14:03.123654 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:14:03.161108 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:14:03.161118 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:14:03.202282 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:14:03.202292 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:14:03.285495 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:14:03.285506 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:14:03.365497 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:14:03.365519 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:14:03.420054 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:14:03.420064 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:14:03.429158 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:14:03.429169 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:14:03.505511 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:14:03.505522 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:14:03.539615 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:14:03.539628 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:14:03.568191 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:14:03.568201 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:14:03.598164 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:14:03.598178 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:14:03.636564 13610 logs.go:123] Gathering logs for container status ...
I0801 12:14:03.636574 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:14:03.663789 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:14:03.663799 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:14:03.864600 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:14:03.864621 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:14:03.905987 13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:14:03.905998 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:14:03.935509 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:14:03.935522 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:14:03.967007 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:14:03.967017 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:14:04.014199 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:14:04.014209 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:14:04.043153 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:14:04.043164 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:14:04.072694 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:14:04.072706 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:14:04.111887 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:14:04.111897 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:14:06.676560 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:14:11.677114 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:14:11.877645 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:14:11.934594 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:14:11.934649 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:14:11.961734 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:14:11.961790 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:14:11.989506 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:14:11.989559 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:14:12.016873 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:14:12.016950 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:14:12.043433 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:14:12.043486 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:14:12.072695 13610 logs.go:274] 0 containers: []
W0801 12:14:12.072704 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:14:12.072736 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:14:12.101761 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:14:12.101814 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:14:12.130551 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:14:12.130569 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:14:12.130574 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:14:12.167486 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:14:12.167496 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:14:12.210881 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:14:12.210895 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:14:12.239985 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:14:12.239995 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:14:12.279244 13610 logs.go:123] Gathering logs for container status ...
I0801 12:14:12.279255 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:14:12.305205 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:14:12.305215 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:14:12.343449 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:14:12.343459 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:14:12.352834 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:14:12.352847 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:14:12.425024 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:14:12.425039 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:14:12.506739 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:14:12.506750 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:14:12.539833 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:14:12.539843 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:14:12.574801 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:14:12.574811 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:14:12.650678 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:14:12.650689 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:14:12.718774 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:14:12.718787 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:14:12.747183 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:14:12.747197 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:14:12.774960 13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:14:12.774977 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:14:12.812147 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:14:12.812158 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:14:12.876519 13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:14:12.876531 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:14:12.905288 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:14:12.905298 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:14:12.937483 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:14:12.937493 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:14:12.967026 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:14:12.967038 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:14:15.495857 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:14:20.496047 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:14:20.877493 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:14:20.937125 13610 logs.go:274] 2 containers: [6fb747a9eb01 1df6f6450fe8]
I0801 12:14:20.937194 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:14:20.962728 13610 logs.go:274] 2 containers: [490f7678f689 e532d8496e0a]
I0801 12:14:20.962773 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:14:20.989581 13610 logs.go:274] 4 containers: [c191559d0caf 1e9042711495 667e92042879 e8792c88b249]
I0801 12:14:20.989634 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:14:21.017476 13610 logs.go:274] 2 containers: [88210294c08f 126145f9e140]
I0801 12:14:21.017532 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:14:21.043825 13610 logs.go:274] 2 containers: [795b2fd54b9a c72aaf4d122f]
I0801 12:14:21.043877 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:14:21.069126 13610 logs.go:274] 0 containers: []
W0801 12:14:21.069135 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:14:21.069214 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:14:21.095586 13610 logs.go:274] 2 containers: [7a0b604c74ed d34ef40886ea]
I0801 12:14:21.095631 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:14:21.123663 13610 logs.go:274] 2 containers: [6b0c7e1f8874 c97af5d8e11d]
I0801 12:14:21.123678 13610 logs.go:123] Gathering logs for kube-apiserver [6fb747a9eb01] ...
I0801 12:14:21.123684 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fb747a9eb01"
I0801 12:14:21.162511 13610 logs.go:123] Gathering logs for coredns [1e9042711495] ...
I0801 12:14:21.162520 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e9042711495"
I0801 12:14:21.190780 13610 logs.go:123] Gathering logs for coredns [e8792c88b249] ...
I0801 12:14:21.190790 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8792c88b249"
I0801 12:14:21.220684 13610 logs.go:123] Gathering logs for kube-proxy [c72aaf4d122f] ...
I0801 12:14:21.220695 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c72aaf4d122f"
I0801 12:14:21.247939 13610 logs.go:123] Gathering logs for coredns [667e92042879] ...
I0801 12:14:21.247949 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 667e92042879"
I0801 12:14:21.277462 13610 logs.go:123] Gathering logs for kube-scheduler [126145f9e140] ...
I0801 12:14:21.277475 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126145f9e140"
I0801 12:14:21.316544 13610 logs.go:123] Gathering logs for kube-proxy [795b2fd54b9a] ...
I0801 12:14:21.316556 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 795b2fd54b9a"
I0801 12:14:21.344403 13610 logs.go:123] Gathering logs for kube-controller-manager [6b0c7e1f8874] ...
I0801 12:14:21.344412 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b0c7e1f8874"
I0801 12:14:21.381341 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:14:21.381352 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:14:21.445186 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:14:21.445196 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:14:21.454874 13610 logs.go:123] Gathering logs for kube-apiserver [1df6f6450fe8] ...
I0801 12:14:21.454883 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df6f6450fe8"
I0801 12:14:21.523171 13610 logs.go:123] Gathering logs for coredns [c191559d0caf] ...
I0801 12:14:21.523182 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c191559d0caf"
I0801 12:14:21.551578 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:14:21.551589 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:14:21.587687 13610 logs.go:123] Gathering logs for kube-controller-manager [c97af5d8e11d] ...
I0801 12:14:21.587697 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c97af5d8e11d"
I0801 12:14:21.625479 13610 logs.go:123] Gathering logs for container status ...
I0801 12:14:21.625488 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:14:21.667831 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:14:21.667841 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:14:21.735024 13610 logs.go:123] Gathering logs for etcd [e532d8496e0a] ...
I0801 12:14:21.735034 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e532d8496e0a"
I0801 12:14:21.803521 13610 logs.go:123] Gathering logs for kube-scheduler [88210294c08f] ...
I0801 12:14:21.803532 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88210294c08f"
I0801 12:14:21.832030 13610 logs.go:123] Gathering logs for storage-provisioner [d34ef40886ea] ...
I0801 12:14:21.832040 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34ef40886ea"
I0801 12:14:21.858818 13610 logs.go:123] Gathering logs for etcd [490f7678f689] ...
I0801 12:14:21.858831 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 490f7678f689"
I0801 12:14:21.932954 13610 logs.go:123] Gathering logs for storage-provisioner [7a0b604c74ed] ...
I0801 12:14:21.932964 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0b604c74ed"
I0801 12:14:24.462565 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:14:29.463096 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:14:29.877853 13610 kubeadm.go:630] restartCluster took 4m16.988595905s
W0801 12:14:29.878033 13610 out.go:239] ๐Ÿคฆ Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
I0801 12:14:29.878101 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I0801 12:14:50.012161 13610 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (20.134046447s)
I0801 12:14:50.012208 13610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0801 12:14:50.021936 13610 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0801 12:14:50.030232 13610 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0801 12:14:50.030273 13610 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0801 12:14:50.037499 13610 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0801 12:14:50.037516 13610 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0801 12:14:50.299520 13610 out.go:204] โ–ช Generating certificates and keys ...
I0801 12:14:51.135952 13610 out.go:204] โ–ช Booting up control plane ...
I0801 12:15:12.428381 13610 out.go:204] โ–ช Configuring RBAC rules ...
I0801 12:15:14.468244 13610 cni.go:95] Creating CNI manager for ""
I0801 12:15:14.468263 13610 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0801 12:15:14.468305 13610 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0801 12:15:14.468396 13610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0801 12:15:14.468397 13610 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=f4b412861bb746be73053c9f6d2895f12cf78565 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2022_08_01T12_15_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0801 12:15:14.587273 13610 kubeadm.go:1045] duration metric: took 118.959994ms to wait for elevateKubeSystemPrivileges.
I0801 12:15:14.587302 13610 ops.go:34] apiserver oom_adj: -16
I0801 12:15:14.587309 13610 kubeadm.go:397] StartCluster complete in 5m1.73021709s
I0801 12:15:14.587328 13610 settings.go:142] acquiring lock: {Name:mka7107cbdd64742d092b3f0d11a16c8e6dba250 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0801 12:15:14.587424 13610 settings.go:150] Updating kubeconfig: /home/dphy/.kube/config
I0801 12:15:14.588001 13610 lock.go:35] WriteFile acquiring /home/dphy/.kube/config: {Name:mkd42ad56cd32d5358b335a4d17bffb594392700 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
W0801 12:15:44.589102 13610 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0801 12:16:15.091506 13610 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0801 12:16:45.591299 13610 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0801 12:17:16.091053 13610 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0801 12:17:16.590313 13610 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: connect: network is unreachable
W0801 12:17:17.089873 13610 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: connect: network is unreachable
W0801 12:17:17.589929 13610 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: connect: network is unreachable
W0801 12:17:48.091225 13610 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0801 12:18:18.092120 13610 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
I0801 12:18:18.092157 13610 kapi.go:241] timed out trying to rescale deployment "coredns" in namespace "kube-system" and context "minikube" to 1: timed out waiting for the condition
E0801 12:18:18.092168 13610 start.go:264] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: timed out waiting for the condition
I0801 12:18:18.092257 13610 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0801 12:18:18.153653 13610 out.go:177] 🔎 Verifying Kubernetes components...
I0801 12:18:18.092347 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0801 12:18:18.092421 13610 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0801 12:18:18.092767 13610 config.go:178] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
I0801 12:18:18.153859 13610 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I0801 12:18:18.153908 13610 addons.go:65] Setting default-storageclass=true in profile "minikube"
I0801 12:18:18.195347 13610 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0801 12:18:18.195361 13610 addons.go:153] Setting addon storage-provisioner=true in "minikube"
W0801 12:18:18.195370 13610 addons.go:162] addon storage-provisioner should already be in state true
I0801 12:18:18.195395 13610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0801 12:18:18.195425 13610 host.go:66] Checking if "minikube" exists ...
I0801 12:18:18.195666 13610 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0801 12:18:18.195805 13610 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0801 12:18:18.263959 13610 api_server.go:51] waiting for apiserver process to appear ...
I0801 12:18:18.264006 13610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0801 12:18:18.264038 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0801 12:18:18.303849 13610 out.go:177]   ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0801 12:18:18.282699 13610 api_server.go:71] duration metric: took 190.40743ms to wait for apiserver process to appear ...
I0801 12:18:18.345551 13610 api_server.go:87] waiting for apiserver healthz status ...
I0801 12:18:18.345608 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:18:18.345692 13610 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0801 12:18:18.345699 13610 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0801 12:18:18.345760 13610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0801 12:18:18.406376 13610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37731 SSHKeyPath:/home/dphy/.minikube/machines/minikube/id_rsa Username:docker}
I0801 12:18:18.503160 13610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0801 12:18:19.102949 13610 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
I0801 12:18:19.524623 13610 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.021443884s)
I0801 12:18:23.346154 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:18:23.846965 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:18:28.848093 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:18:29.346702 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:18:34.347069 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:18:34.347102 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:18:39.347788 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:18:39.846421 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:18:44.846763 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:18:45.346376 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
W0801 12:18:48.274067 13610 out.go:239] ❗ Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8443: i/o timeout]
I0801 12:18:48.336809 13610 out.go:177] 🌟 Enabled addons: storage-provisioner
I0801 12:18:48.395278 13610 addons.go:414] enableAddons completed in 30.302884178s
I0801 12:18:50.346994 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:18:50.847004 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:18:55.848135 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:18:56.346761 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:19:01.347923 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:19:01.846541 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:19:06.846967 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:19:06.847010 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:19:11.847180 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:19:12.346921 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:19:17.348201 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:19:17.846573 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:19:22.847872 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:19:23.346761 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:19:23.402836 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:19:23.402879 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:19:23.432655 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:19:23.432712 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:19:23.465592 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:19:23.465671 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:19:23.495222 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:19:23.495276 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:19:23.523053 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:19:23.523113 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:19:23.554007 13610 logs.go:274] 0 containers: []
W0801 12:19:23.554027 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:19:23.554100 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:19:23.582591 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:19:23.582639 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:19:23.612424 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:19:23.612442 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:19:23.612448 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:19:23.648687 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:19:23.648696 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:19:23.749379 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:19:23.749392 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:19:23.781436 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:19:23.781448 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:19:23.811426 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:19:23.811438 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:19:23.843087 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:19:23.843097 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:19:23.884136 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:19:23.884149 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:19:23.958753 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:19:23.958765 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:19:24.041826 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:19:24.041836 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:19:24.084706 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:19:24.084721 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:19:24.116429 13610 logs.go:123] Gathering logs for container status ...
I0801 12:19:24.116441 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:19:24.144294 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:19:24.144305 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:19:24.159140 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:19:24.159153 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:19:26.706158 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:19:31.707153 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:19:31.846501 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:19:31.899660 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:19:31.899727 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:19:31.930371 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:19:31.930421 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:19:31.959798 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:19:31.959849 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:19:31.989972 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:19:31.990020 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:19:32.018932 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:19:32.018976 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:19:32.046444 13610 logs.go:274] 0 containers: []
W0801 12:19:32.046452 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:19:32.046517 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:19:32.082213 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:19:32.082294 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:19:32.114080 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:19:32.114095 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:19:32.114103 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:19:32.147675 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:19:32.147685 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:19:32.191063 13610 logs.go:123] Gathering logs for container status ...
I0801 12:19:32.191075 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:19:32.219419 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:19:32.219432 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:19:32.293147 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:19:32.293158 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:19:32.401867 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:19:32.401883 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:19:32.433569 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:19:32.433585 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:19:32.469724 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:19:32.469738 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:19:32.508136 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:19:32.508155 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:19:32.553480 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:19:32.553497 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:19:32.573316 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:19:32.573329 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:19:32.646342 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:19:32.646353 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:19:32.683370 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:19:32.683389 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:19:35.233538 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:19:40.234606 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:19:40.347073 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:19:40.404588 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:19:40.404637 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:19:40.436221 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:19:40.436278 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:19:40.464335 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:19:40.464391 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:19:40.492373 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:19:40.492421 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:19:40.521090 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:19:40.521144 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:19:40.549237 13610 logs.go:274] 0 containers: []
W0801 12:19:40.549262 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:19:40.549331 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:19:40.578423 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:19:40.578502 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:19:40.611523 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:19:40.611541 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:19:40.611550 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:19:40.728964 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:19:40.750816 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:19:40.788209 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:19:40.788219 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:19:40.818601 13610 logs.go:123] Gathering logs for container status ...
I0801 12:19:40.818614 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:19:40.844894 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:19:40.844905 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:19:40.915170 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:19:40.915182 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:19:40.985880 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:19:40.985895 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:19:41.022616 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:19:41.022627 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:19:41.064290 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:19:41.064301 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:19:41.094830 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:19:41.094843 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:19:41.124907 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:19:41.124917 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:19:41.164586 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:19:41.164597 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:19:41.206184 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:19:41.206198 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:19:43.716610 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:19:48.717719 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:19:48.847311 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:19:48.908008 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:19:48.908064 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:19:48.934836 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:19:48.934882 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:19:48.961717 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:19:48.961760 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:19:48.987184 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:19:48.987226 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:19:49.013557 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:19:49.013612 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:19:49.038946 13610 logs.go:274] 0 containers: []
W0801 12:19:49.038955 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:19:49.038989 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:19:49.065238 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:19:49.065282 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:19:49.091249 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:19:49.091262 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:19:49.091268 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:19:49.100173 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:19:49.100183 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:19:49.135681 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:19:49.135699 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:19:49.164793 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:19:49.164806 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:19:49.193536 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:19:49.193546 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:19:49.220773 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:19:49.220782 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:19:49.287140 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:19:49.287151 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:19:49.383043 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:19:49.383054 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:19:49.410989 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:19:49.411003 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:19:49.450740 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:19:49.450750 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:19:49.487904 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:19:49.487914 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:19:49.527469 13610 logs.go:123] Gathering logs for container status ...
I0801 12:19:49.527479 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:19:49.551934 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:19:49.551944 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:19:52.114453 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:19:57.114774 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:19:57.347269 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:19:57.404232 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:19:57.404273 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:19:57.434831 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:19:57.434879 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:19:57.470873 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:19:57.470915 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:19:57.504971 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:19:57.505011 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:19:57.536243 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:19:57.536298 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:19:57.576115 13610 logs.go:274] 0 containers: []
W0801 12:19:57.576124 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:19:57.576182 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:19:57.608539 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:19:57.608593 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:19:57.637614 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:19:57.637627 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:19:57.637632 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:19:57.669306 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:19:57.669354 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:19:57.700413 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:19:57.700440 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:19:57.742217 13610 logs.go:123] Gathering logs for container status ...
I0801 12:19:57.742227 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:19:57.772738 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:19:57.772747 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:19:57.844291 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:19:57.844306 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:19:57.855626 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:19:57.855638 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:19:57.924814 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:19:57.924823   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:19:57.969004   13610 logs.go:123] Gathering logs for Docker ...
I0801 12:19:57.969016   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:19:58.011491   13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:19:58.011502   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:19:58.111725   13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:19:58.111739   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:19:58.142584   13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:19:58.142595   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:19:58.183598   13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:19:58.183612   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:20:00.715435   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:20:05.729123   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:20:05.846468   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:20:05.900439   13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:20:05.900505   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:20:05.929756   13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:20:05.929812   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:20:05.973694   13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:20:05.973735   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:20:06.003028   13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:20:06.003128   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:20:06.031020   13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:20:06.031087   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:20:06.059556   13610 logs.go:274] 0 containers: []
W0801 12:20:06.059571   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:20:06.059629   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:20:06.089587   13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:20:06.089667   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:20:06.117944   13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:20:06.117958   13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:20:06.117964   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:20:06.154335   13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:20:06.154351   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:20:06.250821   13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:20:06.250835   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:20:06.284388   13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:20:06.284403   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:20:06.330782   13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:20:06.330793   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:20:06.405909   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:20:06.405921   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:20:06.621497   13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:20:06.621519   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:20:06.687203   13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:20:06.687214   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:20:06.723793   13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:20:06.723817   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:20:06.755944   13610 logs.go:123] Gathering logs for Docker ...
I0801 12:20:06.755954   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:20:06.796563   13610 logs.go:123] Gathering logs for container status ...
I0801 12:20:06.796575   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:20:06.825978   13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:20:06.825988   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:20:06.837283   13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:20:06.837298   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:20:09.367521   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:20:14.368320   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:20:14.847209   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:20:14.903181   13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:20:14.903228   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:20:14.930959   13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:20:14.931038   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:20:14.959293   13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:20:14.959344   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:20:14.988395   13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:20:14.988438   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:20:15.017416   13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:20:15.017506   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:20:15.044674   13610 logs.go:274] 0 containers: []
W0801 12:20:15.044690   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:20:15.044810   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:20:15.074202   13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:20:15.074249   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:20:15.106322   13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:20:15.106341   13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:20:15.106347   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:20:15.175088   13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:20:15.175100   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:20:15.186644   13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:20:15.186657   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:20:15.286079   13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:20:15.286095   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:20:15.317065   13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:20:15.317083   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:20:15.361410   13610 logs.go:123] Gathering logs for Docker ...
I0801 12:20:15.361420   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:20:15.410219   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:20:15.410235   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:20:15.489940   13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:20:15.489949   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:20:15.526553   13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:20:15.526562   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:20:15.557524   13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:20:15.557537   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:20:15.600150   13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:20:15.600161   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:20:15.631443   13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:20:15.631455   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:20:15.662532   13610 logs.go:123] Gathering logs for container status ...
I0801 12:20:15.662541   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:20:18.189665   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:20:23.190972   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:20:23.346388   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:20:23.404770   13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:20:23.404814   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:20:23.435446   13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:20:23.435496   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:20:23.467676   13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:20:23.467736   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:20:23.496316   13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:20:23.496359   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:20:23.527037   13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:20:23.527082   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:20:23.556622   13610 logs.go:274] 0 containers: []
W0801 12:20:23.556631   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:20:23.556668   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:20:23.586391   13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:20:23.586445   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:20:23.614983   13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:20:23.614999   13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:20:23.615005   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:20:23.655501   13610 logs.go:123] Gathering logs for Docker ...
I0801 12:20:23.655512   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:20:23.695488   13610 logs.go:123] Gathering logs for container status ...
I0801 12:20:23.695500   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:20:23.724608   13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:20:23.724618   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:20:23.763923   13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:20:23.763933   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:20:23.794943   13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:20:23.794970   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:20:23.825516   13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:20:23.825527   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:20:23.857315   13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:20:23.857325   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:20:23.901719   13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:20:23.901734   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:20:23.934463   13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:20:23.934473   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:20:24.007354   13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:20:24.007366   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:20:24.017810   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:20:24.017823   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:20:24.088318   13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:20:24.088331   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:20:26.698254   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:20:31.699202   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:20:31.846500   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:20:31.906777   13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:20:31.906831   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:20:31.942808   13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:20:31.942876   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:20:31.978169   13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:20:31.978209   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:20:32.009745   13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:20:32.009822   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:20:32.039776   13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:20:32.039822   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:20:32.069587   13610 logs.go:274] 0 containers: []
W0801 12:20:32.069598   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:20:32.069641   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:20:32.102687   13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:20:32.102743   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:20:32.130958   13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:20:32.130976   13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:20:32.130984   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:20:32.174702   13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:20:32.174713   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:20:32.217317   13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:20:32.217329   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:20:32.290686   13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:20:32.290699   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:20:32.300819   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:20:32.300846   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:20:32.370392   13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:20:32.370403   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:20:32.409466   13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:20:32.409477   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:20:32.447649   13610 logs.go:123] Gathering logs for container status ...
I0801 12:20:32.447664   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:20:32.493962   13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:20:32.493980   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:20:32.607834   13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:20:32.607845   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:20:32.638848   13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:20:32.638858   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:20:32.671244   13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:20:32.671254   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:20:32.702089   13610 logs.go:123] Gathering logs for Docker ...
I0801 12:20:32.702102   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:20:35.244303   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:20:40.244650   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:20:40.346962   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:20:40.385950   13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:20:40.386006   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:20:40.419609   13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:20:40.419660   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:20:40.451568   13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:20:40.451618   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:20:40.481473   13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:20:40.481514   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:20:40.512112   13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:20:40.512163   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:20:40.541740   13610 logs.go:274] 0 containers: []
W0801 12:20:40.541763   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:20:40.541798   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:20:40.571928   13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:20:40.571977   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:20:40.602544   13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:20:40.602566   13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:20:40.602574   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:20:40.675336   13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:20:40.675351   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:20:40.687840   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:20:40.828700   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:20:40.938556   13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:20:40.938567   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:20:40.976606   13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:20:40.976617   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:20:41.009710   13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:20:41.009722   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:20:41.042836   13610 logs.go:123] Gathering logs for container status ...
I0801 12:20:41.042846   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:20:41.078992   13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:20:41.079003   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:20:41.181440   13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:20:41.181451   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:20:41.212954   13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:20:41.212967   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:20:41.259470   13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:20:41.259481   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:20:41.292721   13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:20:41.292735   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:20:41.333341   13610 logs.go:123] Gathering logs for Docker ...
I0801 12:20:41.333351   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:20:43.883297   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:20:48.884229   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:20:49.347011   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:20:49.403781   13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:20:49.403829   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:20:49.431396   13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:20:49.431451   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:20:49.462651   13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:20:49.462709   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:20:49.490126   13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:20:49.490170   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:20:49.517342   13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:20:49.517405   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:20:49.543991   13610 logs.go:274] 0 containers: []
W0801 12:20:49.544000   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:20:49.544035   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:20:49.573608   13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:20:49.573650   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:20:49.603668   13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:20:49.603686   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:20:49.603694   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:20:49.670107   13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:20:49.670119   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:20:49.706452   13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:20:49.706463   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:20:49.807439   13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:20:49.807455   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:20:49.837566   13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:20:49.837579   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:20:49.867239   13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:20:49.867253   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:20:49.896059   13610 logs.go:123] Gathering logs for Docker ...
I0801 12:20:49.896069   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:20:49.937837   13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:20:49.937849   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:20:50.009082   13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:20:50.009093   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:20:50.051874   13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:20:50.051887   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:20:50.084313   13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:20:50.084324   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:20:50.126016   13610 logs.go:123] Gathering logs for container status ...
I0801 12:20:50.126029   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:20:50.152099   13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:20:50.152109   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:20:52.663202   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:20:57.664211   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:20:57.846730   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:20:57.906040   13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:20:57.906092   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:20:57.935415   13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:20:57.935476   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:20:57.963485   13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:20:57.963527   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:20:57.992373   13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:20:57.992421   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:20:58.022081   13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:20:58.022193   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:20:58.051157   13610 logs.go:274] 0 containers: []
W0801 12:20:58.051165   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:20:58.051204   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:20:58.079687   13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:20:58.079737   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:20:58.109138   13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:20:58.109156   13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:20:58.109163   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:20:58.206630   13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:20:58.206644   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:20:58.246698   13610 logs.go:123] Gathering logs for Docker ...
I0801 12:20:58.246707   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:20:58.287081   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:20:58.287092   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:20:58.356056   13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:20:58.356065   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:20:58.365449   13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:20:58.365465   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:20:58.406210   13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:20:58.406220   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:20:58.444090   13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:20:58.444104   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:20:58.476555   13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:20:58.476565   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:20:58.517885   13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:20:58.517901   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:20:58.548169   13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:20:58.548179   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:20:58.578930   13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:20:58.578940   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:20:58.648651   13610 logs.go:123] Gathering logs for container status ...
I0801 12:20:58.648664   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:21:01.174894   13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:21:06.175450   13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:21:06.346950   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:21:06.401936   13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:21:06.401990   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:21:06.429771   13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:21:06.429815   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:21:06.457834   13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:21:06.457886   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:21:06.487002   13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:21:06.487071   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:21:06.514538   13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:21:06.514614   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:21:06.542599   13610 logs.go:274] 0 containers: []
W0801 12:21:06.542613   13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:21:06.542685   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:21:06.572092   13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:21:06.572181   13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:21:06.613071   13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:21:06.613085   13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:21:06.613091   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:21:06.659585   13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:21:06.659596   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:21:06.731393   13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:21:06.731405   13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:21:06.803393   13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:21:06.803406   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:21:06.839712   13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:21:06.839723   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:21:06.937149   13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:21:06.937165   13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:21:06.968402   13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:21:06.968419 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:21:07.011839 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:21:07.011852 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:21:07.048033 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:21:07.048044 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:21:07.089274 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:21:07.089285 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:21:07.099321 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:21:07.099344 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:21:07.128381 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:21:07.128394 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:21:07.162036 13610 logs.go:123] Gathering logs for container status ...
I0801 12:21:07.162046 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:21:09.698946 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:21:14.700174 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:21:14.846680 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:21:14.904322 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:21:14.904368 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:21:14.931975 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:21:14.932037 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:21:14.959641 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:21:14.959693 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:21:14.988744 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:21:14.988797 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:21:15.017776 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:21:15.017830 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:21:15.045228 13610 logs.go:274] 0 containers: []
W0801 12:21:15.045238 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:21:15.045277 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:21:15.075300 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:21:15.075350 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:21:15.104363 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:21:15.104378 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:21:15.104384 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:21:15.147744 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:21:15.147755 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:21:15.222352 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:21:15.222364 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:21:15.231876 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:21:15.231889 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:21:15.274822 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:21:15.274833 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:21:15.311789 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:21:15.311800 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:21:15.340854 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:21:15.340864 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:21:15.385092 13610 logs.go:123] Gathering logs for container status ...
I0801 12:21:15.385105 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:21:15.424316 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:21:15.424328 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:21:15.497057 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:21:15.497067 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:21:15.600600 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:21:15.600613 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:21:15.631424 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:21:15.631441 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:21:15.672792 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:21:15.672804 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:21:18.205167 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:21:23.205501 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:21:23.346941 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:21:23.404561 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:21:23.404610 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:21:23.433546 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:21:23.433593 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:21:23.462830 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:21:23.462876 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:21:23.491179 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:21:23.491265 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:21:23.520151 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:21:23.520211 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:21:23.549690 13610 logs.go:274] 0 containers: []
W0801 12:21:23.549702 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:21:23.549749 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:21:23.578065 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:21:23.578107 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:21:23.606383 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:21:23.606398 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:21:23.606404 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:21:23.637248 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:21:23.637260 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:21:23.667557 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:21:23.667569 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:21:23.740785 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:21:23.740798 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:21:23.750729 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:21:23.750743 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:21:23.786055 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:21:23.786066 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:21:23.885305 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:21:23.885321 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:21:23.925762 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:21:23.925773 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:21:23.967974 13610 logs.go:123] Gathering logs for container status ...
I0801 12:21:23.967990 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:21:24.004950 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:21:24.004960 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:21:24.076330 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:21:24.076341 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:21:24.106685 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:21:24.106696 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:21:24.157236 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:21:24.157251 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:21:26.690230 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:21:31.690790 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:21:31.847220 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:21:31.899953 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:21:31.900015 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:21:31.927200 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:21:31.927268 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:21:31.958867 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:21:31.958908 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:21:31.986919 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:21:31.986964 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:21:32.015417 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:21:32.015515 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:21:32.043378 13610 logs.go:274] 0 containers: []
W0801 12:21:32.043387 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:21:32.043423 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:21:32.073475 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:21:32.073526 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:21:32.102890 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:21:32.102907 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:21:32.102913 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:21:32.143169 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:21:32.143180 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:21:32.214688 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:21:32.214703 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:21:32.286010 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:21:32.286021 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:21:32.317422 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:21:32.317436 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:21:32.358799 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:21:32.358811 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:21:32.393436 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:21:32.393446 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:21:32.428239 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:21:32.428249 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:21:32.472734 13610 logs.go:123] Gathering logs for container status ...
I0801 12:21:32.472751 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:21:32.522646 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:21:32.522659 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:21:32.533738 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:21:32.533754 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:21:32.576359 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:21:32.576372 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:21:32.683038 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:21:32.683054 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:21:35.217731 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:21:40.218831 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:21:40.347237 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:21:40.391771 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:21:40.391815 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:21:40.423479 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:21:40.423562 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:21:40.452021 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:21:40.452083 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:21:40.478892 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:21:40.478965 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:21:40.507699 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:21:40.507758 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:21:40.536666 13610 logs.go:274] 0 containers: []
W0801 12:21:40.536678 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:21:40.536719 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:21:40.564288 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:21:40.564342 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:21:40.592209 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:21:40.592227 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:21:40.592236 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:21:40.622089 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:21:40.622100 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:21:40.661604 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:21:40.661615 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:21:40.712746 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:21:40.731947 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:21:40.830012 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:21:40.830027 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:21:40.898033 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:21:40.898045 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:21:40.933515 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:21:40.933528 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:21:41.052722 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:21:41.052739 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:21:41.084076 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:21:41.084092 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:21:41.113966 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:21:41.113977 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:21:41.144099 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:21:41.144108 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:21:41.184611 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:21:41.184625 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:21:41.194869 13610 logs.go:123] Gathering logs for container status ...
I0801 12:21:41.194880 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:21:43.731957 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:21:48.733016 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:21:48.847470 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:21:48.899933 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:21:48.899996 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:21:48.938082 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:21:48.938141 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:21:48.973107 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:21:48.973154 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:21:49.002505 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:21:49.002549 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:21:49.040566 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:21:49.040615 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:21:49.074598 13610 logs.go:274] 0 containers: []
W0801 12:21:49.074606 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:21:49.074640 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:21:49.104288 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:21:49.104341 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:21:49.134881 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:21:49.134898 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:21:49.134906 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:21:49.166194 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:21:49.166208 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:21:49.202156 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:21:49.202167 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:21:49.244132 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:21:49.244143 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:21:49.319873 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:21:49.319885 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:21:49.358556 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:21:49.358568 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:21:49.475384 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:21:49.475395 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:21:49.517894 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:21:49.517905 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:21:49.548282 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:21:49.548310 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:21:49.587544 13610 logs.go:123] Gathering logs for container status ...
I0801 12:21:49.587555 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:21:49.613556 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:21:49.613566 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:21:49.623467 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:21:49.623481 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:21:49.691514 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:21:49.691523 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:21:52.221873 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:21:57.222448 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:21:57.346757 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:21:57.382427 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:21:57.382481 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:21:57.418586 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:21:57.418693 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:21:57.458585 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:21:57.458636 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:21:57.504673 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:21:57.504716 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:21:57.533666 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:21:57.533720 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:21:57.578150 13610 logs.go:274] 0 containers: []
W0801 12:21:57.578158 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:21:57.578191 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:21:57.609920 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:21:57.609965 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:21:57.638185 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:21:57.638199 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:21:57.638206 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:21:57.647588 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:21:57.647600 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:21:57.683013 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:21:57.683026 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:21:57.715476 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:21:57.715498 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:21:57.745173 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:21:57.745183 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:21:57.775710 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:21:57.775719 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:21:57.843920 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:21:57.843933 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:21:57.917238 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:21:57.917253 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:21:58.016354 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:21:58.016367 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:21:58.057389 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:21:58.057401 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:21:58.104457 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:21:58.104485 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:21:58.142526 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:21:58.142536 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:21:58.185179 13610 logs.go:123] Gathering logs for container status ...
I0801 12:21:58.185191 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:22:00.712248 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:22:05.736562 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:22:05.847072 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:22:05.903454 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:22:05.903507 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:22:05.931756 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:22:05.931832 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:22:05.960055 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:22:05.960096 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:22:05.995573 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:22:05.995619 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:22:06.025554 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:22:06.025602 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:22:06.053941 13610 logs.go:274] 0 containers: []
W0801 12:22:06.053950 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:22:06.053986 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:22:06.081812 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:22:06.081895 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:22:06.110785 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:22:06.110803 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:22:06.110809 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:22:06.120481 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:22:06.120495 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:22:06.163516 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:22:06.163528 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:22:06.193666 13610 logs.go:123] Gathering logs for container status ...
I0801 12:22:06.193676 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:22:06.226186 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:22:06.226200 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:22:06.258036 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:22:06.258048 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:22:06.301277 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:22:06.301293 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:22:06.333293 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:22:06.333309 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:22:06.375027 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:22:06.375039 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:22:06.447232 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:22:06.447246 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:22:06.518092 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:22:06.518106 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:22:06.558956 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:22:06.558967 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:22:06.663884 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:22:06.663896 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:22:09.208873 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:22:14.209288 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:22:14.346782 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:22:14.402378 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:22:14.402428 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:22:14.429722 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:22:14.429772 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:22:14.459504 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:22:14.459565 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:22:14.488764 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:22:14.488844 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:22:14.516975 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:22:14.517030 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:22:14.544560 13610 logs.go:274] 0 containers: []
W0801 12:22:14.544573 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:22:14.544608 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:22:14.573727 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:22:14.573791 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:22:14.602122 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:22:14.602143 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:22:14.602150 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:22:14.671675 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:22:14.671687 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:22:14.715558 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:22:14.715574 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:22:14.757130 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:22:14.757141 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:22:14.787675 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:22:14.787685 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:22:14.819413 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:22:14.819423 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:22:14.858653 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:22:14.858664 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:22:14.900355 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:22:14.900368 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:22:14.910354 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:22:14.910366 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:22:14.979138 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:22:14.979147 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:22:15.075458 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:22:15.075469 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:22:15.106126 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:22:15.106137 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:22:15.135102 13610 logs.go:123] Gathering logs for container status ...
I0801 12:22:15.135115 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:22:17.660555 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:22:22.661699 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:22:22.847248 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:22:22.905869 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:22:22.905927 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:22:22.933999 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:22:22.934056 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:22:22.962826 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:22:22.962873 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:22:22.992279 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:22:22.992321 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:22:23.021368 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:22:23.021420 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:22:23.048596 13610 logs.go:274] 0 containers: []
W0801 12:22:23.048610 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:22:23.048655 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:22:23.080116 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:22:23.080169 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:22:23.134091 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:22:23.134106 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:22:23.134114 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:22:23.148693 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:22:23.148706 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:22:23.222893 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:22:23.222903 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:22:23.253167 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:22:23.253181 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:22:23.296432 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:22:23.296442 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:22:23.339493 13610 logs.go:123] Gathering logs for container status ...
I0801 12:22:23.339509 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:22:23.374230 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:22:23.374242 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:22:23.417805 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:22:23.417840 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:22:23.493371 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:22:23.493386 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:22:23.531691 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:22:23.531703 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:22:23.635176 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:22:23.635188 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:22:23.667833 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:22:23.667847 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:22:23.698975 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:22:23.698997 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:22:26.231662 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:22:31.232364 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:22:31.232538 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0801 12:22:31.271611 13610 logs.go:274] 1 containers: [bd2c7a9d3b4b]
I0801 12:22:31.271675 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0801 12:22:31.301204 13610 logs.go:274] 1 containers: [a3071594d55a]
I0801 12:22:31.301255 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0801 12:22:31.330581 13610 logs.go:274] 2 containers: [0a117107f521 3267e18a47cb]
I0801 12:22:31.330645 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0801 12:22:31.361310 13610 logs.go:274] 1 containers: [931bfae7392a]
I0801 12:22:31.361352 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0801 12:22:31.390539 13610 logs.go:274] 1 containers: [ad0da7896d64]
I0801 12:22:31.390628 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0801 12:22:31.418504 13610 logs.go:274] 0 containers: []
W0801 12:22:31.418513 13610 logs.go:276] No container was found matching "kubernetes-dashboard"
I0801 12:22:31.418557 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0801 12:22:31.449139 13610 logs.go:274] 1 containers: [bfb81b8caea5]
I0801 12:22:31.449189 13610 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0801 12:22:31.478358 13610 logs.go:274] 1 containers: [7c534cd0ade0]
I0801 12:22:31.478371 13610 logs.go:123] Gathering logs for coredns [3267e18a47cb] ...
I0801 12:22:31.478376 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3267e18a47cb"
I0801 12:22:31.508154 13610 logs.go:123] Gathering logs for kube-scheduler [931bfae7392a] ...
I0801 12:22:31.508165 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 931bfae7392a"
I0801 12:22:31.550524 13610 logs.go:123] Gathering logs for kube-controller-manager [7c534cd0ade0] ...
I0801 12:22:31.550536 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c534cd0ade0"
I0801 12:22:31.591227 13610 logs.go:123] Gathering logs for kubelet ...
I0801 12:22:31.591239 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0801 12:22:31.675016 13610 logs.go:123] Gathering logs for dmesg ...
I0801 12:22:31.675030 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0801 12:22:31.685581 13610 logs.go:123] Gathering logs for describe nodes ...
I0801 12:22:31.685594 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0801 12:22:31.756736 13610 logs.go:123] Gathering logs for kube-apiserver [bd2c7a9d3b4b] ...
I0801 12:22:31.756747 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd2c7a9d3b4b"
I0801 12:22:31.793521 13610 logs.go:123] Gathering logs for etcd [a3071594d55a] ...
I0801 12:22:31.793531 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3071594d55a"
I0801 12:22:31.891508 13610 logs.go:123] Gathering logs for Docker ...
I0801 12:22:31.891519 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0801 12:22:31.933346 13610 logs.go:123] Gathering logs for coredns [0a117107f521] ...
I0801 12:22:31.933357 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a117107f521"
I0801 12:22:31.963054 13610 logs.go:123] Gathering logs for kube-proxy [ad0da7896d64] ...
I0801 12:22:31.963066 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad0da7896d64"
I0801 12:22:31.995110 13610 logs.go:123] Gathering logs for storage-provisioner [bfb81b8caea5] ...
I0801 12:22:31.995121 13610 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfb81b8caea5"
I0801 12:22:32.025148 13610 logs.go:123] Gathering logs for container status ...
I0801 12:22:32.025157 13610 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0801 12:22:34.552221 13610 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0801 12:22:39.552636 13610 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0801 12:22:39.598352 13610 out.go:177]
W0801 12:22:39.640164 13610 out.go:239] ❌  Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: timed out waiting for the condition
W0801 12:22:39.640209 13610 out.go:239]
W0801 12:22:39.642009 13610 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                         │
│    😿  If the above advice does not help, please let us know:                           │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                         │
│                                                                                         │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.  │
│                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────╯
I0801 12:22:39.715221 13610 out.go:177]

* 
* ==> Docker <==
* -- Logs begin at Mon 2022-08-01 05:26:07 UTC, end at Mon 2022-08-01 06:53:40 UTC. --
Aug 01 06:39:57 minikube dockerd[52551]: time="2022-08-01T06:39:57.376225318Z" level=info msg="ignoring event" container=e532d8496e0ae28fab677f4ed6eb64b90ccf859d04a559b99600c110f78d5476 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:40:02 minikube dockerd[52551]: time="2022-08-01T06:40:02.024359847Z" level=info msg="ignoring event" container=e8792c88b2495d7e86fb377155ff7df2f5e8b9e5c77a212069305f95a8c67152 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:40:02 minikube dockerd[52551]: time="2022-08-01T06:40:02.052019748Z" level=info msg="ignoring event" container=667e920428797ac8f3c94051ccb25cecd09d7a69bab8ad45cfff18bc88dad078 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:40:07 minikube dockerd[52551]: time="2022-08-01T06:40:07.024821556Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=126145f9e140a31d037c4b869242c5a7831c51fbb23b8ea205acfe7e164f7187
Aug 01 06:40:07 minikube dockerd[52551]: time="2022-08-01T06:40:07.031765812Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1df6f6450fe8f890e0ff8a4e5a36c820c0bff006add6fa56fc463183d4f1af92
Aug 01 06:40:07 minikube dockerd[52551]: time="2022-08-01T06:40:07.113938110Z" level=info msg="ignoring event" container=126145f9e140a31d037c4b869242c5a7831c51fbb23b8ea205acfe7e164f7187 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:40:07 minikube dockerd[52551]: time="2022-08-01T06:40:07.143850448Z" level=info msg="ignoring event" container=1df6f6450fe8f890e0ff8a4e5a36c820c0bff006add6fa56fc463183d4f1af92 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:40:08 minikube dockerd[52551]: time="2022-08-01T06:40:08.032976324Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Aug 01 06:40:08 minikube dockerd[52551]: time="2022-08-01T06:40:08.034088082Z" level=info msg="Daemon shutdown complete"
Aug 01 06:40:08 minikube systemd[1]: docker.service: Succeeded.
Aug 01 06:40:08 minikube systemd[1]: Stopped Docker Application Container Engine.
Aug 01 06:40:08 minikube systemd[1]: docker.service: Consumed 37.037s CPU time.
Aug 01 06:40:08 minikube systemd[1]: Starting Docker Application Container Engine...
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.109897277Z" level=info msg="Starting up"
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.112035250Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.112065922Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.112107518Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.112123562Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.114379086Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.114446991Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.114471567Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.114489131Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.211515312Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.361903793Z" level=info msg="Loading containers: start."
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.362813884Z" level=error msg="failed to load container" container=330bb37993a6e825c6c315931a712f1b0401367997f185a3ff729193833c0a23 error="open /var/lib/docker/containers/330bb37993a6e825c6c315931a712f1b0401367997f185a3ff729193833c0a23/config.v2.json: no such file or directory"
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.365491034Z" level=error msg="failed to load container" container=d9b2bb7ee1c27fd742a63c181e6d3bd6ad3b162b9f472db190eab2a01e2d5a33 error="open /var/lib/docker/containers/d9b2bb7ee1c27fd742a63c181e6d3bd6ad3b162b9f472db190eab2a01e2d5a33/config.v2.json: no such file or directory"
Aug 01 06:40:08 minikube dockerd[74992]: time="2022-08-01T06:40:08.365886031Z" level=error msg="failed to load container" container=bb09ece6ba8242304e333a903c9d5877038e4cd099b47f6917cd9efa0433416f error="open /var/lib/docker/containers/bb09ece6ba8242304e333a903c9d5877038e4cd099b47f6917cd9efa0433416f/config.v2.json: no such file or directory"
Aug 01 06:40:09 minikube dockerd[74992]: time="2022-08-01T06:40:09.317115090Z" level=error msg="stream copy error: reading from a closed fifo"
Aug 01 06:40:09 minikube dockerd[74992]: time="2022-08-01T06:40:09.317152482Z" level=error msg="stream copy error: reading from a closed fifo"
Aug 01 06:40:09 minikube dockerd[74992]: time="2022-08-01T06:40:09.325311645Z" level=info msg="ignoring event" container=d34ef40886ea1ee50e13141ae028150b791ede612a3bcb515dd30a8da764fc2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:40:09 minikube dockerd[74992]: time="2022-08-01T06:40:09.843743987Z" level=info msg="ignoring event" container=c72aaf4d122f5ed54cf16b2f52c510f1849f768ac47a526c97700c404f4226f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:40:09 minikube dockerd[74992]: time="2022-08-01T06:40:09.939910369Z" level=info msg="ignoring event" container=a7755f2015c8016bf5e0385d6de5ffea786f7c3ea4bb001b37becafe2c5fc1f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:40:09 minikube dockerd[74992]: time="2022-08-01T06:40:09.939967642Z" level=info msg="ignoring event" container=0886bb3d3899db36712549670de6e4f5ea04e8319be47361c18afe3f6820095b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:40:10 minikube dockerd[74992]: time="2022-08-01T06:40:10.808402548Z" level=info msg="Removing stale sandbox bc6a0e71c0f61406a4d852bc70fc1ecad34f919f489a2cabfb1fc01552366fa6 (a7755f2015c8016bf5e0385d6de5ffea786f7c3ea4bb001b37becafe2c5fc1f3)"
Aug 01 06:40:10 minikube dockerd[74992]: time="2022-08-01T06:40:10.844174443Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 435cdafca526d6f9248443aaae3bb81112dbaf69b1c7e000ee494f1f04adcffc 8900ca8d50977291c4d9cd33ff6ba412634f48d6f678759d8b7a55eb98e9ed76], retrying...."
Aug 01 06:40:11 minikube dockerd[74992]: time="2022-08-01T06:40:11.065413723Z" level=info msg="Removing stale sandbox d9acb71666fc5ebd1176bd295acc8953f76d6c2a2a54d434f3b42c684688cb6c (0886bb3d3899db36712549670de6e4f5ea04e8319be47361c18afe3f6820095b)"
Aug 01 06:40:11 minikube dockerd[74992]: time="2022-08-01T06:40:11.101911752Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 435cdafca526d6f9248443aaae3bb81112dbaf69b1c7e000ee494f1f04adcffc 498334e705d53dc62cb8c6cddcaca9d67d368dc7147fa023ecc93c386d982f54], retrying...."
Aug 01 06:40:11 minikube dockerd[74992]: time="2022-08-01T06:40:11.377101829Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 01 06:40:11 minikube dockerd[74992]: time="2022-08-01T06:40:11.697623772Z" level=info msg="Loading containers: done."
Aug 01 06:40:11 minikube dockerd[74992]: time="2022-08-01T06:40:11.826908418Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
Aug 01 06:40:11 minikube dockerd[74992]: time="2022-08-01T06:40:11.827078483Z" level=info msg="Daemon has completed initialization"
Aug 01 06:40:11 minikube systemd[1]: Started Docker Application Container Engine.
Aug 01 06:40:11 minikube dockerd[74992]: time="2022-08-01T06:40:11.994855541Z" level=info msg="API listen on [::]:2376"
Aug 01 06:40:12 minikube dockerd[74992]: time="2022-08-01T06:40:12.005674985Z" level=info msg="API listen on /var/run/docker.sock"
Aug 01 06:44:30 minikube dockerd[74992]: time="2022-08-01T06:44:30.645502361Z" level=info msg="ignoring event" container=f605ed309a4712ec333424fa5eeccd9081547fad0ae545617132935357de864f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:31 minikube dockerd[74992]: time="2022-08-01T06:44:31.579577239Z" level=info msg="ignoring event" container=7a0b604c74edbdc7f0ceacea5d4b64f83233c509f2204f9c52f5da4c15e7e63c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:32 minikube dockerd[74992]: time="2022-08-01T06:44:32.475539501Z" level=info msg="ignoring event" container=5a20fc3083dc987962d335e91d00a80d1b5830360bacead721147b5b2f61602a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:33 minikube dockerd[74992]: time="2022-08-01T06:44:33.554688254Z" level=info msg="ignoring event" container=795b2fd54b9ab3e5d20f4b0229807e2b4d344f4bee51e29e1dc0009d529eef67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:34 minikube dockerd[74992]: time="2022-08-01T06:44:34.537837954Z" level=info msg="ignoring event" container=8361f37877ef3382ac141bf5e6baa585a42e3da96cccdc3cd13925f89d98f705 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:35 minikube dockerd[74992]: time="2022-08-01T06:44:35.608853854Z" level=info msg="ignoring event" container=c191559d0caf0cb10ae0310d1551bfb41d0ffb4850b8ab309b19fd1400adf675 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:36 minikube dockerd[74992]: time="2022-08-01T06:44:36.529741687Z" level=info msg="ignoring event" container=4dd267834632b48aedba038a6c67b62d701bb9120cc6c5fe7a489879b1ed84fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:37 minikube dockerd[74992]: time="2022-08-01T06:44:37.765021933Z" level=info msg="ignoring event" container=1e90427114951fc863553e1cbf5cdb6d162f822848d4b50e249317ef8d5f9697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:38 minikube dockerd[74992]: time="2022-08-01T06:44:38.775687692Z" level=info msg="ignoring event" container=bd1cd18cba79e995ff920f2d338c263ddd25f872a9f3a8bf0324999035a713d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:39 minikube dockerd[74992]: time="2022-08-01T06:44:39.768117633Z" level=info msg="ignoring event" container=6fb747a9eb01274dde2df59520155642a2c5db986f5c0e7dd280a16f517a7a92 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:40 minikube dockerd[74992]: time="2022-08-01T06:44:40.696813117Z" level=info msg="ignoring event" container=19d7e089742832c9780208fd6033a173d45e48914adffe8acfebea353469c9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:41 minikube dockerd[74992]: time="2022-08-01T06:44:41.821616952Z" level=info msg="ignoring event" container=6b0c7e1f887448d740265c2475eb80f019396b4944462987b164f4d3c3775896 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:43 minikube dockerd[74992]: time="2022-08-01T06:44:43.018262299Z" level=info msg="ignoring event" container=8a07658ddcc8338693738430898a90b5f0464f31d2b2a990fb3175c507842b34 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:44 minikube dockerd[74992]: time="2022-08-01T06:44:44.383303582Z" level=info msg="ignoring event" container=490f7678f689b7c9c5be68eacf34541889d3b737e5d5f99fa211a0242c098355 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:45 minikube dockerd[74992]: time="2022-08-01T06:44:45.331289111Z" level=info msg="ignoring event" container=24bc3ee4e61a7af8141d59c004f77aad3ab784db202731a83d5064c3f7830c3f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 01 06:44:46 minikube dockerd[74992]: time="2022-08-01T06:44:46.333836678Z" level=info msg="ignoring event" container=88210294c08fad47123cebff8354ed6d397d843222346b876799063482d51531 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

* 
* ==> container status <==
* CONTAINER           IMAGE           CREATED         STATE     NAME                      ATTEMPT   POD ID
bfb81b8caea5b       6e38f40d628db   5 minutes ago   Running   storage-provisioner       0         ff4b8dd52f2a0
0a117107f521c       a4ca41631cc7a   8 minutes ago   Running   coredns                   0         eced8f0e4c81e
3267e18a47cb1       a4ca41631cc7a   8 minutes ago   Running   coredns                   0         4741b3bf10b6f
ad0da7896d649       beb86f5d8e6cd   8 minutes ago   Running   kube-proxy                0         7d4fc74b5ebad
7c534cd0ade0b       b4ea7e648530d   8 minutes ago   Running   kube-controller-manager   0         19b443d0c3eb7
931bfae7392a5       18688a72645c5   8 minutes ago   Running   kube-scheduler            0         4f8f4b8fe3def
a3071594d55a1       aebe758cef4cd   8 minutes ago   Running   etcd                      0         7f827deca4242
bd2c7a9d3b4ba       e9f4b425f9192   8 minutes ago   Running   kube-apiserver            0         c088c5a3b7c3c

* 
* ==> coredns [0a117107f521] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
[INFO] Reloading complete

* 
* ==> coredns [3267e18a47cb] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
[INFO] Reloading complete

* 
* ==> describe nodes <==
* Name:               minikube
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=f4b412861bb746be73053c9f6d2895f12cf78565
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_08_01T12_15_14_0700
                    minikube.k8s.io/version=v1.26.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 01 Aug 2022 06:45:04 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Mon, 01 Aug 2022 06:53:35 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 01 Aug 2022 06:50:31 +0000   Mon, 01 Aug 2022 06:45:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 01 Aug 2022 06:50:31 +0000   Mon, 01 Aug 2022 06:45:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 01 Aug 2022 06:50:31 +0000   Mon, 01 Aug 2022 06:45:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 01 Aug 2022 06:50:31 +0000   Mon, 01 Aug 2022 06:45:24 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                4
  ephemeral-storage:  65792556Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3899548Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  65792556Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3899548Ki
  pods:               110
System Info:
  Machine ID:                 d8902d1345bb469697278da23257a8d2
  System UUID:                d8902d1345bb469697278da23257a8d2
  Boot ID:                    61eb7b8e-5f2c-4a15-a34c-51dbb8bc8a25
  Kernel Version:             5.10.104-linuxkit
  OS Image:                   Ubuntu 20.04.4 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.17
  Kubelet Version:            v1.24.1
  Kube-Proxy Version:         v1.24.1
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace    Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                              ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-6d4b75cb6d-489z5          100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m15s
  kube-system  coredns-6d4b75cb6d-w8nq4          100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m15s
  kube-system  etcd-minikube                     100m (2%)     0 (0%)      100Mi (2%)       0 (0%)         8m29s
  kube-system  kube-apiserver-minikube           250m (6%)     0 (0%)      0 (0%)           0 (0%)         8m34s
  kube-system  kube-controller-manager-minikube  200m (5%)     0 (0%)      0 (0%)           0 (0%)         8m34s
  kube-system  kube-proxy-rhrq7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
  kube-system  kube-scheduler-minikube           100m (2%)     0 (0%)      0 (0%)           0 (0%)         8m29s
  kube-system  storage-provisioner               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (21%)  0 (0%)
  memory             240Mi (6%)  340Mi (8%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age    From             Message
  ----    ------                   ----   ----             -------
  Normal  Starting                 8m6s   kube-proxy
  Normal  NodeAllocatableEnforced  8m28s  kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  8m28s  kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m28s  kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m28s  kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 8m28s  kubelet          Starting kubelet.
  Normal  NodeReady                8m17s  kubelet          Node minikube status is now: NodeReady
  Normal  RegisteredNode           8m16s  node-controller  Node minikube event: Registered Node minikube in Controller

* 
* ==> dmesg <==
* [Aug 1 05:16] #2
[  +0.002962] #3
[  +1.621963] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[  +0.008269] the cryptoloop driver has been deprecated and will be removed in in Linux 5.16
[Aug 1 05:17] grpcfuse: loading out-of-tree module taints kernel.
* * ==> etcd [a3071594d55a] <== * {"level":"info","ts":"2022-08-01T06:45:37.359Z","caller":"traceutil/trace.go:171","msg":"trace[1943201444] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"148.021156ms","start":"2022-08-01T06:45:37.211Z","end":"2022-08-01T06:45:37.359Z","steps":["trace[1943201444] 'process raft request' (duration: 147.206144ms)"],"step_count":1} {"level":"warn","ts":"2022-08-01T06:46:01.390Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"105.964571ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"} {"level":"info","ts":"2022-08-01T06:46:01.391Z","caller":"traceutil/trace.go:171","msg":"trace[1308361869] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:393; }","duration":"106.143399ms","start":"2022-08-01T06:46:01.284Z","end":"2022-08-01T06:46:01.391Z","steps":["trace[1308361869] 'agreement among raft nodes before linearized reading' (duration: 15.987475ms)","trace[1308361869] 'range keys from in-memory index tree' (duration: 89.918978ms)"],"step_count":2} {"level":"info","ts":"2022-08-01T06:46:11.373Z","caller":"traceutil/trace.go:171","msg":"trace[1917668402] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"147.461908ms","start":"2022-08-01T06:46:11.226Z","end":"2022-08-01T06:46:11.373Z","steps":["trace[1917668402] 'process raft request' (duration: 114.898878ms)","trace[1917668402] 'compare' (duration: 32.309874ms)"],"step_count":2} {"level":"info","ts":"2022-08-01T06:46:31.389Z","caller":"traceutil/trace.go:171","msg":"trace[1226840689] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; 
}","duration":"161.731719ms","start":"2022-08-01T06:46:31.227Z","end":"2022-08-01T06:46:31.389Z","steps":["trace[1226840689] 'process raft request' (duration: 83.050951ms)","trace[1226840689] 'compare' (duration: 78.481472ms)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:46:41.363Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.614807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"} {"level":"info","ts":"2022-08-01T06:46:41.363Z","caller":"traceutil/trace.go:171","msg":"trace[1518898377] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:401; }","duration":"112.955908ms","start":"2022-08-01T06:46:41.250Z","end":"2022-08-01T06:46:41.363Z","steps":["trace[1518898377] 'agreement among raft nodes before linearized reading' (duration: 47.626368ms)","trace[1518898377] 'range keys from in-memory index tree' (duration: 64.922962ms)"],"step_count":2} {"level":"info","ts":"2022-08-01T06:47:01.410Z","caller":"traceutil/trace.go:171","msg":"trace[1361527923] linearizableReadLoop","detail":"{readStateIndex:441; appliedIndex:441; }","duration":"147.243083ms","start":"2022-08-01T06:47:01.263Z","end":"2022-08-01T06:47:01.410Z","steps":["trace[1361527923] 'read index received' (duration: 147.233806ms)","trace[1361527923] 'applied index is now lower than readState.Index' (duration: 7.661ยตs)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:47:01.493Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"230.212238ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"} {"level":"info","ts":"2022-08-01T06:47:01.493Z","caller":"traceutil/trace.go:171","msg":"trace[1529312939] 
range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:405; }","duration":"230.338639ms","start":"2022-08-01T06:47:01.263Z","end":"2022-08-01T06:47:01.493Z","steps":["trace[1529312939] 'agreement among raft nodes before linearized reading' (duration: 147.403895ms)","trace[1529312939] 'range keys from in-memory index tree' (duration: 82.755543ms)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:47:01.493Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"196.927002ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"} {"level":"info","ts":"2022-08-01T06:47:01.493Z","caller":"traceutil/trace.go:171","msg":"trace[128396981] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:405; }","duration":"197.152446ms","start":"2022-08-01T06:47:01.296Z","end":"2022-08-01T06:47:01.493Z","steps":["trace[128396981] 'agreement among raft nodes before linearized reading' (duration: 114.280551ms)","trace[128396981] 'count revisions from in-memory index tree' (duration: 82.621971ms)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:47:41.459Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"173.644362ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"} {"level":"info","ts":"2022-08-01T06:47:41.459Z","caller":"traceutil/trace.go:171","msg":"trace[1888694114] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:413; }","duration":"173.728607ms","start":"2022-08-01T06:47:41.285Z","end":"2022-08-01T06:47:41.459Z","steps":["trace[1888694114] 'agreement among raft 
nodes before linearized reading' (duration: 75.067901ms)","trace[1888694114] 'range keys from in-memory index tree' (duration: 98.531366ms)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:48:19.423Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"130.285689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-08-01T06:48:19.423Z","caller":"traceutil/trace.go:171","msg":"trace[1202340062] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:423; }","duration":"130.450022ms","start":"2022-08-01T06:48:19.292Z","end":"2022-08-01T06:48:19.423Z","steps":["trace[1202340062] 'agreement among raft nodes before linearized reading' (duration: 38.846743ms)","trace[1202340062] 'range keys from in-memory index tree' (duration: 91.403596ms)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:48:19.674Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"141.061428ms","expected-duration":"100ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:16"} {"level":"info","ts":"2022-08-01T06:48:19.674Z","caller":"traceutil/trace.go:171","msg":"trace[214380439] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"146.843853ms","start":"2022-08-01T06:48:19.527Z","end":"2022-08-01T06:48:19.674Z","steps":["trace[214380439] 'compare' (duration: 140.945127ms)"],"step_count":1} {"level":"warn","ts":"2022-08-01T06:48:21.471Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"203.252205ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"} 
{"level":"info","ts":"2022-08-01T06:48:21.471Z","caller":"traceutil/trace.go:171","msg":"trace[1545189762] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:431; }","duration":"203.396093ms","start":"2022-08-01T06:48:21.268Z","end":"2022-08-01T06:48:21.471Z","steps":["trace[1545189762] 'agreement among raft nodes before linearized reading' (duration: 89.802541ms)","trace[1545189762] 'range keys from in-memory index tree' (duration: 113.37192ms)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:48:23.206Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.585451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"} {"level":"info","ts":"2022-08-01T06:48:23.206Z","caller":"traceutil/trace.go:171","msg":"trace[532886183] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:434; }","duration":"111.710507ms","start":"2022-08-01T06:48:23.095Z","end":"2022-08-01T06:48:23.206Z","steps":["trace[532886183] 'agreement among raft nodes before linearized reading' (duration: 72.578736ms)","trace[532886183] 'range keys from in-memory index tree' (duration: 38.962206ms)"],"step_count":2} {"level":"info","ts":"2022-08-01T06:48:23.207Z","caller":"traceutil/trace.go:171","msg":"trace[904927288] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"110.737793ms","start":"2022-08-01T06:48:23.096Z","end":"2022-08-01T06:48:23.207Z","steps":["trace[904927288] 'process raft request' (duration: 71.486601ms)","trace[904927288] 'compare' (duration: 38.27481ms)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:48:53.829Z","caller":"etcdserver/util.go:166","msg":"apply request took too 
long","took":"117.444133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-08-01T06:48:53.829Z","caller":"traceutil/trace.go:171","msg":"trace[1861618542] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:458; }","duration":"117.567797ms","start":"2022-08-01T06:48:53.711Z","end":"2022-08-01T06:48:53.829Z","steps":["trace[1861618542] 'range keys from in-memory index tree' (duration: 117.294397ms)"],"step_count":1} {"level":"info","ts":"2022-08-01T06:48:57.947Z","caller":"traceutil/trace.go:171","msg":"trace[638370654] linearizableReadLoop","detail":"{readStateIndex:520; appliedIndex:520; }","duration":"237.862432ms","start":"2022-08-01T06:48:57.710Z","end":"2022-08-01T06:48:57.947Z","steps":["trace[638370654] 'read index received' (duration: 237.851175ms)","trace[638370654] 'applied index is now lower than readState.Index' (duration: 9.129ยตs)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:48:58.053Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"343.83298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-08-01T06:48:58.054Z","caller":"traceutil/trace.go:171","msg":"trace[660640527] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:460; }","duration":"343.970253ms","start":"2022-08-01T06:48:57.710Z","end":"2022-08-01T06:48:58.054Z","steps":["trace[660640527] 'agreement among raft nodes before linearized reading' (duration: 238.008463ms)","trace[660640527] 'range keys from in-memory index tree' (duration: 105.797309ms)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:48:58.054Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-08-01T06:48:57.710Z","time 
spent":"344.08664ms","remote":"127.0.0.1:56348","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "} {"level":"warn","ts":"2022-08-01T06:48:59.820Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"109.14509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-08-01T06:48:59.820Z","caller":"traceutil/trace.go:171","msg":"trace[2095567749] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:462; }","duration":"109.456209ms","start":"2022-08-01T06:48:59.711Z","end":"2022-08-01T06:48:59.820Z","steps":["trace[2095567749] 'range keys from in-memory index tree' (duration: 106.244495ms)"],"step_count":1} {"level":"warn","ts":"2022-08-01T06:50:14.848Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"137.820785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-08-01T06:50:14.848Z","caller":"traceutil/trace.go:171","msg":"trace[1063568948] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:516; }","duration":"138.071769ms","start":"2022-08-01T06:50:14.710Z","end":"2022-08-01T06:50:14.848Z","steps":["trace[1063568948] 'agreement among raft nodes before linearized reading' (duration: 32.189028ms)","trace[1063568948] 'range keys from in-memory index tree' (duration: 105.597309ms)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:50:20.923Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"122.350542ms","expected-duration":"100ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:16"} 
{"level":"info","ts":"2022-08-01T06:50:20.923Z","caller":"traceutil/trace.go:171","msg":"trace[1116596240] transaction","detail":"{read_only:false; response_revision:520; number_of_response:1; }","duration":"123.399233ms","start":"2022-08-01T06:50:20.800Z","end":"2022-08-01T06:50:20.923Z","steps":["trace[1116596240] 'compare' (duration: 122.193288ms)"],"step_count":1} {"level":"info","ts":"2022-08-01T06:50:21.415Z","caller":"traceutil/trace.go:171","msg":"trace[723538422] transaction","detail":"{read_only:false; response_revision:521; number_of_response:1; }","duration":"178.028429ms","start":"2022-08-01T06:50:21.237Z","end":"2022-08-01T06:50:21.415Z","steps":["trace[723538422] 'process raft request' (duration: 96.460163ms)","trace[723538422] 'compare' (duration: 81.404446ms)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:50:26.923Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"114.370988ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"} {"level":"info","ts":"2022-08-01T06:50:26.923Z","caller":"traceutil/trace.go:171","msg":"trace[1361938942] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:524; }","duration":"114.552489ms","start":"2022-08-01T06:50:26.808Z","end":"2022-08-01T06:50:26.923Z","steps":["trace[1361938942] 'agreement among raft nodes before linearized reading' (duration: 61.995222ms)","trace[1361938942] 'count revisions from in-memory index tree' (duration: 52.337487ms)"],"step_count":2} {"level":"info","ts":"2022-08-01T06:51:27.913Z","caller":"traceutil/trace.go:171","msg":"trace[1755267957] linearizableReadLoop","detail":"{readStateIndex:657; appliedIndex:657; 
}","duration":"356.628968ms","start":"2022-08-01T06:51:27.557Z","end":"2022-08-01T06:51:27.913Z","steps":["trace[1755267957] 'read index received' (duration: 356.61832ms)","trace[1755267957] 'applied index is now lower than readState.Index' (duration: 8.631ยตs)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:51:27.944Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"149.673551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true ","response":"range_response_count:0 size:7"} {"level":"warn","ts":"2022-08-01T06:51:27.944Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"150.05676ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"} {"level":"warn","ts":"2022-08-01T06:51:27.944Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"387.200609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-08-01T06:51:27.944Z","caller":"traceutil/trace.go:171","msg":"trace[2140874813] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:567; }","duration":"150.171868ms","start":"2022-08-01T06:51:27.794Z","end":"2022-08-01T06:51:27.944Z","steps":["trace[2140874813] 'agreement among raft nodes before linearized reading' (duration: 119.79914ms)","trace[2140874813] 'count revisions from in-memory index tree' (duration: 30.240426ms)"],"step_count":2} {"level":"info","ts":"2022-08-01T06:51:27.944Z","caller":"traceutil/trace.go:171","msg":"trace[855198267] 
range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:567; }","duration":"387.287112ms","start":"2022-08-01T06:51:27.557Z","end":"2022-08-01T06:51:27.944Z","steps":["trace[855198267] 'agreement among raft nodes before linearized reading' (duration: 356.867281ms)","trace[855198267] 'count revisions from in-memory index tree' (duration: 30.315862ms)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:51:27.944Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"234.185565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"warn","ts":"2022-08-01T06:51:27.944Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-08-01T06:51:27.557Z","time spent":"387.38938ms","remote":"127.0.0.1:56370","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":29,"request content":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true "} {"level":"info","ts":"2022-08-01T06:51:27.944Z","caller":"traceutil/trace.go:171","msg":"trace[494841382] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; response_count:0; response_revision:567; }","duration":"149.86819ms","start":"2022-08-01T06:51:27.794Z","end":"2022-08-01T06:51:27.944Z","steps":["trace[494841382] 'agreement among raft nodes before linearized reading' (duration: 119.52352ms)","trace[494841382] 'count revisions from in-memory index tree' (duration: 30.102718ms)"],"step_count":2} {"level":"info","ts":"2022-08-01T06:51:27.944Z","caller":"traceutil/trace.go:171","msg":"trace[1623179929] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:567; 
}","duration":"234.246368ms","start":"2022-08-01T06:51:27.710Z","end":"2022-08-01T06:51:27.944Z","steps":["trace[1623179929] 'agreement among raft nodes before linearized reading' (duration: 203.678355ms)","trace[1623179929] 'range keys from in-memory index tree' (duration: 30.445162ms)"],"step_count":2} {"level":"info","ts":"2022-08-01T06:51:31.376Z","caller":"traceutil/trace.go:171","msg":"trace[1903629364] linearizableReadLoop","detail":"{readStateIndex:660; appliedIndex:659; }","duration":"121.117388ms","start":"2022-08-01T06:51:31.254Z","end":"2022-08-01T06:51:31.375Z","steps":["trace[1903629364] 'read index received' (duration: 35.373628ms)","trace[1903629364] 'applied index is now lower than readState.Index' (duration: 85.741967ms)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:51:31.376Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"121.300242ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-08-01T06:51:31.376Z","caller":"traceutil/trace.go:171","msg":"trace[2136708914] transaction","detail":"{read_only:false; response_revision:569; number_of_response:1; }","duration":"134.909137ms","start":"2022-08-01T06:51:31.241Z","end":"2022-08-01T06:51:31.376Z","steps":["trace[2136708914] 'process raft request' (duration: 49.096192ms)","trace[2136708914] 'compare' (duration: 85.45945ms)"],"step_count":2} {"level":"info","ts":"2022-08-01T06:51:31.376Z","caller":"traceutil/trace.go:171","msg":"trace[1210547243] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:569; }","duration":"121.432639ms","start":"2022-08-01T06:51:31.254Z","end":"2022-08-01T06:51:31.376Z","steps":["trace[1210547243] 'agreement among raft nodes before linearized reading' 
(duration: 121.194994ms)"],"step_count":1} {"level":"warn","ts":"2022-08-01T06:51:31.651Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"168.771701ms","expected-duration":"100ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:16"} {"level":"info","ts":"2022-08-01T06:51:31.651Z","caller":"traceutil/trace.go:171","msg":"trace[197524368] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"169.308657ms","start":"2022-08-01T06:51:31.482Z","end":"2022-08-01T06:51:31.651Z","steps":["trace[197524368] 'compare' (duration: 168.597645ms)"],"step_count":1} {"level":"warn","ts":"2022-08-01T06:51:39.951Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"157.564258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"} {"level":"info","ts":"2022-08-01T06:51:39.951Z","caller":"traceutil/trace.go:171","msg":"trace[1309784183] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:575; }","duration":"157.757859ms","start":"2022-08-01T06:51:39.793Z","end":"2022-08-01T06:51:39.951Z","steps":["trace[1309784183] 'count revisions from in-memory index tree' (duration: 157.331109ms)"],"step_count":1} {"level":"info","ts":"2022-08-01T06:52:24.473Z","caller":"traceutil/trace.go:171","msg":"trace[1743809628] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"244.748966ms","start":"2022-08-01T06:52:24.229Z","end":"2022-08-01T06:52:24.473Z","steps":["trace[1743809628] 'process raft request' (duration: 181.316303ms)","trace[1743809628] 'compare' (duration: 63.292701ms)"],"step_count":2} 
{"level":"info","ts":"2022-08-01T06:53:05.840Z","caller":"traceutil/trace.go:171","msg":"trace[1951842986] linearizableReadLoop","detail":"{readStateIndex:744; appliedIndex:744; }","duration":"132.868701ms","start":"2022-08-01T06:53:05.707Z","end":"2022-08-01T06:53:05.840Z","steps":["trace[1951842986] 'read index received' (duration: 132.852735ms)","trace[1951842986] 'applied index is now lower than readState.Index' (duration: 13.58ยตs)"],"step_count":2} {"level":"warn","ts":"2022-08-01T06:53:05.870Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"162.718426ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-08-01T06:53:05.870Z","caller":"traceutil/trace.go:171","msg":"trace[1422601985] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:635; }","duration":"162.891331ms","start":"2022-08-01T06:53:05.707Z","end":"2022-08-01T06:53:05.870Z","steps":["trace[1422601985] 'agreement among raft nodes before linearized reading' (duration: 133.111576ms)","trace[1422601985] 'range keys from in-memory index tree' (duration: 29.513752ms)"],"step_count":2} * * ==> kernel <== * 06:53:42 up 1:36, 0 users, load average: 0.70, 0.81, 0.80 Linux minikube 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.4 LTS" * * ==> kube-apiserver [bd2c7a9d3b4b] <== * I0801 06:45:02.452125 1 controller.go:83] Starting OpenAPI AggregationController I0801 06:45:02.452260 1 controller.go:80] Starting OpenAPI V3 AggregationController I0801 06:45:02.453765 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0801 06:45:02.454007 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0801 06:45:02.454630 1 
customresource_discovery_controller.go:209] Starting DiscoveryController I0801 06:45:02.454830 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0801 06:45:02.454845 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister I0801 06:45:02.476138 1 controller.go:85] Starting OpenAPI controller I0801 06:45:02.476286 1 controller.go:85] Starting OpenAPI V3 controller I0801 06:45:02.476379 1 naming_controller.go:291] Starting NamingConditionController I0801 06:45:02.476461 1 establishing_controller.go:76] Starting EstablishingController I0801 06:45:02.476544 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0801 06:45:02.476606 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0801 06:45:02.476711 1 crd_finalizer.go:266] Starting CRDFinalizer I0801 06:45:02.506884 1 shared_informer.go:262] Caches are synced for node_authorizer I0801 06:45:02.538601 1 cache.go:39] Caches are synced for AvailableConditionController controller I0801 06:45:02.538624 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0801 06:45:02.539001 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller I0801 06:45:02.541444 1 controller.go:611] quota admission added evaluator for: namespaces I0801 06:45:02.551666 1 apf_controller.go:322] Running API Priority and Fairness config worker I0801 06:45:02.551702 1 cache.go:39] Caches are synced for autoregister controller I0801 06:45:02.554995 1 shared_informer.go:262] Caches are synced for crd-autoregister I0801 06:45:03.252187 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). 
I0801 06:45:03.487765 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0801 06:45:03.687211 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0801 06:45:03.687352 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0801 06:45:10.409757 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0801 06:45:10.768847 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0801 06:45:11.370604 1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0801 06:45:11.440453 1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0801 06:45:11.445101 1 controller.go:611] quota admission added evaluator for: endpoints
I0801 06:45:11.459650 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0801 06:45:11.754163 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0801 06:45:13.912361 1 trace.go:205] Trace[1710449075]: "Create" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings,user-agent:kubeadm/v1.24.1 (linux/amd64) kubernetes/3ddd0f4,audit-id:4f150fae-cc8a-4889-9dda-08935d14a89c,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (01-Aug-2022 06:45:13.354) (total time: 558ms):
Trace[1710449075]: ---"Object stored in database" 557ms (06:45:13.912)
Trace[1710449075]: [558.130169ms] [558.130169ms] END
I0801 06:45:13.912465 1 trace.go:205] Trace[2109915174]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.24.1 (linux/amd64) kubernetes/3ddd0f4,audit-id:f4d1334f-b158-4c1c-8130-7304fd90b31a,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (01-Aug-2022 06:45:13.368) (total time: 543ms):
Trace[2109915174]: ---"Object stored in database" 543ms (06:45:13.912)
Trace[2109915174]: [543.534843ms] [543.534843ms] END
I0801 06:45:13.912529 1 trace.go:205] Trace[720357706]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube,user-agent:kubelet/v1.24.1 (linux/amd64) kubernetes/3ddd0f4,audit-id:70dc7195-0cc4-4824-b5c0-3dba9da884ff,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (01-Aug-2022 06:45:13.372) (total time: 540ms):
Trace[720357706]: [540.388031ms] [540.388031ms] END
I0801 06:45:13.913182 1 trace.go:205] Trace[156859911]: "Get" url:/apis/storage.k8s.io/v1/csinodes/minikube,user-agent:kubelet/v1.24.1 (linux/amd64) kubernetes/3ddd0f4,audit-id:0fb65ed5-5d20-4be5-8f97-ec9236e44c3e,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (01-Aug-2022 06:45:13.368) (total time: 544ms):
Trace[156859911]: ---"About to write a response" 544ms (06:45:13.913)
Trace[156859911]: [544.89215ms] [544.89215ms] END
I0801 06:45:13.924404 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0801 06:45:13.946849 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0801 06:45:14.177158 1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0801 06:45:14.208586 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0801 06:45:26.270563 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0801 06:45:26.519801 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0801 06:45:28.104218 1 trace.go:205] Trace[1476790557]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (01-Aug-2022 06:45:27.557) (total time: 546ms):
Trace[1476790557]: ---"Transaction committed" 546ms (06:45:28.104)
Trace[1476790557]: [546.867842ms] [546.867842ms] END
I0801 06:45:28.105570 1 trace.go:205] Trace[789943448]: "Update" url:/apis/discovery.k8s.io/v1/namespaces/kube-system/endpointslices/kube-dns-v5msg,user-agent:kube-controller-manager/v1.24.1 (linux/amd64) kubernetes/3ddd0f4/system:serviceaccount:kube-system:endpointslice-controller,audit-id:f5efce04-1cbb-4289-b23f-2637a5daa48d,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (01-Aug-2022 06:45:27.557) (total time: 548ms):
Trace[789943448]: ---"Object stored in database" 547ms (06:45:28.104)
Trace[789943448]: [548.405171ms] [548.405171ms] END
I0801 06:45:31.995644 1 trace.go:205] Trace[1504337849]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.24.1 (linux/amd64) kubernetes/3ddd0f4,audit-id:460f372c-f4e9-478b-9540-b54bb72907fb,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (01-Aug-2022 06:45:31.215) (total time: 780ms):
Trace[1504337849]: ---"About to write a response" 780ms (06:45:31.995)
Trace[1504337849]: [780.418019ms] [780.418019ms] END
I0801 06:45:35.253214 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
*
* ==> kube-controller-manager [7c534cd0ade0] <==
*
I0801 06:45:25.261667 1 shared_informer.go:255] Waiting for caches to sync for service account
I0801 06:45:25.509285 1 controllermanager.go:593] Started "garbagecollector"
I0801 06:45:25.509393 1 garbagecollector.go:149] Starting garbage collector controller
I0801 06:45:25.509594 1 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0801 06:45:25.509671 1 graph_builder.go:289] GraphBuilder running
I0801 06:45:25.523104 1 shared_informer.go:255] Waiting for caches to sync for resource quota
I0801 06:45:25.533086 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I0801 06:45:25.533108 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I0801 06:45:25.533380 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0801 06:45:25.533989 1 shared_informer.go:262] Caches are synced for namespace
I0801 06:45:25.534057 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
W0801 06:45:25.534635 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0801 06:45:25.536926 1 shared_informer.go:262] Caches are synced for job
I0801 06:45:25.545022 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0801 06:45:25.560155 1 shared_informer.go:262] Caches are synced for ephemeral
I0801 06:45:25.560354 1 shared_informer.go:262] Caches are synced for daemon sets
I0801 06:45:25.562406 1 shared_informer.go:262] Caches are synced for stateful set
I0801 06:45:25.562538 1 shared_informer.go:262] Caches are synced for disruption
I0801 06:45:25.562566 1 disruption.go:371] Sending events to api server.
I0801 06:45:25.562664 1 shared_informer.go:262] Caches are synced for service account
I0801 06:45:25.563958 1 shared_informer.go:262] Caches are synced for PVC protection
I0801 06:45:25.567329 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0801 06:45:25.567493 1 shared_informer.go:262] Caches are synced for expand
I0801 06:45:25.579919 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0801 06:45:25.583450 1 shared_informer.go:262] Caches are synced for PV protection
I0801 06:45:25.586981 1 shared_informer.go:262] Caches are synced for TTL
I0801 06:45:25.588361 1 shared_informer.go:262] Caches are synced for cronjob
I0801 06:45:25.605938 1 shared_informer.go:262] Caches are synced for taint
I0801 06:45:25.606180 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0801 06:45:25.606588 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
I0801 06:45:25.606700 1 event.go:294] "Event occurred" object="minikube" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
W0801 06:45:25.606866 1 node_lifecycle_controller.go:1014] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0801 06:45:25.606904 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0801 06:45:25.607623 1 shared_informer.go:262] Caches are synced for ReplicationController
I0801 06:45:25.608333 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0801 06:45:25.609207 1 shared_informer.go:262] Caches are synced for TTL after finished
I0801 06:45:25.610663 1 shared_informer.go:262] Caches are synced for persistent volume
I0801 06:45:25.610732 1 shared_informer.go:262] Caches are synced for HPA
I0801 06:45:25.611856 1 shared_informer.go:262] Caches are synced for deployment
I0801 06:45:25.612643 1 shared_informer.go:262] Caches are synced for crt configmap
I0801 06:45:25.625618 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0801 06:45:25.625730 1 shared_informer.go:262] Caches are synced for bootstrap_signer
I0801 06:45:25.628314 1 shared_informer.go:262] Caches are synced for node
I0801 06:45:25.628471 1 range_allocator.go:173] Starting range CIDR allocator
I0801 06:45:25.628568 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0801 06:45:25.628379 1 shared_informer.go:262] Caches are synced for GC
I0801 06:45:25.628671 1 shared_informer.go:262] Caches are synced for cidrallocator
I0801 06:45:25.650465 1 shared_informer.go:262] Caches are synced for endpoint
I0801 06:45:25.723666 1 shared_informer.go:262] Caches are synced for resource quota
I0801 06:45:25.734175 1 shared_informer.go:262] Caches are synced for resource quota
I0801 06:45:25.831254 1 range_allocator.go:374] Set node minikube PodCIDR to [10.244.0.0/24]
I0801 06:45:25.843330 1 shared_informer.go:262] Caches are synced for attach detach
I0801 06:45:25.942502 1 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0801 06:45:26.242749 1 shared_informer.go:262] Caches are synced for garbage collector
I0801 06:45:26.306357 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rhrq7"
I0801 06:45:26.310185 1 shared_informer.go:262] Caches are synced for garbage collector
I0801 06:45:26.310214 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0801 06:45:26.556339 1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
I0801 06:45:26.775728 1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-489z5"
I0801 06:45:26.808664 1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-w8nq4"
*
* ==> kube-proxy [ad0da7896d64] <==
*
I0801 06:45:34.204450 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0801 06:45:34.205017 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0801 06:45:34.205587 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0801 06:45:35.239139 1 server_others.go:206] "Using iptables Proxier"
I0801 06:45:35.239308 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0801 06:45:35.239365 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0801 06:45:35.239424 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0801 06:45:35.239550 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0801 06:45:35.239954 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0801 06:45:35.241307 1 server.go:661] "Version info" version="v1.24.1"
I0801 06:45:35.241438 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0801 06:45:35.244009 1 config.go:317] "Starting service config controller"
I0801 06:45:35.244060 1 shared_informer.go:255] Waiting for caches to sync for service config
I0801 06:45:35.244320 1 config.go:226] "Starting endpoint slice config controller"
I0801 06:45:35.244439 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0801 06:45:35.245351 1 config.go:444] "Starting node config controller"
I0801 06:45:35.245379 1 shared_informer.go:255] Waiting for caches to sync for node config
I0801 06:45:35.345103 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0801 06:45:35.345210 1 shared_informer.go:262] Caches are synced for service config
I0801 06:45:35.345439 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [931bfae7392a] <==
*
E0801 06:45:03.404405 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0801 06:45:03.458932 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0801 06:45:03.458969 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0801 06:45:03.470632 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0801 06:45:03.470654 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0801 06:45:03.579512 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0801 06:45:03.579581 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0801 06:45:03.661177 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0801 06:45:03.661290 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0801 06:45:03.671446 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0801 06:45:03.671494 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0801 06:45:03.719355 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0801 06:45:03.719401 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0801 06:45:03.719595 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0801 06:45:03.719646 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0801 06:45:03.736215 1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0801 06:45:03.736261 1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0801 06:45:03.745222 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0801 06:45:03.745268 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0801 06:45:03.761006 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0801 06:45:03.761067 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0801 06:45:03.961844 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0801 06:45:03.961903 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0801 06:45:03.997693 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0801 06:45:03.997744 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0801 06:45:04.052898 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0801 06:45:04.052949 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0801 06:45:04.077770 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0801 06:45:04.077905 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0801 06:45:05.351914 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0801 06:45:05.352117 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0801 06:45:05.366089 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0801 06:45:05.366357 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0801 06:45:05.437528 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0801 06:45:05.437562 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0801 06:45:05.456505 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0801 06:45:05.456545 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0801 06:45:05.804861 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0801 06:45:05.805005 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0801 06:45:05.867820 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0801 06:45:05.867864 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0801 06:45:05.869112 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0801 06:45:05.869240 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0801 06:45:05.958339 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0801 06:45:05.958386 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0801 06:45:05.985691 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0801 06:45:05.985779 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0801 06:45:06.114341 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0801 06:45:06.114639 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0801 06:45:06.222405 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0801 06:45:06.222443 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0801 06:45:06.478338 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0801 06:45:06.478453 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0801 06:45:06.553226 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0801 06:45:06.553276 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0801 06:45:06.809782 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0801 06:45:06.809842 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0801 06:45:06.905379 1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0801 06:45:06.905459 1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0801 06:45:13.192585 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
*
-- Logs begin at Mon 2022-08-01 05:26:07 UTC, end at Mon 2022-08-01 06:53:43 UTC. --
Aug 01 06:45:13 minikube kubelet[86016]: E0801 06:45:13.445067 86016 kubelet.go:1998] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.479003 86016 kubelet_node_status.go:70] "Attempting to register node" node="minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.545838 86016 topology_manager.go:200] "Topology Admit Handler"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.546093 86016 topology_manager.go:200] "Topology Admit Handler"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.546204 86016 topology_manager.go:200] "Topology Admit Handler"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.546275 86016 topology_manager.go:200] "Topology Admit Handler"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573238 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bab0508344d11c6fdb45b1f91c440ff5-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"bab0508344d11c6fdb45b1f91c440ff5\") " pod="kube-system/kube-scheduler-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573337 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6580cebb2d04c6c59385cf58e278b0a6-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"6580cebb2d04c6c59385cf58e278b0a6\") " pod="kube-system/kube-apiserver-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573394 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4f7419eaf4a6f0ee6121d47723a0c8d-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"b4f7419eaf4a6f0ee6121d47723a0c8d\") " pod="kube-system/kube-controller-manager-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573438 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4f7419eaf4a6f0ee6121d47723a0c8d-etc-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"b4f7419eaf4a6f0ee6121d47723a0c8d\") " pod="kube-system/kube-controller-manager-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573480 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4f7419eaf4a6f0ee6121d47723a0c8d-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"b4f7419eaf4a6f0ee6121d47723a0c8d\") " pod="kube-system/kube-controller-manager-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573559 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6580cebb2d04c6c59385cf58e278b0a6-usr-local-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"6580cebb2d04c6c59385cf58e278b0a6\") " pod="kube-system/kube-apiserver-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573617 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4f7419eaf4a6f0ee6121d47723a0c8d-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"b4f7419eaf4a6f0ee6121d47723a0c8d\") " pod="kube-system/kube-controller-manager-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573658 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/906edd533192a4db2396a938662a5271-etcd-data\") pod \"etcd-minikube\" (UID: \"906edd533192a4db2396a938662a5271\") " pod="kube-system/etcd-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573707 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6580cebb2d04c6c59385cf58e278b0a6-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"6580cebb2d04c6c59385cf58e278b0a6\") " pod="kube-system/kube-apiserver-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573746 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6580cebb2d04c6c59385cf58e278b0a6-etc-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"6580cebb2d04c6c59385cf58e278b0a6\") " pod="kube-system/kube-apiserver-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573785 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b4f7419eaf4a6f0ee6121d47723a0c8d-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"b4f7419eaf4a6f0ee6121d47723a0c8d\") " pod="kube-system/kube-controller-manager-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573828 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4f7419eaf4a6f0ee6121d47723a0c8d-usr-local-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"b4f7419eaf4a6f0ee6121d47723a0c8d\") " pod="kube-system/kube-controller-manager-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573873 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/906edd533192a4db2396a938662a5271-etcd-certs\") pod \"etcd-minikube\" (UID: \"906edd533192a4db2396a938662a5271\") " pod="kube-system/etcd-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573918 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6580cebb2d04c6c59385cf58e278b0a6-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"6580cebb2d04c6c59385cf58e278b0a6\") " pod="kube-system/kube-apiserver-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: I0801 06:45:13.573959 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4f7419eaf4a6f0ee6121d47723a0c8d-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"b4f7419eaf4a6f0ee6121d47723a0c8d\") " pod="kube-system/kube-controller-manager-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: E0801 06:45:13.916863 86016 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: E0801 06:45:13.917738 86016 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: E0801 06:45:13.918140 86016 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube"
Aug 01 06:45:13 minikube kubelet[86016]: E0801 06:45:13.918274 86016 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Aug 01 06:45:14 minikube kubelet[86016]: I0801 06:45:14.004969 86016 kubelet_node_status.go:108] "Node was previously registered" node="minikube"
Aug 01 06:45:14 minikube kubelet[86016]: I0801 06:45:14.005272 86016 kubelet_node_status.go:73] "Successfully registered node" node="minikube"
Aug 01 06:45:14 minikube kubelet[86016]: I0801 06:45:14.351589 86016 apiserver.go:52] "Watching apiserver"
Aug 01 06:45:14 minikube kubelet[86016]: I0801 06:45:14.583898 86016 reconciler.go:157] "Reconciler: start to sync state"
Aug 01 06:45:15 minikube kubelet[86016]: E0801 06:45:15.001378 86016 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube"
Aug 01 06:45:15 minikube kubelet[86016]: E0801 06:45:15.200137 86016 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Aug 01 06:45:15 minikube kubelet[86016]: E0801 06:45:15.409788 86016 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube"
Aug 01 06:45:25 minikube kubelet[86016]: I0801 06:45:25.848197 86016 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Aug 01 06:45:25 minikube kubelet[86016]: I0801 06:45:25.849154 86016 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Aug 01 06:45:26 minikube kubelet[86016]: I0801 06:45:26.491382 86016 topology_manager.go:200] "Topology Admit Handler"
Aug 01 06:45:26 minikube kubelet[86016]: I0801 06:45:26.570679 86016 reconciler.go:270]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3caabb21-e6c7-40c2-9668-431f4dc8f2ab-lib-modules\") pod \"kube-proxy-rhrq7\" (UID: \"3caabb21-e6c7-40c2-9668-431f4dc8f2ab\") " pod="kube-system/kube-proxy-rhrq7" Aug 01 06:45:26 minikube kubelet[86016]: I0801 06:45:26.570742 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cklt\" (UniqueName: \"kubernetes.io/projected/3caabb21-e6c7-40c2-9668-431f4dc8f2ab-kube-api-access-6cklt\") pod \"kube-proxy-rhrq7\" (UID: \"3caabb21-e6c7-40c2-9668-431f4dc8f2ab\") " pod="kube-system/kube-proxy-rhrq7" Aug 01 06:45:26 minikube kubelet[86016]: I0801 06:45:26.570770 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3caabb21-e6c7-40c2-9668-431f4dc8f2ab-xtables-lock\") pod \"kube-proxy-rhrq7\" (UID: \"3caabb21-e6c7-40c2-9668-431f4dc8f2ab\") " pod="kube-system/kube-proxy-rhrq7" Aug 01 06:45:26 minikube kubelet[86016]: I0801 06:45:26.570793 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3caabb21-e6c7-40c2-9668-431f4dc8f2ab-kube-proxy\") pod \"kube-proxy-rhrq7\" (UID: \"3caabb21-e6c7-40c2-9668-431f4dc8f2ab\") " pod="kube-system/kube-proxy-rhrq7" Aug 01 06:45:26 minikube kubelet[86016]: E0801 06:45:26.752837 86016 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 01 06:45:26 minikube kubelet[86016]: E0801 06:45:26.752905 86016 projected.go:192] Error preparing data for projected volume kube-api-access-6cklt for pod kube-system/kube-proxy-rhrq7: configmap "kube-root-ca.crt" not found Aug 01 06:45:26 minikube kubelet[86016]: E0801 06:45:26.753092 86016 nestedpendingoperations.go:335] Operation for 
"{volumeName:kubernetes.io/projected/3caabb21-e6c7-40c2-9668-431f4dc8f2ab-kube-api-access-6cklt podName:3caabb21-e6c7-40c2-9668-431f4dc8f2ab nodeName:}" failed. No retries permitted until 2022-08-01 06:45:27.252995773 +0000 UTC m=+13.973262025 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6cklt" (UniqueName: "kubernetes.io/projected/3caabb21-e6c7-40c2-9668-431f4dc8f2ab-kube-api-access-6cklt") pod "kube-proxy-rhrq7" (UID: "3caabb21-e6c7-40c2-9668-431f4dc8f2ab") : configmap "kube-root-ca.crt" not found Aug 01 06:45:26 minikube kubelet[86016]: I0801 06:45:26.808531 86016 topology_manager.go:200] "Topology Admit Handler" Aug 01 06:45:26 minikube kubelet[86016]: I0801 06:45:26.873814 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c84581-b892-4d9e-85f0-f66507eb3b05-config-volume\") pod \"coredns-6d4b75cb6d-489z5\" (UID: \"e0c84581-b892-4d9e-85f0-f66507eb3b05\") " pod="kube-system/coredns-6d4b75cb6d-489z5" Aug 01 06:45:26 minikube kubelet[86016]: I0801 06:45:26.873981 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlknf\" (UniqueName: \"kubernetes.io/projected/e0c84581-b892-4d9e-85f0-f66507eb3b05-kube-api-access-nlknf\") pod \"coredns-6d4b75cb6d-489z5\" (UID: \"e0c84581-b892-4d9e-85f0-f66507eb3b05\") " pod="kube-system/coredns-6d4b75cb6d-489z5" Aug 01 06:45:26 minikube kubelet[86016]: I0801 06:45:26.991500 86016 topology_manager.go:200] "Topology Admit Handler" Aug 01 06:45:27 minikube kubelet[86016]: I0801 06:45:27.076344 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfj9f\" (UniqueName: \"kubernetes.io/projected/16dc8cc7-1895-4dce-b51e-31027ed2ce27-kube-api-access-zfj9f\") pod \"coredns-6d4b75cb6d-w8nq4\" (UID: \"16dc8cc7-1895-4dce-b51e-31027ed2ce27\") " 
pod="kube-system/coredns-6d4b75cb6d-w8nq4" Aug 01 06:45:27 minikube kubelet[86016]: I0801 06:45:27.076463 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16dc8cc7-1895-4dce-b51e-31027ed2ce27-config-volume\") pod \"coredns-6d4b75cb6d-w8nq4\" (UID: \"16dc8cc7-1895-4dce-b51e-31027ed2ce27\") " pod="kube-system/coredns-6d4b75cb6d-w8nq4" Aug 01 06:45:28 minikube kubelet[86016]: E0801 06:45:28.561936 86016 kuberuntime_manager.go:1051] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: 7d4fc74b5ebadbe415a557ab7a5b1ec71fab3303676cdd5efab0444e5e766acf" podSandboxID="7d4fc74b5ebadbe415a557ab7a5b1ec71fab3303676cdd5efab0444e5e766acf" pod="kube-system/kube-proxy-rhrq7" Aug 01 06:45:28 minikube kubelet[86016]: E0801 06:45:28.561988 86016 generic.go:415] "PLEG: Write status" err="rpc error: code = Unknown desc = Error: No such container: 7d4fc74b5ebadbe415a557ab7a5b1ec71fab3303676cdd5efab0444e5e766acf" pod="kube-system/kube-proxy-rhrq7" Aug 01 06:45:29 minikube kubelet[86016]: I0801 06:45:29.047642 86016 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="4741b3bf10b6f0b02cca1ccf23ff9505f8887d1d24c54a70c57cd5a37d1b022b" Aug 01 06:45:29 minikube kubelet[86016]: I0801 06:45:29.170864 86016 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="eced8f0e4c81ee617a71d2586fd9e271e3a010e73ec33fe11cc7fdc26629a2f8" Aug 01 06:45:31 minikube kubelet[86016]: I0801 06:45:31.138873 86016 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7d4fc74b5ebadbe415a557ab7a5b1ec71fab3303676cdd5efab0444e5e766acf" Aug 01 06:48:19 minikube kubelet[86016]: I0801 06:48:19.677134 86016 topology_manager.go:200] "Topology Admit Handler" Aug 01 06:48:19 minikube kubelet[86016]: I0801 06:48:19.797190 86016 reconciler.go:270] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8d88dec3-9d49-423e-b7dc-a578398878df-tmp\") pod \"storage-provisioner\" (UID: \"8d88dec3-9d49-423e-b7dc-a578398878df\") " pod="kube-system/storage-provisioner" Aug 01 06:48:19 minikube kubelet[86016]: I0801 06:48:19.797344 86016 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmlcv\" (UniqueName: \"kubernetes.io/projected/8d88dec3-9d49-423e-b7dc-a578398878df-kube-api-access-gmlcv\") pod \"storage-provisioner\" (UID: \"8d88dec3-9d49-423e-b7dc-a578398878df\") " pod="kube-system/storage-provisioner" Aug 01 06:48:20 minikube kubelet[86016]: E0801 06:48:20.540326 86016 kuberuntime_manager.go:1051] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: ff4b8dd52f2a0126cba5e73f23fd40209c3d3fa256cd84372b02ca59ea5f0f92" podSandboxID="ff4b8dd52f2a0126cba5e73f23fd40209c3d3fa256cd84372b02ca59ea5f0f92" pod="kube-system/storage-provisioner" Aug 01 06:48:20 minikube kubelet[86016]: E0801 06:48:20.540402 86016 generic.go:415] "PLEG: Write status" err="rpc error: code = Unknown desc = Error: No such container: ff4b8dd52f2a0126cba5e73f23fd40209c3d3fa256cd84372b02ca59ea5f0f92" pod="kube-system/storage-provisioner" Aug 01 06:48:21 minikube kubelet[86016]: I0801 06:48:21.809567 86016 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ff4b8dd52f2a0126cba5e73f23fd40209c3d3fa256cd84372b02ca59ea5f0f92" Aug 01 06:50:13 minikube kubelet[86016]: W0801 06:50:13.421021 86016 sysinfo.go:203] Nodes topology is not available, providing CPU topology * * ==> storage-provisioner [bfb81b8caea5] <== * I0801 06:48:23.064388 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... I0801 06:48:23.076375 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! 
I0801 06:48:23.076439 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0801 06:48:23.093481 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0801 06:48:23.093755 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_83cb8aae-4409-4b6b-9464-90a2b928abe2! I0801 06:48:23.094454 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f8beb859-9b69-4cf8-b5c8-bd2c5d12e245", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_83cb8aae-4409-4b6b-9464-90a2b928abe2 became leader I0801 06:48:23.194218 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_83cb8aae-4409-4b6b-9464-90a2b928abe2!