==> Audit <==
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------|---------|------|---------|------------|----------|
| addons | enable ingress | minikube | mhr | v1.32.0 | 18 Apr 24 18:29 PDT | 18 Apr 24 18:29 PDT |
| addons | enable ingress-dns --registries IngressDNS=public.ecr.aws --images IngressDNS=h7i3i2k3/minikube-ingress-dns:0.0.3 | minikube | mhr | v1.32.0 | 18 Apr 24 18:29 PDT | 18 Apr 24 18:29 PDT |
| ip | | minikube | mhr | v1.32.0 | 18 Apr 24 18:30 PDT | 18 Apr 24 18:30 PDT |
| addons | configure ingress | minikube | mhr | v1.32.0 | 18 Apr 24 18:31 PDT | |
| addons | enable ingress | minikube | mhr | v1.32.0 | 18 Apr 24 18:31 PDT | 18 Apr 24 18:31 PDT |
| addons | enable ingress-dns --registries IngressDNS=public.ecr.aws --images IngressDNS=h7i3i2k3/minikube-ingress-dns:0.0.3 | minikube | mhr | v1.32.0 | 18 Apr 24 18:31 PDT | 18 Apr 24 18:31 PDT |
| delete | | minikube | mhr | v1.32.0 | 19 Apr 24 12:19 PDT | 19 Apr 24 12:19 PDT |
| start | -p minikube --extra-config=apiserver.service-node-port-range=1024-65535 --driver qemu --cpus 4 --memory 16G --disk-size 128G --network socket_vmnet --socket-vmnet-path /opt/homebrew/var/run/socket_vmnet --socket-vmnet-client-path /opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client | minikube | mhr | v1.32.0 | 19 Apr 24 12:19 PDT | 19 Apr 24 12:19 PDT |
| ip | | minikube | mhr | v1.32.0 | 19 Apr 24 12:20 PDT | 19 Apr 24 12:20 PDT |
| addons | configure ingress | minikube | mhr | v1.32.0 | 19 Apr 24 12:22 PDT | 19 Apr 24 12:22 PDT |
| addons | enable ingress | minikube | mhr | v1.32.0 | 19 Apr 24 12:22 PDT | 19 Apr 24 12:23 PDT |
| addons | enable ingress-dns --registries IngressDNS=public.ecr.aws --images IngressDNS=h7i3i2k3/minikube-ingress-dns:0.0.3 | minikube | mhr | v1.32.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| delete | | minikube | mhr | v1.32.0 | 19 Apr 24 12:39 PDT | 19 Apr 24 12:39 PDT |
| start | -p minikube --extra-config=apiserver.service-node-port-range=1024-65535 --driver qemu --cpus 4 --memory 16G --disk-size 128G --network socket_vmnet --socket-vmnet-path /opt/homebrew/var/run/socket_vmnet --socket-vmnet-client-path /opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client | minikube | mhr | v1.32.0 | 19 Apr 24 13:08 PDT | 19 Apr 24 13:09 PDT |
| ip | | minikube | mhr | v1.32.0 | 19 Apr 24 13:09 PDT | 19 Apr 24 13:09 PDT |
| addons | configure ingress | minikube | mhr | v1.32.0 | 19 Apr 24 13:16 PDT | 19 Apr 24 13:16 PDT |
| addons | enable ingress | minikube | mhr | v1.32.0 | 19 Apr 24 13:16 PDT | 19 Apr 24 13:17 PDT |
| addons | enable ingress-dns --registries IngressDNS=public.ecr.aws --images IngressDNS=h7i3i2k3/minikube-ingress-dns:0.0.3 | minikube | mhr | v1.32.0 | 19 Apr 24 13:17 PDT | 19 Apr 24 13:17 PDT |
| ssh | | minikube | mhr | v1.33.0 | 22 Apr 24 11:16 PDT | |
| ip | | minikube | mhr | v1.33.0 | 22 Apr 24 11:21 PDT | 22 Apr 24 11:21 PDT |
| ip | | minikube | mhr | v1.33.0 | 22 Apr 24 11:21 PDT | 22 Apr 24 11:21 PDT |
| ip | | minikube | mhr | v1.33.0 | 22 Apr 24 11:22 PDT | 22 Apr 24 11:22 PDT |
| ip | | minikube | mhr | v1.33.0 | 22 Apr 24 11:22 PDT | 22 Apr 24 11:22 PDT |
| stop | | minikube | mhr | v1.33.0 | 22 Apr 24 11:23 PDT | 22 Apr 24 11:23 PDT |
| start | | minikube | mhr | v1.33.0 | 22 Apr 24 11:23 PDT | 22 Apr 24 11:24 PDT |
| delete | | minikube | mhr | v1.33.0 | 22 Apr 24 11:24 PDT | 22 Apr 24 11:24 PDT |
| start | -p minikube --extra-config=apiserver.service-node-port-range=1024-65535 --driver qemu --cpus 4 --memory 16G --disk-size 128G --network socket_vmnet --socket-vmnet-path /opt/homebrew/var/run/socket_vmnet --socket-vmnet-client-path /opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client | minikube | mhr | v1.33.0 | 22 Apr 24 11:26 PDT | 22 Apr 24 11:27 PDT |
| ip | | minikube | mhr | v1.33.0 | 22 Apr 24 11:27 PDT | 22 Apr 24 11:27 PDT |
| addons | configure ingress | minikube | mhr | v1.33.0 | 22 Apr 24 11:28 PDT | 22 Apr 24 11:28 PDT |
| addons | enable ingress | minikube | mhr | v1.33.0 | 22 Apr 24 11:28 PDT | |
| delete | | minikube | mhr | v1.33.0 | 22 Apr 24 11:43 PDT | 22 Apr 24 11:43 PDT |
| start | -p minikube --extra-config=apiserver.service-node-port-range=1024-65535 --driver qemu --cpus 4 --memory 16G --disk-size 128G --network socket_vmnet --socket-vmnet-path /opt/homebrew/var/run/socket_vmnet --socket-vmnet-client-path /opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client | minikube | mhr | v1.33.0 | 22 Apr 24 11:43 PDT | 22 Apr 24 11:44 PDT |
| ip | | minikube | mhr | v1.33.0 | 22 Apr 24 11:44 PDT | 22 Apr 24 11:44 PDT |
| addons | configure ingress | minikube | mhr | v1.33.0 | 22 Apr 24 11:45 PDT | 22 Apr 24 11:45 PDT |
| addons | enable ingress | minikube | mhr | v1.33.0 | 22 Apr 24 11:45 PDT | |
| ssh | | minikube | mhr | v1.33.0 | 22 Apr 24 11:52 PDT | |
| addons | configure ingress | minikube | mhr | v1.33.0 | 22 Apr 24 11:56 PDT | |
| delete | | minikube | mhr | v1.33.0 | 22 Apr 24 12:11 PDT | 22 Apr 24 12:11 PDT |
| start | -p minikube --extra-config=apiserver.service-node-port-range=1024-65535 --driver qemu --cpus 4 --memory 16G --disk-size 128G --network socket_vmnet --socket-vmnet-path /opt/homebrew/var/run/socket_vmnet --socket-vmnet-client-path /opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client | minikube | mhr | v1.33.0 | 22 Apr 24 12:23 PDT | |
| delete | | minikube | mhr | v1.33.0 | 22 Apr 24 12:23 PDT | 22 Apr 24 12:23 PDT |
| start | -p minikube --extra-config=apiserver.service-node-port-range=1024-65535 --driver qemu --cpus 4 --memory 16G --disk-size 128G --network socket_vmnet --socket-vmnet-path /opt/homebrew/var/run/socket_vmnet --socket-vmnet-client-path /opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client | minikube | mhr | v1.33.0 | 22 Apr 24 12:23 PDT | 22 Apr 24 12:23 PDT |
| ip | | minikube | mhr | v1.33.0 | 22 Apr 24 12:24 PDT | 22 Apr 24 12:24 PDT |
| addons | configure ingress | minikube | mhr | v1.33.0 | 22 Apr 24 12:28 PDT | 22 Apr 24 12:28 PDT |
| addons | enable ingress | minikube | mhr | v1.33.0 | 22 Apr 24 12:28 PDT | |
| ip | | minikube | mhr | v1.33.0 | 22 Apr 24 13:40 PDT | 22 Apr 24 13:40 PDT |
| stop | | minikube | mhr | v1.33.0 | 23 Apr 24 11:40 PDT | 23 Apr 24 11:40 PDT |
| start | | minikube | mhr | v1.33.0 | 23 Apr 24 17:53 PDT | 23 Apr 24 17:54 PDT |
| delete | | minikube | mhr | v1.33.0 | 23 Apr 24 18:11 PDT | 23 Apr 24 18:11 PDT |
| delete | | minikube | mhr | v1.33.0 | 23 Apr 24 18:19 PDT | 23 Apr 24 18:19 PDT |
| start | -p minikube --extra-config=apiserver.service-node-port-range=1024-65535 --driver qemu --cpus 4 --memory 16G --disk-size 128G --network socket_vmnet --socket-vmnet-path /opt/homebrew/var/run/socket_vmnet --socket-vmnet-client-path /opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client | minikube | mhr | v1.33.0 | 23 Apr 24 19:30 PDT | 23 Apr 24 19:31 PDT |
| delete | | minikube | mhr | v1.33.0 | 23 Apr 24 19:31 PDT | 23 Apr 24 19:31 PDT |
| start | -p minikube --extra-config=apiserver.service-node-port-range=1024-65535 --driver qemu --cpus 4 --memory 16G --disk-size 128G --network socket_vmnet --socket-vmnet-path /opt/homebrew/var/run/socket_vmnet --socket-vmnet-client-path /opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client | minikube | mhr | v1.33.0 | 23 Apr 24 19:32 PDT | 23 Apr 24 19:33 PDT |
| addons | enable ingress | minikube | mhr | v1.33.0 | 23 Apr 24 19:33 PDT | |
| delete | | minikube | mhr | v1.33.0 | 23 Apr 24 19:42 PDT | 23 Apr 24 19:42 PDT |
| delete | | minikube | mhr | v1.33.0 | 23 Apr 24 19:44 PDT | 23 Apr 24 19:44 PDT |
| start | -p minikube | minikube | mhr | v1.33.0 | 23 Apr 24 19:45 PDT | 23 Apr 24 19:45 PDT |
| addons | enable ingress | minikube | mhr | v1.33.0 | 23 Apr 24 19:50 PDT | |
| delete | | minikube | mhr | v1.33.0 | 23 Apr 24 20:11 PDT | 23 Apr 24 20:11 PDT |
| start | -p minikube | minikube | mhr | v1.33.0 | 23 Apr 24 20:12 PDT | 23 Apr 24 20:12 PDT |
| addons | enable ingress | minikube | mhr | v1.33.0 | 23 Apr 24 20:13 PDT | |

==> Last Start <==
Log file created at: 2024/04/23 20:12:11
Running on machine: mhr-lightspark
Binary: Built with gc go1.22.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0423 20:12:11.455325 10704 out.go:291] Setting OutFile to fd 1 ...
I0423 20:12:11.455595 10704 out.go:343] isatty.IsTerminal(1) = true
I0423 20:12:11.455598 10704 out.go:304] Setting ErrFile to fd 2...
I0423 20:12:11.455601 10704 out.go:343] isatty.IsTerminal(2) = true
I0423 20:12:11.455723 10704 root.go:338] Updating PATH: /Users/mhr/.minikube/bin
I0423 20:12:11.456166 10704 out.go:298] Setting JSON to false
I0423 20:12:11.488173 10704 start.go:129] hostinfo: {"hostname":"mhr-lightspark","uptime":424367,"bootTime":1713503964,"procs":791,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"d947d30b-503c-57b5-b505-6e87209fb375"}
W0423 20:12:11.488268 10704 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0423 20:12:11.495995 10704 out.go:177] 😄 minikube v1.33.0 on Darwin 14.4.1 (arm64)
I0423 20:12:11.506018 10704 notify.go:220] Checking for updates...
I0423 20:12:11.506231 10704 driver.go:392] Setting default libvirt URI to qemu:///system
I0423 20:12:11.506261 10704 global.go:112] Querying for installed drivers using PATH=/Users/mhr/.minikube/bin:/Users/mhr/.nvm/versions/node/v18.15.0/bin:/Users/mhr/go/bin:/Users/mhr/.pyenv/shims:/Users/mhr/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/Users/mhr/.nvm/versions/node/v18.15.0/bin:/Users/mhr/go/bin:/Users/mhr/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/Users/mhr/.cargo/bin:/opt/homebrew/opt/fzf/bin
I0423 20:12:11.506338 10704 global.go:133] qemu2 default: true priority: 7, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0423 20:12:11.506468 10704 global.go:133] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/ Version:}
I0423 20:12:11.506533 10704 global.go:133] vmware default: false priority: 5, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "vmrun": executable file not found in $PATH Reason: Fix:Install vmrun Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/ Version:}
W0423 20:12:11.552248 10704 docker.go:169] docker version returned error: exit status 1
I0423 20:12:11.552325 10704 global.go:133] docker default: true priority: 9, state: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}:{{.Server.Platform.Name}}" exit status 1: Cannot connect to the Docker daemon at unix:///Users/mhr/.docker/run/docker.sock. Is the docker daemon running?
Reason:PROVIDER_DOCKER_NOT_RUNNING Fix:Start the Docker service Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
I0423 20:12:11.552453 10704 global.go:133] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:}
I0423 20:12:11.552462 10704 global.go:133] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0423 20:12:11.552518 10704 global.go:133] hyperkit default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "hyperkit": executable file not found in $PATH Reason: Fix:Run 'brew install hyperkit' Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/ Version:}
I0423 20:12:11.552563 10704 global.go:133] parallels default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "prlctl": executable file not found in $PATH Reason: Fix:Install Parallels Desktop for Mac Doc:https://minikube.sigs.k8s.io/docs/drivers/parallels/ Version:}
I0423 20:12:11.552572 10704 driver.go:314] not recommending "ssh" due to default: false
I0423 20:12:11.552575 10704 driver.go:309] not recommending "docker" due to health: "docker version --format {{.Server.Os}}-{{.Server.Version}}:{{.Server.Platform.Name}}" exit status 1: Cannot connect to the Docker daemon at unix:///Users/mhr/.docker/run/docker.sock. Is the docker daemon running?
I0423 20:12:11.552595 10704 driver.go:349] Picked: qemu2
I0423 20:12:11.552599 10704 driver.go:350] Alternatives: [ssh]
I0423 20:12:11.552601 10704 driver.go:351] Rejects: [virtualbox vmware docker podman hyperkit parallels]
I0423 20:12:11.560977 10704 out.go:177] ✨ Automatically selected the qemu2 driver
I0423 20:12:11.564985 10704 start.go:297] selected driver: qemu2
I0423 20:12:11.564988 10704 start.go:901] validating driver "qemu2" against <nil>
I0423 20:12:11.564993 10704 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0423 20:12:11.565069 10704 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0423 20:12:11.569827 10704 out.go:177] 🌐 Automatically selected the socket_vmnet network
I0423 20:12:11.574012 10704 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65536MB, container=0MB
I0423 20:12:11.574073 10704 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
I0423 20:12:11.574104 10704 cni.go:84] Creating CNI manager for ""
I0423 20:12:11.574109 10704 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0423 20:12:11.574112 10704 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0423 20:12:11.574138 10704 start.go:340] cluster config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:6000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/opt/homebrew/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0423 20:12:11.585560 10704 iso.go:125] acquiring lock: {Name:mke689cc24874de7cd202508ce1edb76b5cc2d77 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0423 20:12:11.593902 10704 out.go:177] 👍 Starting "minikube" primary control-plane node in "minikube" cluster
I0423 20:12:11.597890 10704 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0423 20:12:11.597901 10704 preload.go:147] Found local preload: /Users/mhr/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0423 20:12:11.597906 10704 cache.go:56] Caching tarball of preloaded images
I0423 20:12:11.597964 10704 preload.go:173] Found /Users/mhr/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0423 20:12:11.597971 10704 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0423 20:12:11.598185 10704 profile.go:143] Saving config to /Users/mhr/.minikube/profiles/minikube/config.json ...
I0423 20:12:11.598195 10704 lock.go:35] WriteFile acquiring /Users/mhr/.minikube/profiles/minikube/config.json: {Name:mke98c5922decd85da853d80c3b4197fbe8ad6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0423 20:12:11.598406 10704 start.go:360] acquireMachinesLock for minikube: {Name:mk8d18bce27c7cf7296e2fc8818c6198a840a17e Clock:{} Delay:500ms Timeout:13m0s Cancel:}
I0423 20:12:11.598437 10704 start.go:364] duration metric: took 27.459µs to acquireMachinesLock for "minikube"
I0423 20:12:11.598443 10704 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:6000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/opt/homebrew/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0423 20:12:11.598467 10704 start.go:125] createHost starting for "" (driver="qemu2")
I0423 20:12:11.606852 10704 out.go:204] 🔥 Creating qemu2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
I0423 20:12:11.629794 10704 start.go:159] libmachine.API.Create for "minikube" (driver="qemu2")
I0423 20:12:11.629820 10704 client.go:168] LocalClient.Create starting
I0423 20:12:11.629894 10704 main.go:141] libmachine: Reading certificate data from /Users/mhr/.minikube/certs/ca.pem
I0423 20:12:11.629930 10704 main.go:141] libmachine: Decoding PEM data...
I0423 20:12:11.629939 10704 main.go:141] libmachine: Parsing certificate...
I0423 20:12:11.629981 10704 main.go:141] libmachine: Reading certificate data from /Users/mhr/.minikube/certs/cert.pem
I0423 20:12:11.630005 10704 main.go:141] libmachine: Decoding PEM data...
I0423 20:12:11.630011 10704 main.go:141] libmachine: Parsing certificate...
I0423 20:12:11.630426 10704 main.go:141] libmachine: Downloading /Users/mhr/.minikube/cache/boot2docker.iso from file:///Users/mhr/.minikube/cache/iso/arm64/minikube-v1.33.0-arm64.iso...
I0423 20:12:11.741498 10704 main.go:141] libmachine: Creating SSH key...
I0423 20:12:11.772636 10704 main.go:141] libmachine: Creating Disk image...
I0423 20:12:11.772643 10704 main.go:141] libmachine: Creating 20000 MB hard disk image...
I0423 20:12:11.772831 10704 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/mhr/.minikube/machines/minikube/disk.qcow2.raw /Users/mhr/.minikube/machines/minikube/disk.qcow2
I0423 20:12:11.787246 10704 main.go:141] libmachine: STDOUT:
I0423 20:12:11.787264 10704 main.go:141] libmachine: STDERR:
I0423 20:12:11.787299 10704 main.go:141] libmachine: executing: qemu-img resize /Users/mhr/.minikube/machines/minikube/disk.qcow2 +20000M
I0423 20:12:11.796901 10704 main.go:141] libmachine: STDOUT: Image resized.
I0423 20:12:11.796916 10704 main.go:141] libmachine: STDERR:
I0423 20:12:11.796935 10704 main.go:141] libmachine: DONE writing to /Users/mhr/.minikube/machines/minikube/disk.qcow2.raw and /Users/mhr/.minikube/machines/minikube/disk.qcow2
I0423 20:12:11.796938 10704 main.go:141] libmachine: Starting QEMU VM...
I0423 20:12:11.796957 10704 main.go:141] libmachine: executing: /opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client /opt/homebrew/var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 6000 -smp 2 -boot d -cdrom /Users/mhr/.minikube/machines/minikube/boot2docker.iso -qmp unix:/Users/mhr/.minikube/machines/minikube/monitor,server,nowait -pidfile /Users/mhr/.minikube/machines/minikube/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:e3:2d:30:e1:3d -netdev socket,id=net0,fd=3 -daemonize /Users/mhr/.minikube/machines/minikube/disk.qcow2
I0423 20:12:11.838777 10704 main.go:141] libmachine: STDOUT:
I0423 20:12:11.838798 10704 main.go:141] libmachine: STDERR:
I0423 20:12:11.838801 10704 main.go:141] libmachine: Attempt 0
I0423 20:12:11.838817 10704 main.go:141] libmachine: Searching for be:e3:2d:30:e1:3d in /var/db/dhcpd_leases ...
I0423 20:12:11.838892 10704 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
I0423 20:12:11.838917 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8a:3f:4d:a:b4:34 ID:1,8a:3f:4d:a:b4:34 Lease:0x6629c3b9}
I0423 20:12:11.838921 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:83:60:52:c:80 ID:1,72:83:60:52:c:80 Lease:0x6629c0d2}
I0423 20:12:11.838925 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a6:7a:0:37:f8:39 ID:1,a6:7a:0:37:f8:39 Lease:0x6629c050}
I0423 20:12:11.838929 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:42:80:2a:e3:37:2f ID:1,42:80:2a:e3:37:2f Lease:0x6629a9aa}
I0423 20:12:11.838932 10704 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:2a:88:99:ad:e4:d8 ID:1,2a:88:99:ad:e4:d8 Lease:0x66231a48}
I0423 20:12:13.839788 10704 main.go:141] libmachine: Attempt 1
I0423 20:12:13.839814 10704 main.go:141] libmachine: Searching for be:e3:2d:30:e1:3d in /var/db/dhcpd_leases ...
I0423 20:12:13.840094 10704 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
I0423 20:12:13.840118 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8a:3f:4d:a:b4:34 ID:1,8a:3f:4d:a:b4:34 Lease:0x6629c3b9}
I0423 20:12:13.840128 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:83:60:52:c:80 ID:1,72:83:60:52:c:80 Lease:0x6629c0d2}
I0423 20:12:13.840149 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a6:7a:0:37:f8:39 ID:1,a6:7a:0:37:f8:39 Lease:0x6629c050}
I0423 20:12:13.840158 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:42:80:2a:e3:37:2f ID:1,42:80:2a:e3:37:2f Lease:0x6629a9aa}
I0423 20:12:13.840167 10704 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:2a:88:99:ad:e4:d8 ID:1,2a:88:99:ad:e4:d8 Lease:0x66231a48}
I0423 20:12:15.841310 10704 main.go:141] libmachine: Attempt 2
I0423 20:12:15.841341 10704 main.go:141] libmachine: Searching for be:e3:2d:30:e1:3d in /var/db/dhcpd_leases ...
I0423 20:12:15.841511 10704 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
I0423 20:12:15.841533 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8a:3f:4d:a:b4:34 ID:1,8a:3f:4d:a:b4:34 Lease:0x6629c3b9}
I0423 20:12:15.841544 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:83:60:52:c:80 ID:1,72:83:60:52:c:80 Lease:0x6629c0d2}
I0423 20:12:15.841552 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a6:7a:0:37:f8:39 ID:1,a6:7a:0:37:f8:39 Lease:0x6629c050}
I0423 20:12:15.841560 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:42:80:2a:e3:37:2f ID:1,42:80:2a:e3:37:2f Lease:0x6629a9aa}
I0423 20:12:15.841568 10704 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:2a:88:99:ad:e4:d8 ID:1,2a:88:99:ad:e4:d8 Lease:0x66231a48}
I0423 20:12:17.842654 10704 main.go:141] libmachine: Attempt 3
I0423 20:12:17.842669 10704 main.go:141] libmachine: Searching for be:e3:2d:30:e1:3d in /var/db/dhcpd_leases ...
I0423 20:12:17.842757 10704 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
I0423 20:12:17.842764 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8a:3f:4d:a:b4:34 ID:1,8a:3f:4d:a:b4:34 Lease:0x6629c3b9}
I0423 20:12:17.842768 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:83:60:52:c:80 ID:1,72:83:60:52:c:80 Lease:0x6629c0d2}
I0423 20:12:17.842771 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a6:7a:0:37:f8:39 ID:1,a6:7a:0:37:f8:39 Lease:0x6629c050}
I0423 20:12:17.842774 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:42:80:2a:e3:37:2f ID:1,42:80:2a:e3:37:2f Lease:0x6629a9aa}
I0423 20:12:17.842779 10704 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:2a:88:99:ad:e4:d8 ID:1,2a:88:99:ad:e4:d8 Lease:0x66231a48}
I0423 20:12:19.843835 10704 main.go:141] libmachine: Attempt 4
I0423 20:12:19.843843 10704 main.go:141] libmachine: Searching for be:e3:2d:30:e1:3d in /var/db/dhcpd_leases ...
I0423 20:12:19.843929 10704 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
I0423 20:12:19.843937 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8a:3f:4d:a:b4:34 ID:1,8a:3f:4d:a:b4:34 Lease:0x6629c3b9}
I0423 20:12:19.843940 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:83:60:52:c:80 ID:1,72:83:60:52:c:80 Lease:0x6629c0d2}
I0423 20:12:19.843943 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a6:7a:0:37:f8:39 ID:1,a6:7a:0:37:f8:39 Lease:0x6629c050}
I0423 20:12:19.843947 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:42:80:2a:e3:37:2f ID:1,42:80:2a:e3:37:2f Lease:0x6629a9aa}
I0423 20:12:19.843950 10704 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:2a:88:99:ad:e4:d8 ID:1,2a:88:99:ad:e4:d8 Lease:0x66231a48}
I0423 20:12:21.845018 10704 main.go:141] libmachine: Attempt 5
I0423 20:12:21.845031 10704 main.go:141] libmachine: Searching for be:e3:2d:30:e1:3d in /var/db/dhcpd_leases ...
I0423 20:12:21.845146 10704 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
I0423 20:12:21.845156 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8a:3f:4d:a:b4:34 ID:1,8a:3f:4d:a:b4:34 Lease:0x6629c3b9}
I0423 20:12:21.845173 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:83:60:52:c:80 ID:1,72:83:60:52:c:80 Lease:0x6629c0d2}
I0423 20:12:21.845177 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a6:7a:0:37:f8:39 ID:1,a6:7a:0:37:f8:39 Lease:0x6629c050}
I0423 20:12:21.845180 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:42:80:2a:e3:37:2f ID:1,42:80:2a:e3:37:2f Lease:0x6629a9aa}
I0423 20:12:21.845183 10704 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:2a:88:99:ad:e4:d8 ID:1,2a:88:99:ad:e4:d8 Lease:0x66231a48}
I0423 20:12:23.846304 10704 main.go:141] libmachine: Attempt 6
I0423 20:12:23.846313 10704 main.go:141] libmachine: Searching for be:e3:2d:30:e1:3d in /var/db/dhcpd_leases ...
I0423 20:12:23.846384 10704 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
I0423 20:12:23.846391 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8a:3f:4d:a:b4:34 ID:1,8a:3f:4d:a:b4:34 Lease:0x6629c3b9}
I0423 20:12:23.846395 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:83:60:52:c:80 ID:1,72:83:60:52:c:80 Lease:0x6629c0d2}
I0423 20:12:23.846406 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a6:7a:0:37:f8:39 ID:1,a6:7a:0:37:f8:39 Lease:0x6629c050}
I0423 20:12:23.846409 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:42:80:2a:e3:37:2f ID:1,42:80:2a:e3:37:2f Lease:0x6629a9aa}
I0423 20:12:23.846412 10704 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:2a:88:99:ad:e4:d8 ID:1,2a:88:99:ad:e4:d8 Lease:0x66231a48}
I0423 20:12:25.846914 10704 main.go:141] libmachine: Attempt 7
I0423 20:12:25.846929 10704 main.go:141] libmachine: Searching for be:e3:2d:30:e1:3d in /var/db/dhcpd_leases ...
I0423 20:12:25.847060 10704 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
I0423 20:12:25.847070 10704 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:be:e3:2d:30:e1:3d ID:1,be:e3:2d:30:e1:3d Lease:0x6629ca18}
I0423 20:12:25.847072 10704 main.go:141] libmachine: Found match: be:e3:2d:30:e1:3d
I0423 20:12:25.847086 10704 main.go:141] libmachine: IP: 192.168.105.6
I0423 20:12:25.847089 10704 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
I0423 20:12:27.857137 10704 machine.go:94] provisionDockerMachine start ...
I0423 20:12:27.857303 10704 main.go:141] libmachine: Using SSH client type: native
I0423 20:12:27.857546 10704 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x105333280] 0x105335ae0 [] 0s} 192.168.105.6 22 }
I0423 20:12:27.857552 10704 main.go:141] libmachine: About to run SSH command:
hostname
I0423 20:12:27.930148 10704 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0423 20:12:27.930159 10704 buildroot.go:166] provisioning hostname "minikube"
I0423 20:12:27.930268 10704 main.go:141] libmachine: Using SSH client type: native
I0423 20:12:27.930497 10704 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x105333280] 0x105335ae0 [] 0s} 192.168.105.6 22 }
I0423 20:12:27.930504 10704 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0423 20:12:27.993383 10704 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0423 20:12:27.993462 10704 main.go:141] libmachine: Using SSH client type: native
I0423 20:12:27.993571 10704 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x105333280] 0x105335ae0 [] 0s} 192.168.105.6 22 }
I0423 20:12:27.993576 10704 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
	else
		echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
	fi
fi
I0423 20:12:28.049356 10704 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0423 20:12:28.049367 10704 buildroot.go:172] set auth options {CertDir:/Users/mhr/.minikube CaCertPath:/Users/mhr/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/mhr/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/mhr/.minikube/machines/server.pem ServerKeyPath:/Users/mhr/.minikube/machines/server-key.pem ClientKeyPath:/Users/mhr/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/mhr/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/mhr/.minikube}
I0423 20:12:28.049378 10704 buildroot.go:174] setting up certificates
I0423 20:12:28.049387 10704 provision.go:84] configureAuth start
I0423 20:12:28.049390 10704 provision.go:143] copyHostCerts
I0423 20:12:28.049453 10704 exec_runner.go:144] found /Users/mhr/.minikube/ca.pem, removing ...
I0423 20:12:28.049458 10704 exec_runner.go:203] rm: /Users/mhr/.minikube/ca.pem
I0423 20:12:28.049944 10704 exec_runner.go:151] cp: /Users/mhr/.minikube/certs/ca.pem --> /Users/mhr/.minikube/ca.pem (1070 bytes)
I0423 20:12:28.050197 10704 exec_runner.go:144] found /Users/mhr/.minikube/cert.pem, removing ...
I0423 20:12:28.050199 10704 exec_runner.go:203] rm: /Users/mhr/.minikube/cert.pem
I0423 20:12:28.050414 10704 exec_runner.go:151] cp: /Users/mhr/.minikube/certs/cert.pem --> /Users/mhr/.minikube/cert.pem (1111 bytes)
I0423 20:12:28.050643 10704 exec_runner.go:144] found /Users/mhr/.minikube/key.pem, removing ...
I0423 20:12:28.050646 10704 exec_runner.go:203] rm: /Users/mhr/.minikube/key.pem
I0423 20:12:28.050964 10704 exec_runner.go:151] cp: /Users/mhr/.minikube/certs/key.pem --> /Users/mhr/.minikube/key.pem (1675 bytes)
I0423 20:12:28.051159 10704 provision.go:117] generating server cert: /Users/mhr/.minikube/machines/server.pem ca-key=/Users/mhr/.minikube/certs/ca.pem private-key=/Users/mhr/.minikube/certs/ca-key.pem org=mhr.minikube san=[127.0.0.1 192.168.105.6 localhost minikube]
I0423 20:12:28.122027 10704 provision.go:177] copyRemoteCerts
I0423 20:12:28.122093 10704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0423 20:12:28.122099 10704 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/mhr/.minikube/machines/minikube/id_rsa Username:docker}
I0423 20:12:28.151994 10704 ssh_runner.go:362] scp /Users/mhr/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes)
I0423 20:12:28.159427 10704 ssh_runner.go:362] scp /Users/mhr/.minikube/machines/server.pem --> /etc/docker/server.pem (1172 bytes)
I0423 20:12:28.166962 10704 ssh_runner.go:362] scp /Users/mhr/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0423 20:12:28.174795 10704 provision.go:87] duration metric: took 125.405916ms to configureAuth
I0423 20:12:28.174800 10704 buildroot.go:189] setting minikube options for container-runtime
I0423 20:12:28.174888 10704 config.go:182] Loaded profile config "minikube": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0423 20:12:28.174930 10704 main.go:141] libmachine: Using SSH client type: native
I0423 20:12:28.175019 10704 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x105333280] 0x105335ae0 [] 0s} 192.168.105.6 22 }
I0423 20:12:28.175022 10704 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0423 20:12:28.227637 10704 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0423 20:12:28.227640 10704 buildroot.go:70] root file system type: tmpfs
I0423 20:12:28.227699 10704 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0423 20:12:28.227751 10704 main.go:141] libmachine: Using SSH client type: native
I0423 20:12:28.227836 10704 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x105333280] 0x105335ae0 [] 0s} 192.168.105.6 22 }
I0423 20:12:28.227864 10704 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0423 20:12:28.283930 10704 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0423 20:12:28.283985 10704 main.go:141] libmachine: Using SSH client type: native
I0423 20:12:28.284076 10704 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x105333280] 0x105335ae0 [] 0s} 192.168.105.6 22 }
I0423 20:12:28.284082 10704 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0423 20:12:29.610830 10704 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0423 20:12:29.610842 10704 machine.go:97] duration metric: took 1.75370325s to provisionDockerMachine
I0423 20:12:29.610847 10704 client.go:171] duration metric: took 17.9810965s to LocalClient.Create
I0423 20:12:29.610859 10704 start.go:167] duration metric: took 17.981138708s to libmachine.API.Create "minikube"
I0423 20:12:29.610862 10704 start.go:293] postStartSetup for "minikube" (driver="qemu2")
I0423 20:12:29.610867 10704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0423 20:12:29.610941 10704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0423 20:12:29.610948 10704 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/mhr/.minikube/machines/minikube/id_rsa Username:docker}
I0423 20:12:29.642245 10704 ssh_runner.go:195] Run: cat /etc/os-release
I0423 20:12:29.643685 10704 info.go:137] Remote host: Buildroot 2023.02.9
I0423 20:12:29.643694 10704 filesync.go:126] Scanning /Users/mhr/.minikube/addons for local assets ...
I0423 20:12:29.643787 10704 filesync.go:126] Scanning /Users/mhr/.minikube/files for local assets ...
I0423 20:12:29.643822 10704 start.go:296] duration metric: took 32.958ms for postStartSetup
I0423 20:12:29.644443 10704 profile.go:143] Saving config to /Users/mhr/.minikube/profiles/minikube/config.json ...
I0423 20:12:29.644860 10704 start.go:128] duration metric: took 18.046461458s to createHost
I0423 20:12:29.644895 10704 main.go:141] libmachine: Using SSH client type: native
I0423 20:12:29.645004 10704 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x105333280] 0x105335ae0 [] 0s} 192.168.105.6 22 }
I0423 20:12:29.645006 10704 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0423 20:12:29.698776 10704 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713928349.946421044
I0423 20:12:29.698781 10704 fix.go:216] guest clock: 1713928349.946421044
I0423 20:12:29.698785 10704 fix.go:229] Guest: 2024-04-23 20:12:29.946421044 -0700 PDT Remote: 2024-04-23 20:12:29.644862 -0700 PDT m=+18.216049085 (delta=301.559044ms)
I0423 20:12:29.698795 10704 fix.go:200] guest clock delta is within tolerance: 301.559044ms
I0423 20:12:29.698797 10704 start.go:83] releasing machines lock for "minikube", held for 18.100430458s
I0423 20:12:29.700460 10704 ssh_runner.go:195] Run: cat /version.json
I0423 20:12:29.700466 10704 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/mhr/.minikube/machines/minikube/id_rsa Username:docker}
I0423 20:12:29.700493 10704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0423 20:12:29.700510 10704 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/mhr/.minikube/machines/minikube/id_rsa Username:docker}
I0423 20:12:29.731916 10704 ssh_runner.go:195] Run: systemctl --version
I0423 20:12:29.869250 10704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0423 20:12:29.872099 10704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0423 20:12:29.872175 10704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0423 20:12:29.879306 10704 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0423 20:12:29.879311 10704 start.go:494] detecting cgroup driver to use...
I0423 20:12:29.879381 10704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0423 20:12:29.887299 10704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0423 20:12:29.891340 10704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0423 20:12:29.895082 10704 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0423 20:12:29.895118 10704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0423 20:12:29.898890 10704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0423 20:12:29.902697 10704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0423 20:12:29.906479 10704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0423 20:12:29.910381 10704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0423 20:12:29.913859 10704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0423 20:12:29.917433 10704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0423 20:12:29.920876 10704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0423 20:12:29.923972 10704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0423 20:12:29.926984 10704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0423 20:12:29.930463 10704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0423 20:12:30.001230 10704 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0423 20:12:30.011528 10704 start.go:494] detecting cgroup driver to use...
I0423 20:12:30.011576 10704 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0423 20:12:30.016693 10704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0423 20:12:30.021626 10704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0423 20:12:30.027596 10704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0423 20:12:30.032453 10704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0423 20:12:30.037120 10704 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0423 20:12:30.077358 10704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0423 20:12:30.082658 10704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0423 20:12:30.088686 10704 ssh_runner.go:195] Run: which cri-dockerd
I0423 20:12:30.089920 10704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0423 20:12:30.092959 10704 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0423 20:12:30.098365 10704 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0423 20:12:30.160255 10704 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0423 20:12:30.224746 10704 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0423 20:12:30.224803 10704 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0423 20:12:30.230383 10704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0423 20:12:30.292439 10704 ssh_runner.go:195] Run: sudo systemctl restart docker
I0423 20:12:32.445116 10704 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.15267475s)
I0423 20:12:32.445220 10704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0423 20:12:32.450210 10704 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0423 20:12:32.456457 10704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0423 20:12:32.461043 10704 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0423 20:12:32.523446 10704 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0423 20:12:32.583998 10704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0423 20:12:32.644515 10704 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0423 20:12:32.650559 10704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0423 20:12:32.655698 10704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0423 20:12:32.715410 10704 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0423 20:12:32.741720 10704 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0423 20:12:32.741802 10704 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0423 20:12:32.743974 10704 start.go:562] Will wait 60s for crictl version
I0423 20:12:32.744008 10704 ssh_runner.go:195] Run: which crictl
I0423 20:12:32.745344 10704 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0423 20:12:32.762917 10704 start.go:578] Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  26.0.1
RuntimeApiVersion:  v1
I0423 20:12:32.762951 10704 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0423 20:12:32.771202 10704 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0423 20:12:32.785132 10704 out.go:204] 🐳 Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
I0423 20:12:32.785236 10704 ssh_runner.go:195] Run: grep 192.168.105.1 host.minikube.internal$ /etc/hosts
I0423 20:12:32.786828 10704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0423 20:12:32.790769 10704 kubeadm.go:877] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:6000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/opt/homebrew/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0423 20:12:32.790826 10704 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0423 20:12:32.790857 10704 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0423 20:12:32.794730 10704 docker.go:685] Got preloaded images:
I0423 20:12:32.794732 10704 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
I0423 20:12:32.794769 10704 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0423 20:12:32.798231 10704 ssh_runner.go:195] Run: which lz4
I0423 20:12:32.799533 10704 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0423 20:12:32.800825 10704 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0423 20:12:32.800832 10704 ssh_runner.go:362] scp /Users/mhr/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (335341169 bytes)
I0423 20:12:33.824439 10704 docker.go:649] duration metric: took 1.024951167s to copy over tarball
I0423 20:12:33.824541 10704 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0423 20:12:34.735446 10704 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0423 20:12:34.750583 10704 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0423 20:12:34.754587 10704 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
I0423 20:12:34.760297 10704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0423 20:12:34.831696 10704 ssh_runner.go:195] Run: sudo systemctl restart docker
I0423 20:12:37.114258 10704 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.282534542s)
I0423 20:12:37.114372 10704 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0423 20:12:37.119891 10704 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0423 20:12:37.119910 10704 cache_images.go:84] Images are preloaded, skipping loading
I0423 20:12:37.119915 10704 kubeadm.go:928] updating node { 192.168.105.6 8443 v1.30.0 docker true true} ...
I0423 20:12:37.119979 10704 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6

[Install]
 config:
{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0423 20:12:37.120043 10704 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0423 20:12:37.126527 10704 cni.go:84] Creating CNI manager for ""
I0423 20:12:37.126532 10704 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0423 20:12:37.126535 10704 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0423 20:12:37.126542 10704 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0423 20:12:37.126587 10704 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.105.6
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.105.6
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

I0423 20:12:37.126656 10704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
I0423 20:12:37.130519 10704 binaries.go:44] Found k8s binaries, skipping transfer
I0423 20:12:37.130565 10704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0423 20:12:37.133882 10704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
I0423 20:12:37.139461 10704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0423 20:12:37.144619 10704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
I0423 20:12:37.150137 10704 ssh_runner.go:195] Run: grep 192.168.105.6 control-plane.minikube.internal$ /etc/hosts
I0423 20:12:37.151453 10704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0423 20:12:37.155294 10704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0423 20:12:37.220187 10704 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0423 20:12:37.228906 10704 certs.go:68] Setting up /Users/mhr/.minikube/profiles/minikube for IP: 192.168.105.6
I0423 20:12:37.228908 10704 certs.go:194] generating shared ca certs ...
I0423 20:12:37.228918 10704 certs.go:226] acquiring lock for ca certs: {Name:mk7d3402db85b7aac2098cccaca56565dd44c865 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0423 20:12:37.229093 10704 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/mhr/.minikube/ca.key
I0423 20:12:37.229152 10704 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/mhr/.minikube/proxy-client-ca.key
I0423 20:12:37.229161 10704 certs.go:256] generating profile certs ...
I0423 20:12:37.229188 10704 certs.go:363] generating signed profile cert for "minikube-user": /Users/mhr/.minikube/profiles/minikube/client.key
I0423 20:12:37.229194 10704 crypto.go:68] Generating cert /Users/mhr/.minikube/profiles/minikube/client.crt with IP's: []
I0423 20:12:37.388930 10704 crypto.go:156] Writing cert to /Users/mhr/.minikube/profiles/minikube/client.crt ...
I0423 20:12:37.388937 10704 lock.go:35] WriteFile acquiring /Users/mhr/.minikube/profiles/minikube/client.crt: {Name:mkc1cb4071bcb73647cebbe167ea3c6e24c36c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0423 20:12:37.389247 10704 crypto.go:164] Writing key to /Users/mhr/.minikube/profiles/minikube/client.key ...
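Note: the two /etc/hosts updates above (host.minikube.internal earlier, control-plane.minikube.internal here) follow a filter-then-append pattern that keeps the edit idempotent: any old record for the name is stripped before the current one is appended, and the write goes through a temp file so a failed pipeline leaves /etc/hosts intact. A standalone sketch of the same pattern; NAME and IP here are illustrative placeholders, not values taken from this log:

    NAME="control-plane.minikube.internal"   # placeholder
    IP="192.168.105.6"                       # placeholder
    # Drop any existing tab-separated record for NAME, append the fresh one,
    # and only then overwrite /etc/hosts.
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts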
I0423 20:12:37.389250 10704 lock.go:35] WriteFile acquiring /Users/mhr/.minikube/profiles/minikube/client.key: {Name:mkdfd67290fa277583efcd5b7d700ec984fe4375 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0423 20:12:37.389386 10704 certs.go:363] generating signed profile cert for "minikube": /Users/mhr/.minikube/profiles/minikube/apiserver.key.6f782735
I0423 20:12:37.389392 10704 crypto.go:68] Generating cert /Users/mhr/.minikube/profiles/minikube/apiserver.crt.6f782735 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.6]
I0423 20:12:37.579260 10704 crypto.go:156] Writing cert to /Users/mhr/.minikube/profiles/minikube/apiserver.crt.6f782735 ...
I0423 20:12:37.579265 10704 lock.go:35] WriteFile acquiring /Users/mhr/.minikube/profiles/minikube/apiserver.crt.6f782735: {Name:mk9305aae2807df1c518459303507c2454c0d2eb Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0423 20:12:37.579498 10704 crypto.go:164] Writing key to /Users/mhr/.minikube/profiles/minikube/apiserver.key.6f782735 ...
I0423 20:12:37.579500 10704 lock.go:35] WriteFile acquiring /Users/mhr/.minikube/profiles/minikube/apiserver.key.6f782735: {Name:mkcb15b3ba584ea327d79b1d0ae2b2b260168b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0423 20:12:37.579625 10704 certs.go:381] copying /Users/mhr/.minikube/profiles/minikube/apiserver.crt.6f782735 -> /Users/mhr/.minikube/profiles/minikube/apiserver.crt
I0423 20:12:37.580049 10704 certs.go:385] copying /Users/mhr/.minikube/profiles/minikube/apiserver.key.6f782735 -> /Users/mhr/.minikube/profiles/minikube/apiserver.key
I0423 20:12:37.580242 10704 certs.go:363] generating signed profile cert for "aggregator": /Users/mhr/.minikube/profiles/minikube/proxy-client.key
I0423 20:12:37.580252 10704 crypto.go:68] Generating cert /Users/mhr/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0423 20:12:37.677523 10704 crypto.go:156] Writing cert to /Users/mhr/.minikube/profiles/minikube/proxy-client.crt ...
I0423 20:12:37.677525 10704 lock.go:35] WriteFile acquiring /Users/mhr/.minikube/profiles/minikube/proxy-client.crt: {Name:mkdfeba92201fc57cc2d209fbe34f8afd03777ee Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0423 20:12:37.677701 10704 crypto.go:164] Writing key to /Users/mhr/.minikube/profiles/minikube/proxy-client.key ...
I0423 20:12:37.677703 10704 lock.go:35] WriteFile acquiring /Users/mhr/.minikube/profiles/minikube/proxy-client.key: {Name:mk4d94d22b3d0b89e0ef0cafc61a00e64e8d1c63 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0423 20:12:37.677970 10704 certs.go:484] found cert: /Users/mhr/.minikube/certs/ca-key.pem (1675 bytes)
I0423 20:12:37.677993 10704 certs.go:484] found cert: /Users/mhr/.minikube/certs/ca.pem (1070 bytes)
I0423 20:12:37.678021 10704 certs.go:484] found cert: /Users/mhr/.minikube/certs/cert.pem (1111 bytes)
I0423 20:12:37.678042 10704 certs.go:484] found cert: /Users/mhr/.minikube/certs/key.pem (1675 bytes)
I0423 20:12:37.678371 10704 ssh_runner.go:362] scp /Users/mhr/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0423 20:12:37.689638 10704 ssh_runner.go:362] scp /Users/mhr/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0423 20:12:37.697346 10704 ssh_runner.go:362] scp /Users/mhr/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0423 20:12:37.705313 10704 ssh_runner.go:362] scp /Users/mhr/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0423 20:12:37.712676 10704 ssh_runner.go:362] scp /Users/mhr/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0423 20:12:37.719767 10704 ssh_runner.go:362] scp /Users/mhr/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0423 20:12:37.726818 10704 ssh_runner.go:362] scp /Users/mhr/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0423 20:12:37.734228 10704 ssh_runner.go:362] scp /Users/mhr/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0423 20:12:37.741708 10704 ssh_runner.go:362] scp /Users/mhr/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0423 20:12:37.749162 10704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0423 20:12:37.754546 10704 ssh_runner.go:195] Run: openssl version
I0423 20:12:37.756626 10704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0423 20:12:37.760111 10704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0423 20:12:37.761437 10704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 2023 /usr/share/ca-certificates/minikubeCA.pem
I0423 20:12:37.761457 10704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0423 20:12:37.763230 10704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0423 20:12:37.767184 10704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0423 20:12:37.769099 10704 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0423 20:12:37.769168 10704 kubeadm.go:391] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:6000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/opt/homebrew/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0423 20:12:37.769220 10704 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0423 20:12:37.777066 10704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0423 20:12:37.780091 10704 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0423 20:12:37.783093 10704 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0423 20:12:37.786231 10704 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0423 20:12:37.786234 10704 kubeadm.go:156] found existing configuration files:
I0423 20:12:37.786258 10704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0423 20:12:37.789452 10704 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0423 20:12:37.789476 10704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0423 20:12:37.792426 10704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0423 20:12:37.795180 10704 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0423 20:12:37.795223 10704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0423 20:12:37.798269 10704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0423 20:12:37.801355 10704 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0423 20:12:37.801386 10704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0423 20:12:37.804448 10704 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0423 20:12:37.807019 10704 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0423 20:12:37.807045 10704 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0423 20:12:37.810101 10704 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0423 20:12:37.833071 10704 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
I0423 20:12:37.833097 10704 kubeadm.go:309] [preflight] Running pre-flight checks
I0423 20:12:37.888313 10704 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
I0423 20:12:37.888362 10704 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0423 20:12:37.888411 10704 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0423 20:12:37.959637 10704 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0423 20:12:37.965264 10704 out.go:204] ▪ Generating certificates and keys ...
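Note: a generated kubeadm.yaml like the one written earlier can be sanity-checked before an init like the one just started. The log itself points at 'kubeadm config images pull'; a sketch using the path from this log ('kubeadm config validate' requires kubeadm 1.26 or newer):

    # Validate the config against its schema, pre-pull the control-plane images,
    # then exercise init without mutating the node.
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run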
I0423 20:12:37.965321 10704 kubeadm.go:309] [certs] Using existing ca certificate authority
I0423 20:12:37.965365 10704 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
I0423 20:12:38.014228 10704 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
I0423 20:12:38.199185 10704 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
I0423 20:12:38.409912 10704 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
I0423 20:12:38.531965 10704 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
I0423 20:12:38.617499 10704 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
I0423 20:12:38.617564 10704 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.105.6 127.0.0.1 ::1]
I0423 20:12:38.692413 10704 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
I0423 20:12:38.692480 10704 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.105.6 127.0.0.1 ::1]
I0423 20:12:38.786718 10704 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
I0423 20:12:38.850330 10704 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
I0423 20:12:38.907427 10704 kubeadm.go:309] [certs] Generating "sa" key and public key
I0423 20:12:38.907462 10704 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0423 20:12:38.936322 10704 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
I0423 20:12:39.154815 10704 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0423 20:12:39.297665 10704 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0423 20:12:39.392791 10704 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0423 20:12:39.481289 10704 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0423 20:12:39.481670 10704 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0423 20:12:39.483186 10704 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0423 20:12:39.488691 10704 out.go:204] ▪ Booting up control plane ...
I0423 20:12:39.488745 10704 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0423 20:12:39.488788 10704 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0423 20:12:39.488833 10704 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0423 20:12:39.492736 10704 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0423 20:12:39.493012 10704 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0423 20:12:39.493030 10704 kubeadm.go:309] [kubelet-start] Starting the kubelet
I0423 20:12:39.585181 10704 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0423 20:12:39.585227 10704 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
I0423 20:12:40.090874 10704 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 505.319ms
I0423 20:12:40.091026 10704 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0423 20:12:43.092121 10704 kubeadm.go:309] [api-check] The API server is healthy after 3.001657418s
I0423 20:12:43.097552 10704 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0423 20:12:43.100845 10704 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0423 20:12:43.106368 10704 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
I0423 20:12:43.106451 10704 kubeadm.go:309] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0423 20:12:43.108905 10704 kubeadm.go:309] [bootstrap-token] Using token: un9xjq.9ayi0y77lrhf9373
I0423 20:12:43.113579 10704 out.go:204] ▪ Configuring RBAC rules ...
I0423 20:12:43.113640 10704 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0423 20:12:43.120409 10704 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0423 20:12:43.122083 10704 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0423 20:12:43.123312 10704 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0423 20:12:43.124045 10704 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0423 20:12:43.124758 10704 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0423 20:12:43.495566 10704 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0423 20:12:43.901328 10704 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
I0423 20:12:44.495447 10704 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
I0423 20:12:44.496001 10704 kubeadm.go:309]
I0423 20:12:44.496025 10704 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
I0423 20:12:44.496027 10704 kubeadm.go:309]
I0423 20:12:44.496057 10704 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
I0423 20:12:44.496058 10704 kubeadm.go:309]
I0423 20:12:44.496076 10704 kubeadm.go:309]   mkdir -p $HOME/.kube
I0423 20:12:44.496110 10704 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0423 20:12:44.496138 10704 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0423 20:12:44.496142 10704 kubeadm.go:309]
I0423 20:12:44.496164 10704 kubeadm.go:309] Alternatively, if you are the root user, you can run:
I0423 20:12:44.496166 10704 kubeadm.go:309]
I0423 20:12:44.496195 10704 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
I0423 20:12:44.496197 10704 kubeadm.go:309]
I0423 20:12:44.496221 10704 kubeadm.go:309] You should now deploy a pod network to the cluster.
I0423 20:12:44.496254 10704 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0423 20:12:44.496287 10704 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0423 20:12:44.496288 10704 kubeadm.go:309]
I0423 20:12:44.496334 10704 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
I0423 20:12:44.496367 10704 kubeadm.go:309] and service account keys on each node and then running the following as root:
I0423 20:12:44.496368 10704 kubeadm.go:309]
I0423 20:12:44.496405 10704 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token un9xjq.9ayi0y77lrhf9373 \
I0423 20:12:44.496461 10704 kubeadm.go:309]     --discovery-token-ca-cert-hash sha256:a4c36fe483242140df1848d97fe7506520ab315ed9dc63f5a162f27704a18163 \
I0423 20:12:44.496471 10704 kubeadm.go:309]     --control-plane
I0423 20:12:44.496475 10704 kubeadm.go:309]
I0423 20:12:44.496513 10704 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
I0423 20:12:44.496514 10704 kubeadm.go:309]
I0423 20:12:44.496550 10704 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token un9xjq.9ayi0y77lrhf9373 \
I0423 20:12:44.496613 10704 kubeadm.go:309]     --discovery-token-ca-cert-hash sha256:a4c36fe483242140df1848d97fe7506520ab315ed9dc63f5a162f27704a18163
I0423 20:12:44.496957 10704 kubeadm.go:309] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0423 20:12:44.496964 10704 cni.go:84] Creating CNI manager for ""
I0423 20:12:44.496970 10704 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0423 20:12:44.501731 10704 out.go:177] 🔗 Configuring bridge CNI (Container Networking Interface) ...
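Note: the bootstrap token in the join commands above is created with ttl: 24h0m0s (see the InitConfiguration earlier), so it expires a day after init. A current worker join command can be regenerated on the control-plane node at any time (sketch):

    # Issue a fresh bootstrap token and print a ready-to-run kubeadm join command.
    sudo kubeadm token create --print-join-command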
I0423 20:12:44.509778 10704 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0423 20:12:44.513805 10704 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0423 20:12:44.519185 10704 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0423 20:12:44.519248 10704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0423 20:12:44.519277 10704 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes minikube minikube.k8s.io/updated_at=2024_04_23T20_12_44_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=86fc9d54fca63f295d8737c8eacdbb7987e89c67 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0423 20:12:44.565744 10704 ops.go:34] apiserver oom_adj: -16
I0423 20:12:44.565745 10704 kubeadm.go:1107] duration metric: took 46.546041ms to wait for elevateKubeSystemPrivileges
W0423 20:12:44.575648 10704 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
I0423 20:12:44.575657 10704 kubeadm.go:393] duration metric: took 6.806517875s to StartCluster
I0423 20:12:44.575667 10704 settings.go:142] acquiring lock: {Name:mk107acd7deb2d528a003568b5a9612e83d75f1e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0423 20:12:44.575781 10704 settings.go:150] Updating kubeconfig: /Users/mhr/.kube/config
I0423 20:12:44.576761 10704 lock.go:35] WriteFile acquiring /Users/mhr/.kube/config: {Name:mk616a6e7b398912470d5ee6bb622205d2cfe13b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0423 20:12:44.577009 10704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0423 20:12:44.577013 10704 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0423 20:12:44.581694 10704 out.go:177] 🔎 Verifying Kubernetes components...
I0423 20:12:44.577035 10704 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
I0423 20:12:44.581709 10704 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0423 20:12:44.581711 10704 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0423 20:12:44.589715 10704 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0423 20:12:44.577100 10704 config.go:182] Loaded profile config "minikube": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0423 20:12:44.589737 10704 host.go:66] Checking if "minikube" exists ...
I0423 20:12:44.589737 10704 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0423 20:12:44.589763 10704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0423 20:12:44.591291 10704 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0423 20:12:44.591298 10704 host.go:66] Checking if "minikube" exists ...
I0423 20:12:44.595695 10704 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0423 20:12:44.591955 10704 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
I0423 20:12:44.599641 10704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0423 20:12:44.599647 10704 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/mhr/.minikube/machines/minikube/id_rsa Username:docker}
I0423 20:12:44.599688 10704 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0423 20:12:44.599690 10704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0423 20:12:44.599693 10704 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/mhr/.minikube/machines/minikube/id_rsa Username:docker}
I0423 20:12:44.633372 10704 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.105.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0423 20:12:44.671618 10704 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0423 20:12:44.723280 10704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0423 20:12:44.746656 10704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0423 20:12:44.791431 10704 start.go:946] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
I0423 20:12:44.792192 10704 api_server.go:52] waiting for apiserver process to appear ...
I0423 20:12:44.792248 10704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0423 20:12:44.886258 10704 api_server.go:72] duration metric: took 309.229333ms to wait for apiserver process to appear ...
I0423 20:12:44.886263 10704 api_server.go:88] waiting for apiserver healthz status ...
I0423 20:12:44.886270 10704 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
I0423 20:12:44.888849 10704 api_server.go:279] https://192.168.105.6:8443/healthz returned 200: ok
I0423 20:12:44.893634 10704 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0423 20:12:44.889328 10704 api_server.go:141] control plane version: v1.30.0
I0423 20:12:44.901693 10704 api_server.go:131] duration metric: took 15.426417ms to wait for apiserver health ...
I0423 20:12:44.901695 10704 addons.go:505] duration metric: took 324.66475ms for enable addons: enabled=[storage-provisioner default-storageclass]
I0423 20:12:44.901699 10704 system_pods.go:43] waiting for kube-system pods to appear ...
I0423 20:12:44.904976 10704 system_pods.go:59] 5 kube-system pods found
I0423 20:12:44.904984 10704 system_pods.go:61] "etcd-minikube" [9997af2d-404e-4093-a246-45046209ac57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0423 20:12:44.904987 10704 system_pods.go:61] "kube-apiserver-minikube" [213b93ae-53fe-4f56-a637-9b864e4e0a1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0423 20:12:44.904990 10704 system_pods.go:61] "kube-controller-manager-minikube" [e8264e02-8522-4a96-817e-85b2b4d65aed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0423 20:12:44.904992 10704 system_pods.go:61] "kube-scheduler-minikube" [b0b817fc-7e86-4f88-8d95-d398b03e1b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0423 20:12:44.904993 10704 system_pods.go:61] "storage-provisioner" [f277ea2f-e409-492c-a7f9-bbf33ed1c04e] Pending
I0423 20:12:44.904995 10704 system_pods.go:74] duration metric: took 3.295042ms to wait for pod list to return data ...
I0423 20:12:44.904998 10704 kubeadm.go:576] duration metric: took 327.979125ms to wait for: map[apiserver:true system_pods:true]
I0423 20:12:44.905003 10704 node_conditions.go:102] verifying NodePressure condition ...
I0423 20:12:44.906002 10704 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0423 20:12:44.906006 10704 node_conditions.go:123] node cpu capacity is 2
I0423 20:12:44.906011 10704 node_conditions.go:105] duration metric: took 1.00625ms to run NodePressure ...
I0423 20:12:44.906015 10704 start.go:240] waiting for startup goroutines ...
I0423 20:12:45.294902 10704 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0423 20:12:45.294919 10704 start.go:245] waiting for cluster config update ...
I0423 20:12:45.294924 10704 start.go:254] writing updated cluster config ...
I0423 20:12:45.295158 10704 ssh_runner.go:195] Run: rm -f paused
I0423 20:12:45.326611 10704 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
I0423 20:12:45.331325 10704 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
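Note: the CoreDNS edit above ("host record injected into CoreDNS's ConfigMap") round-trips the coredns ConfigMap through sed to splice a hosts{} stanza in front of the forward plugin in the Corefile. The same pattern with an ordinary kubeconfig (sketch; GNU sed assumed, since the \n escapes in the inserted text are a GNU extension):

    # Add a static host record to CoreDNS and replace the ConfigMap in place.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/forward . \/etc\/resolv.conf/i\        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl -n kube-system replace -f -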
==> Docker <==
Apr 24 03:14:37 minikube dockerd[1174]: time="2024-04-24T03:14:37.975171136Z" level=info msg="Layer sha256:156422d3e64cebcf482750e0e81e00c17416732445d41fd615126f98b89f44c2 cleaned up"
Apr 24 03:14:37 minikube dockerd[1174]: time="2024-04-24T03:14:37.977839554Z" level=info msg="Layer sha256:85c8ac1aa5492b0aeb7690800f01a1a57a0feac3afb8e6b140b6864912831d1f cleaned up"
Apr 24 03:14:37 minikube dockerd[1174]: time="2024-04-24T03:14:37.978807513Z" level=info msg="Layer sha256:90021f74405b377591a72f7ee4af6cfc9ca2237e535c252c2f16edb566085fe7 cleaned up"
Apr 24 03:14:37 minikube dockerd[1174]: time="2024-04-24T03:14:37.979623972Z" level=info msg="Layer sha256:65aaef512fa2db6e9378639ca350de66693d659497e7007e1787f33a17fc361d cleaned up"
Apr 24 03:14:37 minikube dockerd[1174]: time="2024-04-24T03:14:37.980581680Z" level=info msg="Layer sha256:e828f313e70e1c7eea675c3c907456bac5d9239d4c9d203c6fb05b0f15376202 cleaned up"
Apr 24 03:14:37 minikube dockerd[1174]: time="2024-04-24T03:14:37.980599847Z" level=info msg="Layer sha256:9fbe2db5494c716c4b1cd671b734d6fddd838e5b8e71f058e96b80ff82a46cfe cleaned up"
Apr 24 03:14:37 minikube dockerd[1174]: time="2024-04-24T03:14:37.991282102Z" level=info msg="Layer sha256:c488f46a3b2c54c0bc197ab4f89229f078d23a51763dfea077e5e0952fc1760d cleaned up"
Apr 24 03:14:37 minikube dockerd[1174]: time="2024-04-24T03:14:37.994628396Z" level=info msg="Layer sha256:d0cd4a44c2a1e034f98239969041c59c00ee260beb607cbc02ba221f8f85446d cleaned up"
Apr 24 03:14:37 minikube dockerd[1174]: time="2024-04-24T03:14:37.994641187Z" level=info msg="Layer sha256:e647fb093321a8b4bbcb7b2eaec635c6018535a8f4087f27298175f62fe44ae9 cleaned up"
Apr 24 03:14:38 minikube dockerd[1174]: time="2024-04-24T03:14:38.002861316Z" level=info msg="Layer sha256:916874c13127db26885ccf679a3fd7bc0252ae059a7eaabf15d370206207a663 cleaned up"
Apr 24 03:14:38 minikube dockerd[1174]: time="2024-04-24T03:14:38.002875400Z" level=info msg="Layer sha256:dd2f61c71ae88bda2e956330e72fc4d153e91c1dc97780e3b0e02bb9f3bb991c cleaned up"
Apr 24 03:14:38 minikube dockerd[1174]: time="2024-04-24T03:14:38.006447068Z" level=info msg="Layer sha256:b09314aec293bcd9a8ee5e643539437b3846f9e5e55f79e282e5f67e3026de5e cleaned up"
Apr 24 03:15:19 minikube dockerd[1174]: time="2024-04-24T03:15:19.010659979Z" level=warning msg="reference for unknown type: " digest="sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c" remote="registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c" spanID=ffc91f0894eb8c6e traceID=938cbee3226a5a7474cdac3e6df35820
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.168274320Z" level=info msg="Attempting next endpoint for pull after error: failed to register layer: lsetxattr security.capability /nginx-ingress-controller: operation not supported" spanID=ffc91f0894eb8c6e traceID=938cbee3226a5a7474cdac3e6df35820
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.169065444Z" level=info msg="Layer sha256:f71b503fa87a4ee4af97d13f2a862eacb6c22fe1c7adf2ec14180f558a82e5e8 cleaned up"
Apr 24 03:15:25 minikube cri-dockerd[1081]: time="2024-04-24T03:15:25Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c: 72b7a5ad5731: Extracting [==================================================>] 17.38MB/17.38MB"
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.172739817Z" level=info msg="Layer sha256:156422d3e64cebcf482750e0e81e00c17416732445d41fd615126f98b89f44c2 cleaned up"
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.172747400Z" level=info msg="Layer sha256:85c8ac1aa5492b0aeb7690800f01a1a57a0feac3afb8e6b140b6864912831d1f cleaned up"
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.173548733Z" level=info msg="Layer sha256:90021f74405b377591a72f7ee4af6cfc9ca2237e535c252c2f16edb566085fe7 cleaned up"
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.199074052Z" level=info msg="Layer sha256:65aaef512fa2db6e9378639ca350de66693d659497e7007e1787f33a17fc361d cleaned up"
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.199087177Z" level=info msg="Layer sha256:e828f313e70e1c7eea675c3c907456bac5d9239d4c9d203c6fb05b0f15376202 cleaned up"
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.199090469Z" level=info msg="Layer sha256:9fbe2db5494c716c4b1cd671b734d6fddd838e5b8e71f058e96b80ff82a46cfe cleaned up"
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.199093344Z" level=info msg="Layer sha256:c488f46a3b2c54c0bc197ab4f89229f078d23a51763dfea077e5e0952fc1760d cleaned up"
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.199096011Z" level=info msg="Layer sha256:d0cd4a44c2a1e034f98239969041c59c00ee260beb607cbc02ba221f8f85446d cleaned up"
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.199098844Z" level=info msg="Layer sha256:e647fb093321a8b4bbcb7b2eaec635c6018535a8f4087f27298175f62fe44ae9 cleaned up"
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.199102136Z" level=info msg="Layer sha256:916874c13127db26885ccf679a3fd7bc0252ae059a7eaabf15d370206207a663 cleaned up"
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.199104719Z" level=info msg="Layer sha256:dd2f61c71ae88bda2e956330e72fc4d153e91c1dc97780e3b0e02bb9f3bb991c cleaned up"
Apr 24 03:15:25 minikube dockerd[1174]: time="2024-04-24T03:15:25.199107469Z" level=info msg="Layer sha256:b09314aec293bcd9a8ee5e643539437b3846f9e5e55f79e282e5f67e3026de5e cleaned up"
Apr 24 03:16:56 minikube dockerd[1174]: time="2024-04-24T03:16:56.010456049Z" level=warning msg="reference for unknown type: " digest="sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c" remote="registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c" spanID=aa0aa5637b1ce319 traceID=c52e5abe0daf9df06e093c5b614d5f2d
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.542140593Z" level=info msg="Attempting next endpoint for pull after error: failed to register layer: lsetxattr security.capability /nginx-ingress-controller: operation not supported" spanID=aa0aa5637b1ce319 traceID=c52e5abe0daf9df06e093c5b614d5f2d
Apr 24 03:17:03 minikube cri-dockerd[1081]: time="2024-04-24T03:17:03Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c: 72b7a5ad5731: Extracting [==================================================>] 17.38MB/17.38MB"
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.543354216Z" level=info msg="Layer sha256:f71b503fa87a4ee4af97d13f2a862eacb6c22fe1c7adf2ec14180f558a82e5e8 cleaned up"
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.543926549Z" level=info msg="Layer sha256:156422d3e64cebcf482750e0e81e00c17416732445d41fd615126f98b89f44c2 cleaned up"
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.546367169Z" level=info msg="Layer sha256:85c8ac1aa5492b0aeb7690800f01a1a57a0feac3afb8e6b140b6864912831d1f cleaned up"
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.547196043Z" level=info msg="Layer sha256:90021f74405b377591a72f7ee4af6cfc9ca2237e535c252c2f16edb566085fe7 cleaned up"
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.548069624Z" level=info msg="Layer sha256:65aaef512fa2db6e9378639ca350de66693d659497e7007e1787f33a17fc361d cleaned up"
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.548620290Z" level=info msg="Layer sha256:e828f313e70e1c7eea675c3c907456bac5d9239d4c9d203c6fb05b0f15376202 cleaned up"
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.549020206Z" level=info msg="Layer sha256:9fbe2db5494c716c4b1cd671b734d6fddd838e5b8e71f058e96b80ff82a46cfe cleaned up"
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.559653729Z" level=info msg="Layer sha256:c488f46a3b2c54c0bc197ab4f89229f078d23a51763dfea077e5e0952fc1760d cleaned up"
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.562388391Z" level=info msg="Layer sha256:d0cd4a44c2a1e034f98239969041c59c00ee260beb607cbc02ba221f8f85446d cleaned up"
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.562798098Z" level=info msg="Layer sha256:e647fb093321a8b4bbcb7b2eaec635c6018535a8f4087f27298175f62fe44ae9 cleaned up"
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.565933426Z" level=info msg="Layer sha256:916874c13127db26885ccf679a3fd7bc0252ae059a7eaabf15d370206207a663 cleaned up"
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.571357875Z" level=info msg="Layer sha256:dd2f61c71ae88bda2e956330e72fc4d153e91c1dc97780e3b0e02bb9f3bb991c cleaned up"
Apr 24 03:17:03 minikube dockerd[1174]: time="2024-04-24T03:17:03.574772327Z" level=info msg="Layer sha256:b09314aec293bcd9a8ee5e643539437b3846f9e5e55f79e282e5f67e3026de5e cleaned up"
Apr 24 03:19:52 minikube dockerd[1174]: time="2024-04-24T03:19:52.007699027Z" level=warning msg="reference for unknown type: " digest="sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c" remote="registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c" spanID=1dc0fbe0f6650791 traceID=4cd9cc40b703c07fed6cd78ebc0bd526
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.161326913Z" level=info msg="Attempting next endpoint for pull after error: failed to register layer: lsetxattr security.capability /nginx-ingress-controller: operation not supported" spanID=1dc0fbe0f6650791 traceID=4cd9cc40b703c07fed6cd78ebc0bd526
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.162287118Z" level=info msg="Layer sha256:f71b503fa87a4ee4af97d13f2a862eacb6c22fe1c7adf2ec14180f558a82e5e8 cleaned up"
Apr 24 03:19:58 minikube cri-dockerd[1081]: time="2024-04-24T03:19:58Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c: 72b7a5ad5731: Extracting [==================================================>] 17.38MB/17.38MB"
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.163238408Z" level=info msg="Layer sha256:156422d3e64cebcf482750e0e81e00c17416732445d41fd615126f98b89f44c2 cleaned up"
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.166001234Z" level=info msg="Layer sha256:85c8ac1aa5492b0aeb7690800f01a1a57a0feac3afb8e6b140b6864912831d1f cleaned up"
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.166772274Z" level=info msg="Layer sha256:90021f74405b377591a72f7ee4af6cfc9ca2237e535c252c2f16edb566085fe7 cleaned up"
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.167528563Z" level=info msg="Layer sha256:65aaef512fa2db6e9378639ca350de66693d659497e7007e1787f33a17fc361d cleaned up"
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.168414061Z" level=info msg="Layer sha256:e828f313e70e1c7eea675c3c907456bac5d9239d4c9d203c6fb05b0f15376202 cleaned up"
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.168421519Z" level=info msg="Layer sha256:9fbe2db5494c716c4b1cd671b734d6fddd838e5b8e71f058e96b80ff82a46cfe cleaned up"
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.178966575Z" level=info msg="Layer sha256:c488f46a3b2c54c0bc197ab4f89229f078d23a51763dfea077e5e0952fc1760d cleaned up"
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.181918568Z" level=info msg="Layer sha256:d0cd4a44c2a1e034f98239969041c59c00ee260beb607cbc02ba221f8f85446d cleaned up"
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.181925859Z" level=info msg="Layer sha256:e647fb093321a8b4bbcb7b2eaec635c6018535a8f4087f27298175f62fe44ae9 cleaned up"
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.189797006Z" level=info msg="Layer sha256:916874c13127db26885ccf679a3fd7bc0252ae059a7eaabf15d370206207a663 cleaned up"
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.189816047Z" level=info msg="Layer sha256:dd2f61c71ae88bda2e956330e72fc4d153e91c1dc97780e3b0e02bb9f3bb991c cleaned up"
Apr 24 03:19:58 minikube dockerd[1174]: time="2024-04-24T03:19:58.193081747Z" level=info msg="Layer sha256:b09314aec293bcd9a8ee5e643539437b3846f9e5e55f79e282e5f67e3026de5e cleaned up"

==> container status <==
CONTAINER       IMAGE                                                                                                                         CREATED         STATE     NAME                      ATTEMPT   POD ID          POD
0968d59d3e12c   registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334   8 minutes ago   Exited    patch                     0         16ed87386da04   ingress-nginx-admission-patch-jg8ff
0fbc9205ddac0   registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334   8 minutes ago   Exited    create                    0         edf2f3604ef6c   ingress-nginx-admission-create-86cj6
becb3db03decd   ba04bb24b9575                                                                                                                 9 minutes ago   Running   storage-provisioner       1         4a7fe9bdb39ff   storage-provisioner
81ba14bc90b11   2437cf7621777                                                                                                                 9 minutes ago   Running   coredns                   0         021f953a634c0   coredns-7db6d8ff4d-s76rd
b67d8cd5c5397   cb7eac0b42cc1                                                                                                                 9 minutes ago   Running   kube-proxy                0         f5a63f282ebc1   kube-proxy-9hp7c
9a96a44d956f7   ba04bb24b9575                                                                                                                 9 minutes ago   Exited    storage-provisioner       0         4a7fe9bdb39ff   storage-provisioner
0bbc8e007d8a1   181f57fd3cdb7                                                                                                                 9 minutes ago   Running   kube-apiserver            0         f8a38f8d9f2f7   kube-apiserver-minikube
73a6b4a8b0723   014faa467e297                                                                                                                 9 minutes ago   Running   etcd                      0         ac58ef8eac98f   etcd-minikube
d6318e56dde61   68feac521c0f1                                                                                                                 9 minutes ago   Running   kube-controller-manager   0         2516ef6b945f5   kube-controller-manager-minikube
6c6e95b2e69a5   547adae34140b                                                                                                                 9 minutes ago   Running   kube-scheduler            0         e7303a9b0487a   kube-scheduler-minikube
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host [ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host [INFO] plugin/ready: Still waiting on: "kubernetes" [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server .:53 [INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b CoreDNS-1.11.1 linux/arm64, go1.20.7, ae2bbc2 [INFO] 127.0.0.1:43890 - 13585 "HINFO IN 2420130784087483133.4362125321468537516. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03212766s ==> describe nodes <== Name: minikube Roles: control-plane Labels: beta.kubernetes.io/arch=arm64 beta.kubernetes.io/os=linux kubernetes.io/arch=arm64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=86fc9d54fca63f295d8737c8eacdbb7987e89c67 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2024_04_23T20_12_44_0700 minikube.k8s.io/version=v1.33.0 node-role.kubernetes.io/control-plane= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 24 Apr 2024 03:12:42 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Wed, 24 Apr 2024 03:22:05 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Wed, 24 Apr 2024 03:18:52 +0000 Wed, 24 Apr 2024 03:12:41 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 24 Apr 2024 03:18:52 +0000 Wed, 24 Apr 2024 03:12:41 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 24 Apr 2024 03:18:52 +0000 Wed, 24 Apr 2024 03:12:41 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 24 Apr 2024 03:18:52 +0000 Wed, 24 Apr 2024 03:12:48 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.105.6 Hostname: minikube Capacity: cpu: 2 ephemeral-storage: 17734596Ki hugepages-1Gi: 0 hugepages-2Mi: 0 hugepages-32Mi: 0 hugepages-64Ki: 0 memory: 5909816Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 17734596Ki hugepages-1Gi: 0 hugepages-2Mi: 0 hugepages-32Mi: 0 hugepages-64Ki: 0 memory: 5909816Ki pods: 110 System Info: Machine ID: 2784638240cd49559b9ca9ee1fdb968e System UUID: 2784638240cd49559b9ca9ee1fdb968e Boot ID: ff68e9c7-2bb2-405d-b758-62a3c6e82341 Kernel Version: 5.10.207 OS Image: Buildroot 2023.02.9 Operating System: linux Architecture: arm64 Container Runtime Version: docker://26.0.1 Kubelet Version: v1.30.0 Kube-Proxy Version: v1.30.0 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (8 in total) Namespace 
Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- ingress-nginx ingress-nginx-controller-84df5799c-ld4hn 100m (5%!)(MISSING) 0 (0%!)(MISSING) 90Mi (1%!)(MISSING) 0 (0%!)(MISSING) 8m38s kube-system coredns-7db6d8ff4d-s76rd 100m (5%!)(MISSING) 0 (0%!)(MISSING) 70Mi (1%!)(MISSING) 170Mi (2%!)(MISSING) 9m15s kube-system etcd-minikube 100m (5%!)(MISSING) 0 (0%!)(MISSING) 100Mi (1%!)(MISSING) 0 (0%!)(MISSING) 9m30s kube-system kube-apiserver-minikube 250m (12%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m30s kube-system kube-controller-manager-minikube 200m (10%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m30s kube-system kube-proxy-9hp7c 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m15s kube-system kube-scheduler-minikube 100m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m30s kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m29s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 850m (42%!)(MISSING) 0 (0%!)(MISSING) memory 260Mi (4%!)(MISSING) 170Mi (2%!)(MISSING) ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-32Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-64Ki 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 9m15s kube-proxy Normal Starting 9m31s kubelet Starting kubelet. Normal NodeAllocatableEnforced 9m31s kubelet Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 9m30s kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 9m30s kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 9m30s kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeReady 9m26s kubelet Node minikube status is now: NodeReady Normal RegisteredNode 9m16s node-controller Node minikube event: Registered Node minikube in Controller ==> dmesg <== [Apr24 03:12] ACPI: SRAT not present [ +0.000000] KASLR disabled due to lack of seed [ +0.702568] EINJ: EINJ table not found. 
[ +0.509602] systemd-fstab-generator[117]: Ignoring "noauto" option for root device
[ +0.151886] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.000588] platform regulatory.0: Falling back to sysfs fallback for: regulatory.db
[ +4.487821] systemd-fstab-generator[510]: Ignoring "noauto" option for root device
[ +0.061068] systemd-fstab-generator[522]: Ignoring "noauto" option for root device
[ +1.538632] systemd-fstab-generator[800]: Ignoring "noauto" option for root device
[ +0.158513] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
[ +0.065555] systemd-fstab-generator[848]: Ignoring "noauto" option for root device
[ +0.065971] systemd-fstab-generator[862]: Ignoring "noauto" option for root device
[ +2.233914] systemd-fstab-generator[1034]: Ignoring "noauto" option for root device
[ +0.060721] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
[ +0.059438] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
[ +0.071766] systemd-fstab-generator[1073]: Ignoring "noauto" option for root device
[ +2.112701] systemd-fstab-generator[1166]: Ignoring "noauto" option for root device
[ +0.030194] kauditd_printk_skb: 275 callbacks suppressed
[ +2.359794] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
[ +2.357608] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
[ +0.701016] kauditd_printk_skb: 107 callbacks suppressed
[ +3.288924] systemd-fstab-generator[1943]: Ignoring "noauto" option for root device
[ +1.083516] systemd-fstab-generator[2003]: Ignoring "noauto" option for root device
[ +14.340599] kauditd_printk_skb: 74 callbacks suppressed
[Apr24 03:13] kauditd_printk_skb: 63 callbacks suppressed
[ +36.797854] kauditd_printk_skb: 74 callbacks suppressed

==> etcd [73a6b4a8b072] <==
{"level":"warn","ts":"2024-04-24T03:12:40.637894Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2024-04-24T03:12:40.638006Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.105.6:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.105.6:2380","--initial-cluster=minikube=https://192.168.105.6:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.105.6:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.105.6:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"warn","ts":"2024-04-24T03:12:40.638069Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port.
This is not recommended for production."} {"level":"info","ts":"2024-04-24T03:12:40.638089Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.105.6:2380"]} {"level":"info","ts":"2024-04-24T03:12:40.638107Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2024-04-24T03:12:40.638541Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.6:2379"]} {"level":"info","ts":"2024-04-24T03:12:40.63861Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"arm64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.105.6:2380"],"listen-peer-urls":["https://192.168.105.6:2380"],"advertise-client-urls":["https://192.168.105.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.105.6:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2024-04-24T03:12:40.639532Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"756µs"} {"level":"info","ts":"2024-04-24T03:12:40.648911Z","caller":"etcdserver/raft.go:495","msg":"starting local member","local-member-id":"ed054832bd1917e1","cluster-id":"45a39c2c59b0edf4"} {"level":"info","ts":"2024-04-24T03:12:40.648986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ed054832bd1917e1 switched to configuration voters=()"} {"level":"info","ts":"2024-04-24T03:12:40.649044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ed054832bd1917e1 became follower at term 0"} {"level":"info","ts":"2024-04-24T03:12:40.649065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2024-04-24T03:12:40.649089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ed054832bd1917e1 became follower at term 1"} {"level":"info","ts":"2024-04-24T03:12:40.649145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ed054832bd1917e1 switched to configuration voters=(17079136544630577121)"} 
{"level":"warn","ts":"2024-04-24T03:12:40.655657Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2024-04-24T03:12:40.665301Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2024-04-24T03:12:40.675513Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2024-04-24T03:12:40.684556Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"ed054832bd1917e1","local-server-version":"3.5.12","cluster-version":"to_be_decided"} {"level":"info","ts":"2024-04-24T03:12:40.684825Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ed054832bd1917e1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2024-04-24T03:12:40.684917Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} {"level":"info","ts":"2024-04-24T03:12:40.684974Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} {"level":"info","ts":"2024-04-24T03:12:40.684999Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} {"level":"info","ts":"2024-04-24T03:12:40.687667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ed054832bd1917e1 switched to configuration voters=(17079136544630577121)"} {"level":"info","ts":"2024-04-24T03:12:40.687727Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"45a39c2c59b0edf4","local-member-id":"ed054832bd1917e1","added-peer-id":"ed054832bd1917e1","added-peer-peer-urls":["https://192.168.105.6:2380"]} {"level":"info","ts":"2024-04-24T03:12:40.692592Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2024-04-24T03:12:40.692701Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ed054832bd1917e1","initial-advertise-peer-urls":["https://192.168.105.6:2380"],"listen-peer-urls":["https://192.168.105.6:2380"],"advertise-client-urls":["https://192.168.105.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2024-04-24T03:12:40.692726Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2024-04-24T03:12:40.692799Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.6:2380"} {"level":"info","ts":"2024-04-24T03:12:40.692819Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.6:2380"} {"level":"info","ts":"2024-04-24T03:12:41.650426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ed054832bd1917e1 is starting a new election at term 1"} {"level":"info","ts":"2024-04-24T03:12:41.650681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ed054832bd1917e1 became pre-candidate at 
term 1"} {"level":"info","ts":"2024-04-24T03:12:41.650772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ed054832bd1917e1 received MsgPreVoteResp from ed054832bd1917e1 at term 1"} {"level":"info","ts":"2024-04-24T03:12:41.650819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ed054832bd1917e1 became candidate at term 2"} {"level":"info","ts":"2024-04-24T03:12:41.650923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2"} {"level":"info","ts":"2024-04-24T03:12:41.650972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ed054832bd1917e1 became leader at term 2"} {"level":"info","ts":"2024-04-24T03:12:41.651056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2"} {"level":"info","ts":"2024-04-24T03:12:41.652261Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2024-04-24T03:12:41.653058Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"45a39c2c59b0edf4","local-member-id":"ed054832bd1917e1","cluster-version":"3.5"} {"level":"info","ts":"2024-04-24T03:12:41.653137Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2024-04-24T03:12:41.653165Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2024-04-24T03:12:41.653201Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ed054832bd1917e1","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.105.6:2379]}","request-path":"/0/members/ed054832bd1917e1/attributes","cluster-id":"45a39c2c59b0edf4","publish-timeout":"7s"} {"level":"info","ts":"2024-04-24T03:12:41.653194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} {"level":"info","ts":"2024-04-24T03:12:41.653474Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} {"level":"info","ts":"2024-04-24T03:12:41.653823Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} {"level":"info","ts":"2024-04-24T03:12:41.653841Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} {"level":"info","ts":"2024-04-24T03:12:41.656266Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"} {"level":"info","ts":"2024-04-24T03:12:41.656513Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.6:2379"} ==> kernel <== 03:22:14 up 9 min, 0 users, load average: 0.16, 0.16, 0.10 Linux minikube 5.10.207 #1 SMP PREEMPT Thu Apr 18 19:10:12 UTC 2024 aarch64 GNU/Linux PRETTY_NAME="Buildroot 2023.02.9" ==> kube-apiserver [0bbc8e007d8a] <== I0424 03:12:42.119633 1 shared_informer.go:313] Waiting for caches to sync for configmaps I0424 03:12:42.119952 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I0424 03:12:42.119984 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0424 03:12:42.120060 1 customresource_discovery_controller.go:289] Starting DiscoveryController I0424 03:12:42.120108 1 system_namespaces_controller.go:67] Starting system namespaces controller I0424 03:12:42.120233 1 controller.go:78] Starting OpenAPI 
AggregationController I0424 03:12:42.120267 1 controller.go:80] Starting OpenAPI V3 AggregationController I0424 03:12:42.120288 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key" I0424 03:12:42.120337 1 aggregator.go:163] waiting for initial CRD sync... I0424 03:12:42.120433 1 available_controller.go:423] Starting AvailableConditionController I0424 03:12:42.120455 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0424 03:12:42.120478 1 apf_controller.go:374] Starting API Priority and Fairness config controller I0424 03:12:42.120837 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0424 03:12:42.120841 1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller I0424 03:12:42.123093 1 controller.go:139] Starting OpenAPI controller I0424 03:12:42.123126 1 controller.go:87] Starting OpenAPI V3 controller I0424 03:12:42.123150 1 naming_controller.go:291] Starting NamingConditionController I0424 03:12:42.123186 1 establishing_controller.go:76] Starting EstablishingController I0424 03:12:42.123208 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0424 03:12:42.123231 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0424 03:12:42.123245 1 crd_finalizer.go:266] Starting CRDFinalizer I0424 03:12:42.123343 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0424 03:12:42.123361 1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister I0424 03:12:42.135430 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0424 03:12:42.147639 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0424 03:12:42.179678 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator] I0424 03:12:42.179692 1 policy_source.go:224] refreshing policies I0424 03:12:42.181175 1 shared_informer.go:320] Caches are synced for node_authorizer I0424 03:12:42.220355 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0424 03:12:42.220388 1 shared_informer.go:320] Caches are synced for configmaps I0424 03:12:42.220482 1 cache.go:39] Caches are synced for AvailableConditionController controller I0424 03:12:42.220506 1 handler_discovery.go:447] Starting ResourceDiscoveryManager I0424 03:12:42.220521 1 apf_controller.go:379] Running API Priority and Fairness config worker I0424 03:12:42.220523 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process I0424 03:12:42.220903 1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller I0424 03:12:42.220926 1 controller.go:615] quota admission added evaluator for: namespaces I0424 03:12:42.223623 1 shared_informer.go:320] Caches are synced for crd-autoregister I0424 03:12:42.223695 1 aggregator.go:165] initial CRD sync complete... 
I0424 03:12:42.223720 1 autoregister_controller.go:141] Starting autoregister controller I0424 03:12:42.223745 1 cache.go:32] Waiting for caches to sync for autoregister controller I0424 03:12:42.223770 1 cache.go:39] Caches are synced for autoregister controller I0424 03:12:42.227769 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io I0424 03:12:43.123123 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000 I0424 03:12:43.124342 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000 I0424 03:12:43.124349 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist. I0424 03:12:43.251542 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0424 03:12:43.260537 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0424 03:12:43.329294 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"} W0424 03:12:43.331475 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.6] I0424 03:12:43.331768 1 controller.go:615] quota admission added evaluator for: endpoints I0424 03:12:43.332844 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io I0424 03:12:44.142796 1 controller.go:615] quota admission added evaluator for: serviceaccounts I0424 03:12:44.145253 1 controller.go:615] quota admission added evaluator for: deployments.apps I0424 03:12:44.148455 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"} I0424 03:12:44.152136 1 controller.go:615] quota admission added evaluator for: daemonsets.apps I0424 03:12:58.865558 1 controller.go:615] quota admission added evaluator for: replicasets.apps I0424 03:12:59.015470 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps I0424 03:13:36.058627 1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.106.89.137"} I0424 03:13:36.068922 1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.101.64.200"} I0424 03:13:36.076289 1 controller.go:615] quota admission added evaluator for: jobs.batch ==> kube-controller-manager [d6318e56dde6] <== I0424 03:12:58.256695 1 shared_informer.go:320] Caches are synced for ReplicationController I0424 03:12:58.259917 1 shared_informer.go:320] Caches are synced for HPA I0424 03:12:58.264252 1 shared_informer.go:320] Caches are synced for taint I0424 03:12:58.264367 1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone="" I0424 03:12:58.264536 1 node_lifecycle_controller.go:879] "Missing timestamp for Node. 
Assuming now as a timestamp" logger="node-lifecycle-controller" node="minikube" I0424 03:12:58.264576 1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal" I0424 03:12:58.264270 1 shared_informer.go:320] Caches are synced for PVC protection I0424 03:12:58.264273 1 shared_informer.go:320] Caches are synced for GC I0424 03:12:58.264334 1 shared_informer.go:320] Caches are synced for endpoint I0424 03:12:58.264463 1 shared_informer.go:320] Caches are synced for daemon sets I0424 03:12:58.264555 1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner I0424 03:12:58.266150 1 shared_informer.go:320] Caches are synced for resource quota I0424 03:12:58.629220 1 shared_informer.go:320] Caches are synced for garbage collector I0424 03:12:58.713525 1 shared_informer.go:320] Caches are synced for garbage collector I0424 03:12:58.713580 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller" I0424 03:12:59.119078 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="252.22525ms" I0424 03:12:59.122866 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.767875ms" I0424 03:12:59.122887 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="10.125µs" I0424 03:12:59.128118 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="10.875µs" I0424 03:13:00.013936 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.282µs" I0424 03:13:09.431637 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.127045ms" I0424 03:13:09.432168 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.589µs" I0424 03:13:36.078559 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" I0424 03:13:36.085003 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="9.149595ms" I0424 03:13:36.089747 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="4.732524ms" I0424 03:13:36.089868 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="17.583µs" I0424 03:13:36.092282 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" I0424 03:13:36.093862 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="13.333µs" I0424 03:13:36.094067 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" I0424 03:13:36.097106 1 job_controller.go:566] "enqueueing job" logger="job-controller" 
key="ingress-nginx/ingress-nginx-admission-create" I0424 03:13:36.097230 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" I0424 03:13:36.104243 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" I0424 03:13:36.104281 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" I0424 03:13:36.104342 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" I0424 03:13:36.106795 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" I0424 03:13:36.116536 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" I0424 03:13:39.117262 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" I0424 03:13:39.122122 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" I0424 03:13:40.207741 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" I0424 03:13:40.225689 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" I0424 03:13:41.132770 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" I0424 03:13:41.136988 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" I0424 03:13:41.210817 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" I0424 03:13:41.213236 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" I0424 03:13:41.215290 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" I0424 03:13:41.228271 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" I0424 03:13:41.230542 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" I0424 03:13:41.232316 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" I0424 03:13:46.164456 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="34.5µs" I0424 03:13:58.957977 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="38.042µs" I0424 03:14:16.958469 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="32.875µs" I0424 03:14:31.959577 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="49.584µs" I0424 03:14:52.957554 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="34.125µs" I0424 03:15:04.957800 1 replica_set.go:676] "Finished syncing" 
logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="31.209µs" I0424 03:15:38.957576 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="40µs" I0424 03:15:49.958890 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="49.5µs" I0424 03:17:15.960193 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="47µs" I0424 03:17:30.957929 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="47.25µs" I0424 03:20:12.958728 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="45.5µs" I0424 03:20:26.957441 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="52.749µs" ==> kube-proxy [b67d8cd5c539] <== I0424 03:12:59.431358 1 server_linux.go:69] "Using iptables proxy" I0424 03:12:59.442301 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.6"] I0424 03:12:59.450103 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6" I0424 03:12:59.450115 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4" I0424 03:12:59.450120 1 server_linux.go:165] "Using iptables Proxier" I0424 03:12:59.450728 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" I0424 03:12:59.450830 1 server.go:872] "Version info" version="v1.30.0" I0424 03:12:59.450838 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0424 03:12:59.451451 1 config.go:192] "Starting service config controller" I0424 03:12:59.451480 1 shared_informer.go:313] Waiting for caches to sync for service config I0424 03:12:59.451504 1 config.go:101] "Starting endpoint slice config controller" I0424 03:12:59.451515 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config I0424 03:12:59.451738 1 config.go:319] "Starting node config controller" I0424 03:12:59.451761 1 shared_informer.go:313] Waiting for caches to sync for node config I0424 03:12:59.552319 1 shared_informer.go:320] Caches are synced for node config I0424 03:12:59.552338 1 shared_informer.go:320] Caches are synced for service config I0424 03:12:59.552366 1 shared_informer.go:320] Caches are synced for endpoint slice config ==> kube-scheduler [6c6e95b2e69a] <== I0424 03:12:40.898589 1 serving.go:380] Generated self-signed cert in-memory W0424 03:12:42.149290 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. 
Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0424 03:12:42.149310 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0424 03:12:42.149315 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous. W0424 03:12:42.149318 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0424 03:12:42.171028 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0" I0424 03:12:42.171083 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0424 03:12:42.171856 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259 I0424 03:12:42.172718 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0424 03:12:42.172762 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0424 03:12:42.172794 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" W0424 03:12:42.178418 1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0424 03:12:42.178563 1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0424 03:12:42.178831 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0424 03:12:42.178871 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0424 03:12:42.178887 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0424 03:12:42.178911 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0424 03:12:42.178875 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0424 03:12:42.178961 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0424 
03:12:42.178982 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0424 03:12:42.178990 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0424 03:12:42.179032 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0424 03:12:42.179039 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0424 03:12:42.179087 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0424 03:12:42.179101 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0424 03:12:42.179123 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0424 03:12:42.179134 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0424 03:12:42.179125 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0424 03:12:42.179147 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0424 03:12:42.179162 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0424 03:12:42.179193 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0424 03:12:42.179167 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot 
list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0424 03:12:42.179212 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0424 03:12:42.179221 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0424 03:12:42.179228 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0424 03:12:42.179244 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0424 03:12:42.179267 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0424 03:12:42.179252 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0424 03:12:42.179273 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope W0424 03:12:42.178860 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0424 03:12:42.179303 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope W0424 03:12:43.056520 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0424 03:12:43.056549 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0424 03:12:43.093120 1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0424 03:12:43.093134 1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps 
"extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0424 03:12:43.121148 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0424 03:12:43.121363 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope I0424 03:12:45.173849 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kubelet <== Apr 24 03:16:43 minikube kubelet[1949]: E0424 03:16:43.955815 1949 iptables.go:577] "Could not set up iptables canary" err=< Apr 24 03:16:43 minikube kubelet[1949]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option. Apr 24 03:16:43 minikube kubelet[1949]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?) Apr 24 03:16:43 minikube kubelet[1949]: Perhaps ip6tables or your kernel needs to be upgraded. Apr 24 03:16:43 minikube kubelet[1949]: > table="nat" chain="KUBE-KUBELET-CANARY" Apr 24 03:17:03 minikube kubelet[1949]: E0424 03:17:03.542990 1949 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to register layer: lsetxattr security.capability /nginx-ingress-controller: operation not supported" image="registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c" Apr 24 03:17:03 minikube kubelet[1949]: E0424 03:17:03.543021 1949 kuberuntime_image.go:55] "Failed to pull image" err="failed to register layer: lsetxattr security.capability /nginx-ingress-controller: operation not supported" image="registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c" Apr 24 03:17:03 minikube kubelet[1949]: E0424 03:17:03.543117 1949 kuberuntime_manager.go:1256] container &Container{Name:controller,Image:registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Command:[],Args:[/nginx-ingress-controller --election-id=ingress-nginx-leader --controller-class=k8s.io/ingress-nginx --watch-ingress-without-class=true --configmap=$(POD_NAMESPACE)/ingress-nginx-controller --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services --udp-services-configmap=$(POD_NAMESPACE)/udp-services --validating-webhook=:8443 --validating-webhook-certificate=/usr/local/certificates/cert 
--validating-webhook-key=/usr/local/certificates/key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:80,ContainerPort:80,Protocol:TCP,HostIP:,},ContainerPort{Name:https,HostPort:443,ContainerPort:443,Protocol:TCP,HostIP:,},ContainerPort{Name:webhook,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LD_PRELOAD,Value:/usr/local/lib/libmimalloc.so,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{94371840 0} {} 90Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:true,MountPath:/usr/local/certificates/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqq2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10254 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10254 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/wait-shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*101,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ingress-nginx-controller-84df5799c-ld4hn_ingress-nginx(4defd1cf-5766-4c45-88fb-012919bc6a6e): ErrImagePull: failed to register layer: lsetxattr security.capability /nginx-ingress-controller: operation not supported Apr 24 03:17:03 minikube kubelet[1949]: E0424 03:17:03.543135 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ErrImagePull: \"failed to register layer: lsetxattr security.capability /nginx-ingress-controller: operation not supported\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:17:15 minikube kubelet[1949]: E0424 03:17:15.954995 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:17:30 minikube kubelet[1949]: E0424 03:17:30.953293 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:17:42 minikube kubelet[1949]: E0424 03:17:42.953344 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:17:43 minikube kubelet[1949]: E0424 03:17:43.955452 1949 iptables.go:577] "Could not set up iptables canary" err=< Apr 24 03:17:43 minikube kubelet[1949]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option. Apr 24 03:17:43 minikube kubelet[1949]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?) Apr 24 03:17:43 minikube kubelet[1949]: Perhaps ip6tables or your kernel needs to be upgraded. Apr 24 03:17:43 minikube kubelet[1949]: > table="nat" chain="KUBE-KUBELET-CANARY" Apr 24 03:17:53 minikube kubelet[1949]: E0424 03:17:53.953338 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:18:06 minikube kubelet[1949]: E0424 03:18:06.952915 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:18:18 minikube kubelet[1949]: E0424 03:18:18.953395 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:18:33 minikube kubelet[1949]: E0424 03:18:33.956101 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" 
podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:18:43 minikube kubelet[1949]: E0424 03:18:43.955746 1949 iptables.go:577] "Could not set up iptables canary" err=< Apr 24 03:18:43 minikube kubelet[1949]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option. Apr 24 03:18:43 minikube kubelet[1949]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?) Apr 24 03:18:43 minikube kubelet[1949]: Perhaps ip6tables or your kernel needs to be upgraded. Apr 24 03:18:43 minikube kubelet[1949]: > table="nat" chain="KUBE-KUBELET-CANARY" Apr 24 03:18:48 minikube kubelet[1949]: E0424 03:18:48.953671 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:19:02 minikube kubelet[1949]: E0424 03:19:02.953669 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:19:16 minikube kubelet[1949]: E0424 03:19:16.953909 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:19:28 minikube kubelet[1949]: E0424 03:19:28.953349 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:19:39 minikube kubelet[1949]: E0424 03:19:39.953669 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:19:43 minikube kubelet[1949]: E0424 03:19:43.955267 1949 iptables.go:577] "Could not set up iptables canary" err=< Apr 24 03:19:43 minikube kubelet[1949]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option. Apr 24 03:19:43 minikube kubelet[1949]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?) Apr 24 03:19:43 minikube kubelet[1949]: Perhaps ip6tables or your kernel needs to be upgraded. 
Apr 24 03:19:43 minikube kubelet[1949]: > table="nat" chain="KUBE-KUBELET-CANARY" Apr 24 03:19:58 minikube kubelet[1949]: E0424 03:19:58.162408 1949 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to register layer: lsetxattr security.capability /nginx-ingress-controller: operation not supported" image="registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c" Apr 24 03:19:58 minikube kubelet[1949]: E0424 03:19:58.162434 1949 kuberuntime_image.go:55] "Failed to pull image" err="failed to register layer: lsetxattr security.capability /nginx-ingress-controller: operation not supported" image="registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c" Apr 24 03:19:58 minikube kubelet[1949]: E0424 03:19:58.162509 1949 kuberuntime_manager.go:1256] container &Container{Name:controller,Image:registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Command:[],Args:[/nginx-ingress-controller --election-id=ingress-nginx-leader --controller-class=k8s.io/ingress-nginx --watch-ingress-without-class=true --configmap=$(POD_NAMESPACE)/ingress-nginx-controller --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services --udp-services-configmap=$(POD_NAMESPACE)/udp-services --validating-webhook=:8443 --validating-webhook-certificate=/usr/local/certificates/cert --validating-webhook-key=/usr/local/certificates/key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:80,ContainerPort:80,Protocol:TCP,HostIP:,},ContainerPort{Name:https,HostPort:443,ContainerPort:443,Protocol:TCP,HostIP:,},ContainerPort{Name:webhook,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LD_PRELOAD,Value:/usr/local/lib/libmimalloc.so,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{94371840 0} {} 90Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:true,MountPath:/usr/local/certificates/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqq2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10254 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10254 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/wait-shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*101,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ingress-nginx-controller-84df5799c-ld4hn_ingress-nginx(4defd1cf-5766-4c45-88fb-012919bc6a6e): ErrImagePull: failed to register layer: lsetxattr security.capability /nginx-ingress-controller: operation not supported Apr 24 03:19:58 minikube kubelet[1949]: E0424 03:19:58.162524 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ErrImagePull: \"failed to register layer: lsetxattr security.capability /nginx-ingress-controller: operation not supported\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:20:12 minikube kubelet[1949]: E0424 03:20:12.954158 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:20:26 minikube kubelet[1949]: E0424 03:20:26.953249 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:20:40 minikube kubelet[1949]: E0424 03:20:40.953742 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e" Apr 24 03:20:43 minikube kubelet[1949]: E0424 03:20:43.955033 1949 iptables.go:577] "Could not set up iptables canary" err=< Apr 24 03:20:43 minikube kubelet[1949]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option. Apr 24 03:20:43 minikube kubelet[1949]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?) Apr 24 03:20:43 minikube kubelet[1949]: Perhaps ip6tables or your kernel needs to be upgraded. 
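Note: the one hard failure in this stretch is the ErrImagePull at 03:19:58, "lsetxattr security.capability /nginx-ingress-controller: operation not supported". The runtime failed to set an extended attribute while unpacking an image layer, which points at the VM's storage filesystem rather than the registry or the network. One coarse way to probe this from inside the VM (a sketch; it assumes Docker is the container runtime and that setfattr exists in the guest image, which may not hold on the minikube ISO, and note that user.* xattrs can succeed even where security.* ones fail):

$ minikube ssh
$ docker info --format '{{.Driver}} {{.DockerRootDir}}'             # which storage driver backs image layers
$ touch /tmp/xattr-probe
$ setfattr -n user.probe -v 1 /tmp/xattr-probe && echo xattrs-ok    # 'Operation not supported' here would confirm the fs limitation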
Apr 24 03:20:55 minikube kubelet[1949]: E0424 03:20:55.954154 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e"
Apr 24 03:21:09 minikube kubelet[1949]: E0424 03:21:09.954020 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e"
Apr 24 03:21:23 minikube kubelet[1949]: E0424 03:21:23.954289 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e"
Apr 24 03:21:34 minikube kubelet[1949]: E0424 03:21:34.954204 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e"
Apr 24 03:21:43 minikube kubelet[1949]: E0424 03:21:43.955451 1949 iptables.go:577] "Could not set up iptables canary" err=<
Apr 24 03:21:43 minikube kubelet[1949]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Apr 24 03:21:43 minikube kubelet[1949]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Apr 24 03:21:43 minikube kubelet[1949]: Perhaps ip6tables or your kernel needs to be upgraded.
Apr 24 03:21:43 minikube kubelet[1949]: > table="nat" chain="KUBE-KUBELET-CANARY"
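Because the underlying pull fails deterministically, the ImagePullBackOff retries above will keep cycling rather than recover. Two ways to confirm that outside the kubelet (a sketch; it assumes the pod name from the log is still current and that kubectl is pointed at this cluster):

$ minikube image pull registry.k8s.io/ingress-nginx/controller:v1.10.0          # expected to reproduce the same lsetxattr failure
$ kubectl -n ingress-nginx describe pod ingress-nginx-controller-84df5799c-ld4hn # Events section shows the ErrImagePull behind the back-off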
Apr 24 03:21:45 minikube kubelet[1949]: E0424 03:21:45.954073 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e"
Apr 24 03:21:57 minikube kubelet[1949]: E0424 03:21:57.953840 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e"
Apr 24 03:22:11 minikube kubelet[1949]: E0424 03:22:11.953003 1949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/ingress-nginx/controller:v1.10.0@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c\\\"\"" pod="ingress-nginx/ingress-nginx-controller-84df5799c-ld4hn" podUID="4defd1cf-5766-4c45-88fb-012919bc6a6e"

==> storage-provisioner [9a96a44d956f] <==
I0424 03:12:59.278441 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0424 03:12:59.290880 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: no route to host

==> storage-provisioner [becb3db03dec] <==
I0424 03:13:00.058036 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0424 03:13:00.061146 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0424 03:13:00.061266 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0424 03:13:00.063509 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0424 03:13:00.063714 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59357bb2-9e01-4a35-ae1a-2fd763b45543", APIVersion:"v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_27e2243c-a71b-45c4-9a7e-28047f425aa1 became leader
I0424 03:13:00.063758 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_27e2243c-a71b-45c4-9a7e-28047f425aa1!
I0424 03:13:00.164189 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_27e2243c-a71b-45c4-9a7e-28047f425aa1!
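Note: the first storage-provisioner container (9a96a44d956f) died because it could not reach the apiserver ClusterIP 10.96.0.1:443 at startup, while its replacement (becb3db03dec) connected, acquired the leader lease, and kept running, so this looks like a one-off boot race rather than an ongoing fault. A quick way to confirm (a sketch; assumes kubectl is pointed at this cluster):

$ kubectl -n kube-system get pod storage-provisioner               # should be Running, likely with 1 restart
$ kubectl -n kube-system logs storage-provisioner --previous       # shows the earlier 'no route to host' crash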