* 
* ==> Audit <==
* |--------------|-------------------|----------|-----------|---------|---------------------|---------------------|
|   Command    |       Args        | Profile  |   User    | Version |     Start Time      |      End Time       |
|--------------|-------------------|----------|-----------|---------|---------------------|---------------------|
| update-check |                   | minikube | codespace | v1.30.1 | 21 Jul 23 22:14 UTC | 21 Jul 23 22:14 UTC |
| start        |                   | minikube | codespace | v1.31.1 | 21 Jul 23 22:15 UTC | 21 Jul 23 22:17 UTC |
| docker-env   | --alsologtostderr | minikube | codespace | v1.31.1 | 21 Jul 23 22:17 UTC |                     |
|--------------|-------------------|----------|-----------|---------|---------------------|---------------------|
* 
* ==> Last Start <==
* Log file created at: 2023/07/21 22:15:18
Running on machine: codespaces-e39d60
Binary: Built with gc go1.20.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0721 22:15:18.163610 4021 out.go:296] Setting OutFile to fd 1 ...
I0721 22:15:18.163770 4021 out.go:348] isatty.IsTerminal(1) = true
I0721 22:15:18.163773 4021 out.go:309] Setting ErrFile to fd 2...
I0721 22:15:18.163778 4021 out.go:348] isatty.IsTerminal(2) = true
I0721 22:15:18.164005 4021 root.go:338] Updating PATH: /home/codespace/.minikube/bin
W0721 22:15:18.164121 4021 root.go:314] Error reading config file at /home/codespace/.minikube/config/config.json: open /home/codespace/.minikube/config/config.json: no such file or directory
I0721 22:15:18.164709 4021 out.go:303] Setting JSON to false
I0721 22:15:18.165484 4021 start.go:128] hostinfo: {"hostname":"codespaces-e39d60","uptime":5711,"bootTime":1689972008,"procs":16,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-azure","kernelArch":"x86_64","virtualizationSystem":"docker","virtualizationRole":"guest","hostId":"cb8350e0-b93b-0243-b4a4-5e52bd2aa11d"}
I0721 22:15:18.165561 4021 start.go:138] virtualization: docker guest
I0721 22:15:18.181728 4021 out.go:177] 😄 minikube v1.31.1 on Ubuntu 20.04 (docker/amd64)
W0721 22:15:18.212991 4021 preload.go:295] Failed to list preload files: open /home/codespace/.minikube/cache/preloaded-tarball: no such file or directory
I0721 22:15:18.213098 4021 notify.go:220] Checking for updates...
I0721 22:15:18.213342 4021 driver.go:373] Setting default libvirt URI to qemu:///system
I0721 22:15:18.213377 4021 global.go:111] Querying for installed drivers using PATH=/home/codespace/.minikube/bin:/home/codespace/.rbenv/shims:/home/codespace/.dotfiles/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/rvm/gems/ruby-3.2.2/bin:/usr/local/rvm/gems/ruby-3.2.2@global/bin:/usr/local/rvm/rubies/ruby-3.2.2/bin:/vscode/bin/linux-x64/660393deaaa6d1996740ff4880f1bad43768c814/bin/remote-cli:/home/codespace/.local/bin:/home/codespace/.dotnet:/home/codespace/nvm/current/bin:/home/codespace/.php/current/bin:/home/codespace/.python/current/bin:/home/codespace/java/current/bin:/home/codespace/.ruby/current/bin:/usr/local/oryx:/usr/local/go/bin:/go/bin:/usr/local/sdkman/bin:/usr/local/sdkman/candidates/java/current/bin:/usr/local/sdkman/candidates/gradle/current/bin:/usr/local/sdkman/candidates/maven/current/bin:/usr/local/sdkman/candidates/ant/current/bin:/usr/local/rvm/gems/default/bin:/usr/local/rvm/gems/default@global/bin:/usr/local/rvm/rubies/default/bin:/usr/local/share/rbenv/bin:/opt/conda/bin:/usr/local/php/current/bin:/usr/local/python/current/bin:/usr/local/py-utils/bin:/usr/local/nvs:/usr/local/share/nvm/versions/node/v20.3.0/bin:/usr/local/hugo/bin:/usr/local/dotnet/current:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/rvm/bin:/home/codespace/.jenv/bin:/home/codespace/.jenv/shims:/home/codespace/google-cloud-sdk/bin:/home/codespace/.nvm:/home/codespace/.nvm/versions/node/v10.16.0/bin:/home/codespace/bin:/Library/Apple/usr/bin
I0721 22:15:18.213723 4021 global.go:122] vmware default: false priority: 5, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "vmrun": executable file not found in $PATH Reason: Fix:Install vmrun Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/ Version:}
I0721 22:15:20.759108 4021 docker.go:121] docker version: linux-20.10.25+azure-2:
I0721 22:15:20.759422 4021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0721 22:15:23.257037 4021 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.497552s)
I0721 22:15:23.257480 4021 info.go:266] docker info: {ID:HLI4:QJGT:EWTO:KKN4:4JO2:M5PQ:46WV:RDOI:SUOX:WQDP:3VLE:POXK Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:39 SystemTime:2023-07-21 22:15:20.8265787 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-azure OperatingSystem:Ubuntu 20.04.6 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:4104077312 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:codespaces-e39d60 Labels:[] ExperimentalBuild:false ServerVersion:20.10.25+azure-2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:860f061b76bb4fc671f0f9e900f7d80ff93d4eb7 Expected:860f061b76bb4fc671f0f9e900f7d80ff93d4eb7} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:0.11.0+azure-1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.18.1+azure-2]] Warnings:}}
I0721 22:15:23.257597 4021 docker.go:294] overlay module found
I0721 22:15:23.257606 4021 global.go:122] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0721 22:15:23.258080 4021 global.go:122] kvm2 default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Reason: Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/ Version:}
I0721 22:15:23.271095 4021 global.go:122] none default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0721 22:15:23.271511 4021 global.go:122] podman default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:}
I0721 22:15:23.271800 4021 global.go:122] qemu2 default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "qemu-system-x86_64": executable file not found in $PATH Reason: Fix:Install qemu-system Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/qemu/ Version:}
I0721 22:15:23.271808 4021 global.go:122] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0721 22:15:23.272190 4021 global.go:122] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/ Version:}
I0721 22:15:23.272216 4021 driver.go:308] not recommending "none" due to default: false
I0721 22:15:23.272220 4021 driver.go:308] not recommending "ssh" due to default: false
I0721 22:15:23.272230 4021 driver.go:343] Picked: docker
I0721 22:15:23.272237 4021 driver.go:344] Alternatives: [none ssh]
I0721 22:15:23.272241 4021 driver.go:345] Rejects: [vmware kvm2 podman qemu2 virtualbox]
I0721 22:15:23.289866 4021 out.go:177] ✨ Automatically selected the docker driver. Other choices: none, ssh
I0721 22:15:23.320403 4021 start.go:298] selected driver: docker
I0721 22:15:23.320413 4021 start.go:898] validating driver "docker" against
I0721 22:15:23.320449 4021 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0721 22:15:23.320929 4021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0721 22:15:23.409233 4021 info.go:266] docker info: {ID:HLI4:QJGT:EWTO:KKN4:4JO2:M5PQ:46WV:RDOI:SUOX:WQDP:3VLE:POXK Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:39 SystemTime:2023-07-21 22:15:23.3415248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-azure OperatingSystem:Ubuntu 20.04.6 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:4104077312 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:codespaces-e39d60 Labels:[] ExperimentalBuild:false ServerVersion:20.10.25+azure-2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:860f061b76bb4fc671f0f9e900f7d80ff93d4eb7 Expected:860f061b76bb4fc671f0f9e900f7d80ff93d4eb7} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:0.11.0+azure-1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.18.1+azure-2]] Warnings:}}
I0721 22:15:23.409387 4021 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0721 22:15:23.409679 4021 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=3913MB, container=3913MB
I0721 22:15:23.415230 4021 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
I0721 22:15:23.431144 4021 out.go:177] 📌 Using Docker driver with root privileges
I0721 22:15:23.446526 4021 cni.go:84] Creating CNI manager for ""
I0721 22:15:23.446551 4021 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0721 22:15:23.446560 4021 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0721 22:15:23.446575 4021 start_flags.go:319] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/codespace:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0721 22:15:23.462598 4021 out.go:177] 👍 Starting control plane node minikube in cluster minikube
I0721 22:15:23.480927 4021 cache.go:122] Beginning downloading kic base image for docker with docker
I0721 22:15:23.498071 4021 out.go:177] 🚜 Pulling base image ...
I0721 22:15:23.515727 4021 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
I0721 22:15:23.515784 4021 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
I0721 22:15:23.537201 4021 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
I0721 22:15:23.537347 4021 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
I0721 22:15:23.537469 4021 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
I0721 22:15:23.542927 4021 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
I0721 22:15:23.542940 4021 cache.go:57] Caching tarball of preloaded images
I0721 22:15:23.543066 4021 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
I0721 22:15:23.558814 4021 out.go:177] 💾 Downloading Kubernetes v1.27.3 preload ...
I0721 22:15:23.573605 4021 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 ...
I0721 22:15:23.622532 4021 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4?checksum=md5:90b30902fa911e3bcfdde5b24cedf0b2 -> /home/codespace/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
I0721 22:15:30.208966 4021 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 ...
I0721 22:15:30.209088 4021 preload.go:256] verifying checksum of /home/codespace/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 ...
I0721 22:15:32.421556 4021 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
I0721 22:15:32.421586 4021 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
I0721 22:15:32.604556 4021 cache.go:60] Finished verifying existence of preloaded tar for v1.27.3 on docker
I0721 22:15:32.605038 4021 profile.go:148] Saving config to /home/codespace/.minikube/profiles/minikube/config.json ...
I0721 22:15:32.605866 4021 lock.go:35] WriteFile acquiring /home/codespace/.minikube/profiles/minikube/config.json: {Name:mkec4d5194e13a60f082327d4967b1c1d7be06d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 22:16:09.206764 4021 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
I0721 22:16:09.311689 4021 cache.go:195] Successfully downloaded all kic artifacts
I0721 22:16:09.466781 4021 start.go:365] acquiring machines lock for minikube: {Name:mk9977b0a48967319f600c4ece50922cfa145934 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0721 22:16:09.530526 4021 start.go:369] acquired machines lock for "minikube" in 16.2181ms
I0721 22:16:09.557874 4021 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/codespace:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0721 22:16:09.598192 4021 start.go:125] createHost starting for "" (driver="docker")
I0721 22:16:09.788119 4021 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
I0721 22:16:10.138766 4021 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I0721 22:16:10.138794 4021 client.go:168] LocalClient.Create starting
I0721 22:16:10.182078 4021 main.go:141] libmachine: Creating CA: /home/codespace/.minikube/certs/ca.pem
I0721 22:16:10.501371 4021 main.go:141] libmachine: Creating client certificate: /home/codespace/.minikube/certs/cert.pem
I0721 22:16:10.651651 4021 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0721 22:16:10.738836 4021 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0721 22:16:10.739081 4021 network_create.go:281] running [docker network inspect minikube] to gather additional debugging logs...
I0721 22:16:10.739095 4021 cli_runner.go:164] Run: docker network inspect minikube
W0721 22:16:10.761152 4021 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I0721 22:16:10.761169 4021 network_create.go:284] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0721 22:16:10.761180 4021 network_create.go:286] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: minikube

** /stderr **
I0721 22:16:10.761396 4021 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0721 22:16:10.782390 4021 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc01a7497e0}
I0721 22:16:10.782482 4021 network_create.go:123] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0721 22:16:10.783360 4021 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I0721 22:16:11.416948 4021 network_create.go:107] docker network minikube 192.168.49.0/24 created
I0721 22:16:11.417231 4021 kic.go:117] calculated static IP "192.168.49.2" for the "minikube" container
I0721 22:16:11.417751 4021 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0721 22:16:11.477312 4021 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0721 22:16:11.543149 4021 oci.go:103] Successfully created a docker volume minikube
I0721 22:16:11.543512 4021 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
I0721 22:16:14.316317 4021 cli_runner.go:217] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (2.7727544s)
I0721 22:16:14.316334 4021 oci.go:107] Successfully prepared a docker volume minikube
I0721 22:16:14.316362 4021 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
I0721 22:16:14.324808 4021 kic.go:190] Starting extracting preloaded images to volume ...
I0721 22:16:14.325047 4021 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/codespace/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
I0721 22:16:25.540383 4021 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/codespace/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (11.1998945s)
I0721 22:16:25.540402 4021 kic.go:199] duration metric: took 11.215607 seconds to extract preloaded images to volume
W0721 22:16:25.555073 4021 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0721 22:16:25.555114 4021 oci.go:240] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0721 22:16:25.555316 4021 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0721 22:16:27.896453 4021 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.341095s)
I0721 22:16:27.896705 4021 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=2200mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
I0721 22:16:29.252983 4021 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=2200mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631: (1.3559712s)
I0721 22:16:29.255222 4021 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0721 22:16:29.282731 4021 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0721 22:16:29.315494 4021 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0721 22:16:29.445710 4021 oci.go:144] the created container "minikube" has a running status.
I0721 22:16:29.445732 4021 kic.go:221] Creating ssh key for kic: /home/codespace/.minikube/machines/minikube/id_rsa...
I0721 22:16:29.987310 4021 kic_runner.go:191] docker (temp): /home/codespace/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0721 22:16:30.162373 4021 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0721 22:16:30.198715 4021 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0721 22:16:30.198730 4021 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0721 22:16:30.319207 4021 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0721 22:16:30.374445 4021 machine.go:88] provisioning docker machine ...
I0721 22:16:30.374787 4021 ubuntu.go:169] provisioning hostname "minikube"
I0721 22:16:30.375021 4021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0721 22:16:30.425059 4021 main.go:141] libmachine: Using SSH client type: native
I0721 22:16:30.444950 4021 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80eb00] 0x811ba0 [] 0s} 127.0.0.1 32772 }
I0721 22:16:30.444964 4021 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0721 22:16:30.479246 4021 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0721 22:16:34.260717 4021 main.go:141] libmachine: SSH cmd err, output: : minikube
I0721 22:16:34.280339 4021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0721 22:16:34.366475 4021 main.go:141] libmachine: Using SSH client type: native
I0721 22:16:34.367041 4021 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80eb00] 0x811ba0 [] 0s} 127.0.0.1 32772 }
I0721 22:16:34.367059 4021 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
	else
		echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
	fi
fi
I0721 22:16:34.537208 4021 main.go:141] libmachine: SSH cmd err, output: :
I0721 22:16:34.537234 4021 ubuntu.go:175] set auth options {CertDir:/home/codespace/.minikube CaCertPath:/home/codespace/.minikube/certs/ca.pem CaPrivateKeyPath:/home/codespace/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/codespace/.minikube/machines/server.pem ServerKeyPath:/home/codespace/.minikube/machines/server-key.pem ClientKeyPath:/home/codespace/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/codespace/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/codespace/.minikube}
I0721 22:16:34.537275 4021 ubuntu.go:177] setting up certificates
I0721 22:16:34.537296 4021 provision.go:83] configureAuth start
I0721 22:16:34.537629 4021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0721 22:16:34.571448 4021 provision.go:138] copyHostCerts
I0721 22:16:34.586274 4021 exec_runner.go:151] cp: /home/codespace/.minikube/certs/ca.pem --> /home/codespace/.minikube/ca.pem (1086 bytes)
I0721 22:16:34.589848 4021 exec_runner.go:151] cp: /home/codespace/.minikube/certs/cert.pem --> /home/codespace/.minikube/cert.pem (1131 bytes)
I0721 22:16:34.592682 4021 exec_runner.go:151] cp: /home/codespace/.minikube/certs/key.pem --> /home/codespace/.minikube/key.pem (1679 bytes)
I0721 22:16:34.592928 4021 provision.go:112] generating server cert: /home/codespace/.minikube/machines/server.pem ca-key=/home/codespace/.minikube/certs/ca.pem private-key=/home/codespace/.minikube/certs/ca-key.pem org=codespace.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0721 22:16:34.788369 4021 provision.go:172] copyRemoteCerts I0721 22:16:34.788606 4021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0721 22:16:34.788769 4021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0721 22:16:34.813500 4021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/codespace/.minikube/machines/minikube/id_rsa Username:docker} I0721 22:16:34.946499 4021 ssh_runner.go:362] scp /home/codespace/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes) I0721 22:16:35.056009 4021 ssh_runner.go:362] scp /home/codespace/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0721 22:16:35.097216 4021 ssh_runner.go:362] scp /home/codespace/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1086 bytes) I0721 22:16:35.137265 4021 provision.go:86] duration metric: configureAuth took 599.9541ms I0721 22:16:35.137285 4021 ubuntu.go:193] setting minikube options for container-runtime I0721 22:16:35.141505 4021 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3 I0721 22:16:35.141768 4021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0721 22:16:35.174130 4021 main.go:141] libmachine: Using SSH client type: native I0721 22:16:35.174711 4021 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80eb00] 0x811ba0 [] 0s} 127.0.0.1 32772 } I0721 22:16:35.174720 4021 main.go:141] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0721 22:16:35.346919 4021 main.go:141] libmachine: SSH cmd err, output: : overlay I0721 22:16:35.346935 4021 ubuntu.go:71] root file system type: overlay I0721 22:16:35.347693 4021 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... 
I0721 22:16:35.362426 4021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0721 22:16:35.402178 4021 main.go:141] libmachine: Using SSH client type: native I0721 22:16:35.402568 4021 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80eb00] 0x811ba0 [] 0s} 127.0.0.1 32772 } I0721 22:16:35.402653 4021 main.go:141] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %s "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0721 22:16:35.553511 4021 main.go:141] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. 
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0721 22:16:35.563484 4021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0721 22:16:35.586725 4021 main.go:141] libmachine: Using SSH client type: native I0721 22:16:35.587131 4021 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80eb00] 0x811ba0 [] 0s} 127.0.0.1 32772 } I0721 22:16:35.587145 4021 main.go:141] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0721 22:16:37.218919 4021 main.go:141] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2023-07-07 14:50:55.000000000 +0000 +++ /lib/systemd/system/docker.service.new 
2023-07-21 22:16:35.546653700 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com -After=network-online.target docker.socket firewalld.service containerd.service time-set.target -Wants=network-online.target containerd.service +BindsTo=containerd.service +After=network-online.target firewalld.service containerd.service +Wants=network-online.target Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutStartSec=0 -RestartSec=2 -Restart=always +Restart=on-failure -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. 
Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. +ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. 
Executing: /lib/systemd/systemd-sysv-install enable docker I0721 22:16:37.218941 4021 machine.go:91] provisioned docker machine in 6.8444834s I0721 22:16:37.218948 4021 client.go:171] LocalClient.Create took 27.0801508s I0721 22:16:37.218969 4021 start.go:167] duration metric: libmachine.API.Create for "minikube" took 27.0802124s I0721 22:16:37.219236 4021 start.go:300] post-start starting for "minikube" (driver="docker") I0721 22:16:37.234413 4021 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0721 22:16:37.234683 4021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0721 22:16:37.234818 4021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0721 22:16:37.294036 4021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/codespace/.minikube/machines/minikube/id_rsa Username:docker} I0721 22:16:37.458765 4021 ssh_runner.go:195] Run: cat /etc/os-release I0721 22:16:37.468684 4021 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0721 22:16:37.468716 4021 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0721 22:16:37.468729 4021 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0721 22:16:37.468737 4021 info.go:137] Remote host: Ubuntu 22.04.2 LTS I0721 22:16:37.468750 4021 filesync.go:126] Scanning /home/codespace/.minikube/addons for local assets ... 
I0721 22:16:37.473681 4021 filesync.go:126] Scanning /home/codespace/.minikube/files for local assets ... I0721 22:16:37.481824 4021 start.go:303] post-start completed in 262.5667ms I0721 22:16:37.482413 4021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0721 22:16:37.529525 4021 profile.go:148] Saving config to /home/codespace/.minikube/profiles/minikube/config.json ... I0721 22:16:37.530066 4021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0721 22:16:37.530202 4021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0721 22:16:37.553803 4021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/codespace/.minikube/machines/minikube/id_rsa Username:docker} I0721 22:16:37.669012 4021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'" I0721 22:16:37.674202 4021 start.go:128] duration metric: createHost completed in 28.0759833s I0721 22:16:37.677117 4021 start.go:83] releasing machines lock for "minikube", held for 28.1465649s I0721 22:16:37.677373 4021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0721 22:16:37.715698 4021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/ I0721 22:16:37.733152 4021 ssh_runner.go:195] Run: cat /version.json I0721 22:16:37.733157 4021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0721 22:16:37.733361 4021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0721 22:16:37.775764 4021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/codespace/.minikube/machines/minikube/id_rsa Username:docker} I0721 22:16:37.788360 
4021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/codespace/.minikube/machines/minikube/id_rsa Username:docker} I0721 22:16:38.407480 4021 ssh_runner.go:195] Run: systemctl --version I0721 22:16:38.423143 4021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*" I0721 22:16:38.451423 4021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ; I0721 22:16:38.519540 4021 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found I0721 22:16:38.519889 4021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ; I0721 22:16:38.586766 4021 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s) I0721 22:16:38.586801 4021 start.go:466] detecting cgroup driver to use... I0721 22:16:38.586839 4021 detect.go:199] detected "systemd" cgroup driver on host os I0721 22:16:38.619304 4021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml" I0721 22:16:38.642544 4021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml" I0721 22:16:38.656315 4021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml" I0721 22:16:38.668783 4021 containerd.go:145] configuring containerd to use "systemd" as cgroup driver... 
I0721 22:16:38.669085 4021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml" I0721 22:16:38.683939 4021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml" I0721 22:16:38.701950 4021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml" I0721 22:16:38.715814 4021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml" I0721 22:16:38.729602 4021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk" I0721 22:16:38.753887 4021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml" I0721 22:16:38.768579 4021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables I0721 22:16:38.792631 4021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" I0721 22:16:38.805369 4021 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0721 22:16:38.945730 4021 ssh_runner.go:195] Run: sudo systemctl restart containerd I0721 22:16:39.161512 4021 start.go:466] detecting cgroup driver to use... 
I0721 22:16:39.161554 4021 detect.go:199] detected "systemd" cgroup driver on host os I0721 22:16:39.161834 4021 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0721 22:16:39.204636 4021 cruntime.go:276] skipping containerd shutdown because we are bound to it I0721 22:16:39.204931 4021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0721 22:16:39.233191 4021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml" I0721 22:16:39.266215 4021 ssh_runner.go:195] Run: which cri-dockerd I0721 22:16:39.273182 4021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d I0721 22:16:39.302597 4021 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes) I0721 22:16:39.332995 4021 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0721 22:16:39.533496 4021 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0721 22:16:39.682053 4021 docker.go:535] configuring docker to use "systemd" as cgroup driver... 
I0721 22:16:39.682087 4021 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes) I0721 22:16:39.713205 4021 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0721 22:16:39.914423 4021 ssh_runner.go:195] Run: sudo systemctl restart docker I0721 22:16:40.423312 4021 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0721 22:16:40.563213 4021 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket I0721 22:16:40.692493 4021 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0721 22:16:40.829064 4021 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0721 22:16:40.953750 4021 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket I0721 22:16:40.970438 4021 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0721 22:16:41.122138 4021 ssh_runner.go:195] Run: sudo systemctl restart cri-docker I0721 22:16:42.328753 4021 ssh_runner.go:235] Completed: sudo systemctl restart cri-docker: (1.206585s) I0721 22:16:42.328772 4021 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock I0721 22:16:42.343457 4021 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock I0721 22:16:42.348047 4021 start.go:534] Will wait 60s for crictl version I0721 22:16:42.348309 4021 ssh_runner.go:195] Run: which crictl I0721 22:16:42.352268 4021 ssh_runner.go:195] Run: sudo /usr/bin/crictl version I0721 22:16:43.290354 4021 start.go:550] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 24.0.4 RuntimeApiVersion: v1 I0721 22:16:43.290607 4021 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0721 22:16:43.779897 4021 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0721 22:16:43.849492 4021 out.go:204] 🐳 Preparing Kubernetes v1.27.3 on Docker 24.0.4 ... 
I0721 22:16:43.860245 4021 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0721 22:16:43.908261 4021 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I0721 22:16:43.918211 4021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0721 22:16:44.005148 4021 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker I0721 22:16:44.015545 4021 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0721 22:16:44.037356 4021 docker.go:636] Got preloaded images: -- stdout -- registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/coredns/coredns:v1.10.1 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/pause:3.9 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0721 22:16:44.037374 4021 docker.go:566] Images already preloaded, skipping extraction I0721 22:16:44.037634 4021 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0721 22:16:44.058542 4021 docker.go:636] Got preloaded images: -- stdout -- registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/coredns/coredns:v1.10.1 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/pause:3.9 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0721 22:16:44.058572 4021 cache_images.go:84] 
Images are preloaded, skipping loading I0721 22:16:44.068491 4021 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0721 22:16:45.290822 4021 ssh_runner.go:235] Completed: docker info --format {{.CgroupDriver}}: (1.2223027s) I0721 22:16:45.290862 4021 cni.go:84] Creating CNI manager for "" I0721 22:16:45.290874 4021 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge I0721 22:16:45.291538 4021 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0721 22:16:45.291577 4021 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true} I0721 22:16:45.291828 4021 kubeadm.go:181] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 
24h0m0s usages: - signing - authentication nodeRegistration: criSocket: unix:///var/run/cri-dockerd.sock name: "minikube" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.27.3 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: systemd hairpinMode: hairpin-veth runtimeRequestTimeout: 15m clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0721 22:16:45.324072 4021 kubeadm.go:976] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet 
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.27.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0721 22:16:45.324429 4021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3 I0721 22:16:45.361911 4021 binaries.go:44] Found k8s binaries, skipping transfer I0721 22:16:45.362117 4021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0721 22:16:45.380721 4021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes) I0721 22:16:45.447828 4021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0721 22:16:45.472039 4021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2090 bytes) I0721 22:16:45.495385 4021 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0721 22:16:45.499272 4021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0721 22:16:45.515684 4021 certs.go:56] Setting up /home/codespace/.minikube/profiles/minikube for IP: 192.168.49.2 I0721 22:16:45.515712 4021 certs.go:190] acquiring lock for shared ca certs: {Name:mkfa96ed18af5943665a9c19bbdd27c28387f4c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:} 
I0721 22:16:45.522532 4021 certs.go:204] generating minikubeCA CA: /home/codespace/.minikube/ca.key I0721 22:16:46.120377 4021 crypto.go:156] Writing cert to /home/codespace/.minikube/ca.crt ... I0721 22:16:46.120432 4021 lock.go:35] WriteFile acquiring /home/codespace/.minikube/ca.crt: {Name:mk878182c9c6b580ee454eee459cb707424f8751 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0721 22:16:46.120685 4021 crypto.go:164] Writing key to /home/codespace/.minikube/ca.key ... I0721 22:16:46.120696 4021 lock.go:35] WriteFile acquiring /home/codespace/.minikube/ca.key: {Name:mke3c9a86ad5be7b3cf8319275b4ff99a19632af Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0721 22:16:46.120825 4021 certs.go:204] generating proxyClientCA CA: /home/codespace/.minikube/proxy-client-ca.key I0721 22:16:46.394207 4021 crypto.go:156] Writing cert to /home/codespace/.minikube/proxy-client-ca.crt ... I0721 22:16:46.394221 4021 lock.go:35] WriteFile acquiring /home/codespace/.minikube/proxy-client-ca.crt: {Name:mkb043b67c84dea80ee20ee7ab28ad995d6d81e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0721 22:16:46.394449 4021 crypto.go:164] Writing key to /home/codespace/.minikube/proxy-client-ca.key ... I0721 22:16:46.394458 4021 lock.go:35] WriteFile acquiring /home/codespace/.minikube/proxy-client-ca.key: {Name:mk98075b7cbed4dca23d5c990420df0706ed8eef Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0721 22:16:46.394596 4021 certs.go:319] generating minikube-user signed cert: /home/codespace/.minikube/profiles/minikube/client.key I0721 22:16:46.394612 4021 crypto.go:68] Generating cert /home/codespace/.minikube/profiles/minikube/client.crt with IP's: [] I0721 22:16:46.511606 4021 crypto.go:156] Writing cert to /home/codespace/.minikube/profiles/minikube/client.crt ... 
I0721 22:16:46.511620 4021 lock.go:35] WriteFile acquiring /home/codespace/.minikube/profiles/minikube/client.crt: {Name:mk1375d49d1beed1633ee911f18b5429b734f48a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0721 22:16:46.511848 4021 crypto.go:164] Writing key to /home/codespace/.minikube/profiles/minikube/client.key ... I0721 22:16:46.511857 4021 lock.go:35] WriteFile acquiring /home/codespace/.minikube/profiles/minikube/client.key: {Name:mkde2515628ddd136746ae1fb9f7554a77849955 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0721 22:16:46.511951 4021 certs.go:319] generating minikube signed cert: /home/codespace/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I0721 22:16:46.511970 4021 crypto.go:68] Generating cert /home/codespace/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] I0721 22:16:46.689014 4021 crypto.go:156] Writing cert to /home/codespace/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ... I0721 22:16:46.689028 4021 lock.go:35] WriteFile acquiring /home/codespace/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk953957f9894f2a4b49935323ddf4d91fda9ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0721 22:16:46.689223 4021 crypto.go:164] Writing key to /home/codespace/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ... 
I0721 22:16:46.689229    4021 lock.go:35] WriteFile acquiring /home/codespace/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkfdd0948fd2a42f4a09111a8df56e0bbd5874e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 22:16:46.689320    4021 certs.go:337] copying /home/codespace/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/codespace/.minikube/profiles/minikube/apiserver.crt
I0721 22:16:46.703494    4021 certs.go:341] copying /home/codespace/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/codespace/.minikube/profiles/minikube/apiserver.key
I0721 22:16:46.703609    4021 certs.go:319] generating aggregator signed cert: /home/codespace/.minikube/profiles/minikube/proxy-client.key
I0721 22:16:46.703640    4021 crypto.go:68] Generating cert /home/codespace/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0721 22:16:46.834870    4021 crypto.go:156] Writing cert to /home/codespace/.minikube/profiles/minikube/proxy-client.crt ...
I0721 22:16:46.834886    4021 lock.go:35] WriteFile acquiring /home/codespace/.minikube/profiles/minikube/proxy-client.crt: {Name:mk405302f794ebdffcdd8f62300e8ed3b1d072cc Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 22:16:46.835078    4021 crypto.go:164] Writing key to /home/codespace/.minikube/profiles/minikube/proxy-client.key ...
I0721 22:16:46.835086    4021 lock.go:35] WriteFile acquiring /home/codespace/.minikube/profiles/minikube/proxy-client.key: {Name:mk044f8b80f6abd01f3e0d0669a3df1e10f17546 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 22:16:46.835479    4021 certs.go:437] found cert: /home/codespace/.minikube/certs/home/codespace/.minikube/certs/ca-key.pem (1679 bytes)
I0721 22:16:46.835524    4021 certs.go:437] found cert: /home/codespace/.minikube/certs/home/codespace/.minikube/certs/ca.pem (1086 bytes)
I0721 22:16:46.835558    4021 certs.go:437] found cert: /home/codespace/.minikube/certs/home/codespace/.minikube/certs/cert.pem (1131 bytes)
I0721 22:16:46.835588    4021 certs.go:437] found cert: /home/codespace/.minikube/certs/home/codespace/.minikube/certs/key.pem (1679 bytes)
I0721 22:16:47.074173    4021 ssh_runner.go:362] scp /home/codespace/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0721 22:16:47.113383    4021 ssh_runner.go:362] scp /home/codespace/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0721 22:16:47.158237    4021 ssh_runner.go:362] scp /home/codespace/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0721 22:16:47.204630    4021 ssh_runner.go:362] scp /home/codespace/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0721 22:16:47.237742    4021 ssh_runner.go:362] scp /home/codespace/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0721 22:16:47.273454    4021 ssh_runner.go:362] scp /home/codespace/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0721 22:16:47.307620    4021 ssh_runner.go:362] scp /home/codespace/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0721 22:16:47.350546    4021 ssh_runner.go:362] scp /home/codespace/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0721 22:16:47.389292    4021 ssh_runner.go:362] scp /home/codespace/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0721 22:16:47.424314    4021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0721 22:16:47.471200    4021 ssh_runner.go:195] Run: openssl version
I0721 22:16:47.541541    4021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0721 22:16:47.608691    4021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0721 22:16:47.614676    4021 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 21 22:16 /usr/share/ca-certificates/minikubeCA.pem
I0721 22:16:47.614899    4021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0721 22:16:47.646354    4021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0721 22:16:47.676122    4021 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0721 22:16:47.688243    4021 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:

stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0721 22:16:47.688359    4021 kubeadm.go:404] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/codespace:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0721 22:16:47.688706    4021 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0721 22:16:47.737094    4021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0721 22:16:47.778390    4021 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0721 22:16:47.800307    4021 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0721 22:16:47.800879    4021 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0721 22:16:47.812325    4021 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0721 22:16:47.812368    4021 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0721 22:16:49.021828    4021 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
I0721 22:16:49.021906    4021 kubeadm.go:322] [preflight] Running pre-flight checks
I0721 22:16:49.091316    4021 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
I0721 22:16:49.091403    4021 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-azure
I0721 22:16:49.091450    4021 kubeadm.go:322] OS: Linux
I0721 22:16:49.091506    4021 kubeadm.go:322] CGROUPS_CPU: enabled
I0721 22:16:49.091572    4021 kubeadm.go:322] CGROUPS_CPUSET: enabled
I0721 22:16:49.091629    4021 kubeadm.go:322] CGROUPS_DEVICES: enabled
I0721 22:16:49.091684    4021 kubeadm.go:322] CGROUPS_FREEZER: enabled
I0721 22:16:49.091743    4021 kubeadm.go:322] CGROUPS_MEMORY: enabled
I0721 22:16:49.091802    4021 kubeadm.go:322] CGROUPS_PIDS: enabled
I0721 22:16:49.091867    4021 kubeadm.go:322] CGROUPS_HUGETLB: enabled
I0721 22:16:49.091924    4021 kubeadm.go:322] CGROUPS_IO: enabled
I0721 22:16:51.155587    4021 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0721 22:16:51.155685    4021 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0721 22:16:51.155768    4021 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0721 22:16:51.712205    4021 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0721 22:16:51.732893    4021 out.go:204]     ▪ Generating certificates and keys ...
I0721 22:16:51.733159    4021 kubeadm.go:322] [certs] Using existing ca certificate authority
I0721 22:16:51.733244    4021 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0721 22:16:51.841258    4021 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0721 22:16:51.933225    4021 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0721 22:16:52.175571    4021 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0721 22:16:52.241108    4021 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0721 22:16:52.398188    4021 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0721 22:16:52.398598    4021 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0721 22:16:52.464146    4021 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0721 22:16:52.464542    4021 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0721 22:16:52.570003    4021 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0721 22:16:52.713384    4021 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0721 22:16:52.967386    4021 kubeadm.go:322] [certs] Generating "sa" key and public key
I0721 22:16:52.967707    4021 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0721 22:16:53.058960    4021 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0721 22:16:53.156353    4021 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0721 22:16:53.506431    4021 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0721 22:16:53.623647    4021 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0721 22:16:53.637748    4021 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0721 22:16:53.638828    4021 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0721 22:16:53.638922    4021 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0721 22:16:53.774940    4021 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0721 22:16:53.804436    4021 out.go:204]     ▪ Booting up control plane ...
I0721 22:16:53.804641    4021 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0721 22:16:53.804702    4021 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0721 22:16:53.804758    4021 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0721 22:16:53.804823    4021 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0721 22:16:53.816833    4021 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0721 22:17:06.821922    4021 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.005089 seconds
I0721 22:17:06.822045    4021 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0721 22:17:06.934176    4021 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0721 22:17:07.541853    4021 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0721 22:17:07.542044    4021 kubeadm.go:322] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0721 22:17:08.067626    4021 kubeadm.go:322] [bootstrap-token] Using token: qvzu8j.b3ozsf3cu2vyl7na
I0721 22:17:08.089848    4021 out.go:204]     ▪ Configuring RBAC rules ...
I0721 22:17:08.098898    4021 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0721 22:17:08.098996    4021 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0721 22:17:08.132463    4021 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0721 22:17:08.142989    4021 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0721 22:17:08.153450    4021 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0721 22:17:08.166551    4021 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0721 22:17:08.216406    4021 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0721 22:17:08.845521    4021 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0721 22:17:09.026065    4021 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0721 22:17:09.034301    4021 kubeadm.go:322]
I0721 22:17:09.034372    4021 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0721 22:17:09.034379    4021 kubeadm.go:322]
I0721 22:17:09.034462    4021 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0721 22:17:09.034466    4021 kubeadm.go:322]
I0721 22:17:09.034501    4021 kubeadm.go:322] mkdir -p $HOME/.kube
I0721 22:17:09.035180    4021 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0721 22:17:09.035237    4021 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0721 22:17:09.035241    4021 kubeadm.go:322]
I0721 22:17:09.035296    4021 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0721 22:17:09.035301    4021 kubeadm.go:322]
I0721 22:17:09.035351    4021 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0721 22:17:09.035356    4021 kubeadm.go:322]
I0721 22:17:09.035411    4021 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0721 22:17:09.035493    4021 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0721 22:17:09.035568    4021 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0721 22:17:09.035573    4021 kubeadm.go:322]
I0721 22:17:09.036675    4021 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0721 22:17:09.036779    4021 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0721 22:17:09.036785    4021 kubeadm.go:322]
I0721 22:17:09.037619    4021 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qvzu8j.b3ozsf3cu2vyl7na \
I0721 22:17:09.037748    4021 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:3a0a9b10401ab69f2cf979031cafad2db921386fdb2f8b6314eafdf5d9e835e5 \
I0721 22:17:09.038147    4021 kubeadm.go:322] --control-plane
I0721 22:17:09.038155    4021 kubeadm.go:322]
I0721 22:17:09.038471    4021 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0721 22:17:09.038478    4021 kubeadm.go:322]
I0721 22:17:09.038902    4021 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qvzu8j.b3ozsf3cu2vyl7na \
I0721 22:17:09.039443    4021 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:3a0a9b10401ab69f2cf979031cafad2db921386fdb2f8b6314eafdf5d9e835e5
I0721 22:17:09.046073    4021 kubeadm.go:322] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-azure\n", err: exit status 1
I0721 22:17:09.046251    4021 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0721 22:17:09.046276    4021 cni.go:84] Creating CNI manager for ""
I0721 22:17:09.046314    4021 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0721 22:17:09.082478    4021 out.go:177] 🔗 Configuring bridge CNI (Container Networking Interface) ...
I0721 22:17:09.100744    4021 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0721 22:17:09.197957    4021 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0721 22:17:09.290849    4021 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0721 22:17:09.298199    4021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0721 22:17:09.315240    4021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=fd3f3801765d093a485d255043149f92ec0a695f minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2023_07_21T22_17_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0721 22:17:10.746228    4021 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.4479915s)
I0721 22:17:10.746260    4021 kubeadm.go:1081] duration metric: took 1.4822211s to wait for elevateKubeSystemPrivileges.
I0721 22:17:10.746278    4021 ssh_runner.go:235] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (1.4554143s)
I0721 22:17:10.746286    4021 ops.go:34] apiserver oom_adj: -16
I0721 22:17:10.787697    4021 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=fd3f3801765d093a485d255043149f92ec0a695f minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2023_07_21T22_17_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.472422s)
I0721 22:17:10.787751    4021 kubeadm.go:406] StartCluster complete in 23.0993906s
I0721 22:17:10.790903    4021 settings.go:142] acquiring lock: {Name:mkce455b6b0fbce0e47966323246dc2fce71854c Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 22:17:10.791049    4021 settings.go:150] Updating kubeconfig: /home/codespace/.kube/config
I0721 22:17:10.828684    4021 lock.go:35] WriteFile acquiring /home/codespace/.kube/config: {Name:mkd8daf22afb9156d5f931edc479a445e5e99cb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 22:17:10.854213    4021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0721 22:17:11.027509    4021 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0721 22:17:11.027554    4021 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0721 22:17:11.079482    4021 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0721 22:17:11.088467    4021 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0721 22:17:11.194052    4021 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0721 22:17:11.194089    4021 addons.go:231] Setting addon storage-provisioner=true in "minikube"
I0721 22:17:11.249977    4021 host.go:66] Checking if "minikube" exists ...
I0721 22:17:11.322605    4021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0721 22:17:11.323797    4021 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0721 22:17:11.335076    4021 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0721 22:17:11.669304    4021 out.go:177]     ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0721 22:17:11.709800    4021 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0721 22:17:11.709814    4021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0721 22:17:11.721773    4021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0721 22:17:11.807582    4021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/codespace/.minikube/machines/minikube/id_rsa Username:docker}
I0721 22:17:12.030266    4021 addons.go:231] Setting addon default-storageclass=true in "minikube"
I0721 22:17:12.030312    4021 host.go:66] Checking if "minikube" exists ...
I0721 22:17:12.032141    4021 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0721 22:17:12.133736    4021 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I0721 22:17:12.133751    4021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0721 22:17:12.134034    4021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0721 22:17:12.191961    4021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/codespace/.minikube/machines/minikube/id_rsa Username:docker}
I0721 22:17:12.213282    4021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0721 22:17:12.440159    4021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0721 22:17:12.672432    4021 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0721 22:17:12.672473    4021 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0721 22:17:12.713130    4021 out.go:177] 🔎 Verifying Kubernetes components...
I0721 22:17:12.754321    4021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0721 22:17:12.984976    4021 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.6623374s)
I0721 22:17:12.985298    4021 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0721 22:17:13.396101    4021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.182792s)
I0721 22:17:13.420875    4021 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0721 22:17:13.462942    4021 addons.go:502] enable addons completed in 2.6045772s: enabled=[storage-provisioner default-storageclass]
I0721 22:17:13.477583    4021 api_server.go:52] waiting for apiserver process to appear ...
I0721 22:17:13.477825    4021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0721 22:17:13.495330    4021 api_server.go:72] duration metric: took 822.8166ms to wait for apiserver process to appear ...
I0721 22:17:13.495355    4021 api_server.go:88] waiting for apiserver healthz status ...
I0721 22:17:13.495374    4021 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0721 22:17:13.525587    4021 api_server.go:279] https://192.168.49.2:8443/healthz returned 200: ok
I0721 22:17:13.527571    4021 api_server.go:141] control plane version: v1.27.3
I0721 22:17:13.527591    4021 api_server.go:131] duration metric: took 32.2301ms to wait for apiserver health ...
I0721 22:17:13.527601    4021 system_pods.go:43] waiting for kube-system pods to appear ...
I0721 22:17:13.684192    4021 system_pods.go:59] 5 kube-system pods found
I0721 22:17:13.684226    4021 system_pods.go:61] "etcd-minikube" [5595d6f3-7432-49ff-9ac3-44c80504174b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0721 22:17:13.684235    4021 system_pods.go:61] "kube-apiserver-minikube" [0a42f64b-98e5-4dc5-9408-a807df4f375d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0721 22:17:13.684243    4021 system_pods.go:61] "kube-controller-manager-minikube" [315703a2-7748-48d9-a42c-5308cf3690ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0721 22:17:13.684247    4021 system_pods.go:61] "kube-scheduler-minikube" [fc7bc46a-c4d5-491e-a467-2b045884ec7c] Running
I0721 22:17:13.684252    4021 system_pods.go:61] "storage-provisioner" [937b90ed-9c7c-4e2d-a8d7-0aeba5d710ee] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
I0721 22:17:13.684261    4021 system_pods.go:74] duration metric: took 156.6547ms to wait for pod list to return data ...
I0721 22:17:13.684271    4021 kubeadm.go:581] duration metric: took 1.0117684s to wait for : map[apiserver:true system_pods:true] ...
I0721 22:17:13.684284    4021 node_conditions.go:102] verifying NodePressure condition ...
I0721 22:17:13.697494    4021 node_conditions.go:122] node storage ephemeral capacity is 32847680Ki
I0721 22:17:13.697552    4021 node_conditions.go:123] node cpu capacity is 2
I0721 22:17:13.697565    4021 node_conditions.go:105] duration metric: took 13.2765ms to run NodePressure ...
I0721 22:17:13.697579    4021 start.go:228] waiting for startup goroutines ...
I0721 22:17:13.697587    4021 start.go:233] waiting for cluster config update ...
I0721 22:17:13.697600    4021 start.go:242] writing updated cluster config ...
I0721 22:17:13.698393    4021 ssh_runner.go:195] Run: rm -f paused
I0721 22:17:13.769594    4021 start.go:596] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
I0721 22:17:13.786327    4021 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

* 
* ==> Docker <==
* 
Jul 21 22:16:37 minikube dockerd[552]: time="2023-07-21T22:16:37.215207500Z" level=info msg="API listen on /var/run/docker.sock"
Jul 21 22:16:37 minikube dockerd[552]: time="2023-07-21T22:16:37.215334100Z" level=info msg="API listen on [::]:2376"
Jul 21 22:16:37 minikube systemd[1]: Started Docker Application Container Engine.
Jul 21 22:16:38 minikube systemd[1]: Stopping Docker Application Container Engine...
Jul 21 22:16:38 minikube dockerd[552]: time="2023-07-21T22:16:38.958943200Z" level=info msg="Processing signal 'terminated'"
Jul 21 22:16:38 minikube dockerd[552]: time="2023-07-21T22:16:38.960295800Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Jul 21 22:16:38 minikube dockerd[552]: time="2023-07-21T22:16:38.960403700Z" level=info msg="Daemon shutdown complete"
Jul 21 22:16:38 minikube systemd[1]: docker.service: Deactivated successfully.
Jul 21 22:16:38 minikube systemd[1]: Stopped Docker Application Container Engine.
Jul 21 22:16:39 minikube systemd[1]: Starting Docker Application Container Engine...
Jul 21 22:16:39 minikube dockerd[781]: time="2023-07-21T22:16:39.309402300Z" level=info msg="Starting up"
Jul 21 22:16:39 minikube dockerd[781]: time="2023-07-21T22:16:39.415539000Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jul 21 22:16:39 minikube dockerd[781]: time="2023-07-21T22:16:39.429305800Z" level=info msg="Loading containers: start."
Jul 21 22:16:39 minikube dockerd[781]: time="2023-07-21T22:16:39.652534200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 21 22:16:39 minikube dockerd[781]: time="2023-07-21T22:16:39.773081800Z" level=info msg="Loading containers: done."
Jul 21 22:16:39 minikube dockerd[781]: time="2023-07-21T22:16:39.829536300Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 21 22:16:39 minikube dockerd[781]: time="2023-07-21T22:16:39.829830500Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
Jul 21 22:16:39 minikube dockerd[781]: time="2023-07-21T22:16:39.829900500Z" level=info msg="Daemon has completed initialization"
Jul 21 22:16:39 minikube dockerd[781]: time="2023-07-21T22:16:39.896608800Z" level=info msg="API listen on /var/run/docker.sock"
Jul 21 22:16:39 minikube dockerd[781]: time="2023-07-21T22:16:39.896719000Z" level=info msg="API listen on [::]:2376"
Jul 21 22:16:39 minikube systemd[1]: Started Docker Application Container Engine.
Jul 21 22:16:39 minikube systemd[1]: Stopping Docker Application Container Engine...
Jul 21 22:16:39 minikube dockerd[781]: time="2023-07-21T22:16:39.928216700Z" level=info msg="Processing signal 'terminated'"
Jul 21 22:16:39 minikube dockerd[781]: time="2023-07-21T22:16:39.929258200Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Jul 21 22:16:39 minikube dockerd[781]: time="2023-07-21T22:16:39.929794400Z" level=info msg="Daemon shutdown complete"
Jul 21 22:16:39 minikube systemd[1]: docker.service: Deactivated successfully.
Jul 21 22:16:39 minikube systemd[1]: Stopped Docker Application Container Engine.
Jul 21 22:16:39 minikube systemd[1]: Starting Docker Application Container Engine...
Jul 21 22:16:40 minikube dockerd[975]: time="2023-07-21T22:16:40.007481900Z" level=info msg="Starting up"
Jul 21 22:16:40 minikube dockerd[975]: time="2023-07-21T22:16:40.044626500Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jul 21 22:16:40 minikube dockerd[975]: time="2023-07-21T22:16:40.083642400Z" level=info msg="Loading containers: start."
Jul 21 22:16:40 minikube dockerd[975]: time="2023-07-21T22:16:40.242389800Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 21 22:16:40 minikube dockerd[975]: time="2023-07-21T22:16:40.328631300Z" level=info msg="Loading containers: done."
Jul 21 22:16:40 minikube dockerd[975]: time="2023-07-21T22:16:40.352063300Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 21 22:16:40 minikube dockerd[975]: time="2023-07-21T22:16:40.352386100Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
Jul 21 22:16:40 minikube dockerd[975]: time="2023-07-21T22:16:40.352457100Z" level=info msg="Daemon has completed initialization"
Jul 21 22:16:40 minikube dockerd[975]: time="2023-07-21T22:16:40.419246200Z" level=info msg="API listen on /var/run/docker.sock"
Jul 21 22:16:40 minikube dockerd[975]: time="2023-07-21T22:16:40.419255400Z" level=info msg="API listen on [::]:2376"
Jul 21 22:16:40 minikube systemd[1]: Started Docker Application Container Engine.
Jul 21 22:16:41 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
Jul 21 22:16:42 minikube cri-dockerd[1184]: time="2023-07-21T22:16:42Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
Jul 21 22:16:42 minikube cri-dockerd[1184]: time="2023-07-21T22:16:42Z" level=info msg="Start docker client with request timeout 0s"
Jul 21 22:16:42 minikube cri-dockerd[1184]: time="2023-07-21T22:16:42Z" level=info msg="Hairpin mode is set to hairpin-veth"
Jul 21 22:16:42 minikube cri-dockerd[1184]: time="2023-07-21T22:16:42Z" level=info msg="Loaded network plugin cni"
Jul 21 22:16:42 minikube cri-dockerd[1184]: time="2023-07-21T22:16:42Z" level=info msg="Docker cri networking managed by network plugin cni"
Jul 21 22:16:42 minikube cri-dockerd[1184]: time="2023-07-21T22:16:42Z" level=info msg="Docker Info: &{ID:23af3c3a-2a0c-4d1d-8bf8-9576dfcf515d Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay 
Diff false] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:false NGoroutines:36 SystemTime:2023-07-21T22:16:42.3045064Z LoggingDriver:json-file CgroupDriver:systemd CgroupVersion:2 NEventsListener:0 KernelVersion:5.15.0-1041-azure OperatingSystem:Ubuntu 22.04.2 LTS OSVersion:22.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00020c380 NCPU:2 MemTotal:4104077312 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:minikube Labels:[provider=docker] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:} runc:{Path:runc Args:[] Shim:}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: DefaultAddressPools:[] Warnings:[]}" Jul 21 22:16:42 minikube cri-dockerd[1184]: time="2023-07-21T22:16:42Z" level=info msg="Setting cgroupDriver systemd" Jul 21 22:16:42 minikube cri-dockerd[1184]: time="2023-07-21T22:16:42Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}" 
Jul 21 22:16:42 minikube cri-dockerd[1184]: time="2023-07-21T22:16:42Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Jul 21 22:16:42 minikube cri-dockerd[1184]: time="2023-07-21T22:16:42Z" level=info msg="Start cri-dockerd grpc backend"
Jul 21 22:16:42 minikube systemd[1]: Started CRI Interface for Docker Application Container Engine.
Jul 21 22:16:56 minikube cri-dockerd[1184]: time="2023-07-21T22:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/670c942b50a467177d8a844377c57bb301a03dfecd5c797fafb2f37b3b1c104c/resolv.conf as [nameserver 192.168.49.1 search bkiv3m0tgl2ulkpevy4u3z1qkb.cx.internal.cloudapp.net options timeout:1 attempts:5 ndots:0]"
Jul 21 22:16:56 minikube cri-dockerd[1184]: time="2023-07-21T22:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/651b32f2f8ca87939463c005ff7436d3304e523ce20a7d859c73f48642e8ea00/resolv.conf as [nameserver 192.168.49.1 search bkiv3m0tgl2ulkpevy4u3z1qkb.cx.internal.cloudapp.net options timeout:1 attempts:5 ndots:0]"
Jul 21 22:16:56 minikube cri-dockerd[1184]: time="2023-07-21T22:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67c0a043f75b7735f0ed09d648cfc376a2fb2122f0e0852cbd3cf14d91e5a97f/resolv.conf as [nameserver 192.168.49.1 search bkiv3m0tgl2ulkpevy4u3z1qkb.cx.internal.cloudapp.net options timeout:1 attempts:5 ndots:0]"
Jul 21 22:16:56 minikube cri-dockerd[1184]: time="2023-07-21T22:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98b767ca78330ecf3f15fc4df7f2c19ac87233e65e934fa7169f1ab8320dd3a6/resolv.conf as [nameserver 192.168.49.1 search bkiv3m0tgl2ulkpevy4u3z1qkb.cx.internal.cloudapp.net options timeout:1 attempts:5 ndots:0]"
Jul 21 22:17:21 minikube cri-dockerd[1184]: time="2023-07-21T22:17:21Z" level=info msg="Will attempt to re-write config file 
/var/lib/docker/containers/e3914f8cabc71b8d094a73324ccb626682b6a255edd1449bca7e08f376686193/resolv.conf as [nameserver 192.168.49.1 search bkiv3m0tgl2ulkpevy4u3z1qkb.cx.internal.cloudapp.net options timeout:1 attempts:5 ndots:0]"
Jul 21 22:17:22 minikube cri-dockerd[1184]: time="2023-07-21T22:17:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc1bc5bf2797c3fd2184f0503d46fad2bb6011ef2414df18e8100a500d317d0a/resolv.conf as [nameserver 192.168.49.1 search bkiv3m0tgl2ulkpevy4u3z1qkb.cx.internal.cloudapp.net options attempts:5 ndots:0 timeout:1]"
Jul 21 22:17:22 minikube cri-dockerd[1184]: time="2023-07-21T22:17:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f3ed0749caea968a662a1aa2d70f370a1b2849960434893f52ec0b0d6ac81fdf/resolv.conf as [nameserver 192.168.49.1 search bkiv3m0tgl2ulkpevy4u3z1qkb.cx.internal.cloudapp.net options timeout:1 attempts:5 ndots:0]"
Jul 21 22:17:29 minikube cri-dockerd[1184]: time="2023-07-21T22:17:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Jul 21 22:17:53 minikube dockerd[975]: time="2023-07-21T22:17:53.216624200Z" level=info msg="ignoring event" container=ecf6b43ff2a64dbaf7b40dac77a76ff9cd21a91f0159163eff2b23da9af0fe56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* 
* ==> container status <==
* 
CONTAINER       IMAGE           CREATED              STATE     NAME                      ATTEMPT   POD ID          POD
712a9d1e73b98   6e38f40d628db   About a minute ago   Running   storage-provisioner       1         e3914f8cabc71   storage-provisioner
e95d7d0585b23   ead0a4a53df89   About a minute ago   Running   coredns                   0         f3ed0749caea9   coredns-5d78c9869d-q4gsx
02cd8be79da31   5780543258cf0   About a minute ago   Running   kube-proxy                0         bc1bc5bf2797c   kube-proxy-sncvt
ecf6b43ff2a64   6e38f40d628db   About a minute ago   Exited    storage-provisioner       0         e3914f8cabc71   storage-provisioner
1786b0f9573c3   08a0c939e61b7   2 minutes ago        Running   kube-apiserver            0         98b767ca78330   kube-apiserver-minikube
55c32d4a23733   7cffc01dba0e1   2 minutes ago        Running   kube-controller-manager   0         67c0a043f75b7   kube-controller-manager-minikube
ff2a5e3bff68a   41697ceeb70b3   2 minutes ago        Running   kube-scheduler            0         651b32f2f8ca8   kube-scheduler-minikube
1f222cc84c747   86b6af7dd652c   2 minutes ago        Running   etcd                      0         670c942b50a46   etcd-minikube
* 
* ==> coredns [e95d7d0585b2] <==
* 
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:38750 - 9979 "HINFO IN 3592794329320471765.5569314500437203831. 
udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.1303624s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
* 
* ==> describe nodes <==
* 
Name:               minikube
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=fd3f3801765d093a485d255043149f92ec0a695f
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2023_07_21T22_17_09_0700
                    minikube.k8s.io/version=v1.31.1
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 21 Jul 2023 22:17:03 +0000
Taints:             
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     
  RenewTime:       Fri, 21 Jul 2023 22:18:50 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 21 Jul 2023 22:17:29 +0000   Fri, 21 Jul 2023 22:17:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 21 Jul 2023 22:17:29 +0000   Fri, 21 Jul 2023 22:17:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 21 Jul 2023 22:17:29 +0000   Fri, 21 Jul 2023 22:17:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 21 Jul 2023 22:17:29 +0000   Fri, 21 Jul 2023 22:17:03 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                2
  ephemeral-storage:  32847680Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4007888Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  32847680Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4007888Ki
  pods:               110
System Info:
  Machine ID:                 4929c2ba39ea45148d501440b7fc5092
  System UUID:                ed4b1139-e204-4667-b141-2bc3ff2c2a46
  Boot ID:                    ccbfe056-0fdf-4d0a-94d8-9d193b76f9e8
  Kernel Version:             5.15.0-1041-azure
  OS Image:                   Ubuntu 22.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://24.0.4
  Kubelet Version:            v1.27.3
  Kube-Proxy Version:         v1.27.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace    Name                              CPU Requests          CPU Limits         Memory Requests       Memory Limits          Age
  ---------    ----                              ------------          ----------         ---------------       -------------          ---
  kube-system  coredns-5d78c9869d-q4gsx          100m (5%!)(MISSING)   0 (0%!)(MISSING)   70Mi (1%!)(MISSING)   170Mi (4%!)(MISSING)   100s
  kube-system  etcd-minikube                     100m (5%!)(MISSING)   0 (0%!)(MISSING)   100Mi (2%!)(MISSING)  0 (0%!)(MISSING)       111s
  kube-system  kube-apiserver-minikube           250m (12%!)(MISSING)  0 (0%!)(MISSING)   0 (0%!)(MISSING)      0 (0%!)(MISSING)       111s
  kube-system  kube-controller-manager-minikube  200m (10%!)(MISSING)  0 (0%!)(MISSING)   0 (0%!)(MISSING)      0 (0%!)(MISSING)       114s
  kube-system  kube-proxy-sncvt                  0 (0%!)(MISSING)      0 (0%!)(MISSING)   0 (0%!)(MISSING)      0 (0%!)(MISSING)       100s
  kube-system  kube-scheduler-minikube           100m (5%!)(MISSING)   0 (0%!)(MISSING)   0 (0%!)(MISSING)      0 (0%!)(MISSING)       115s
  kube-system  storage-provisioner               0 (0%!)(MISSING)      0 (0%!)(MISSING)   0 (0%!)(MISSING)      0 (0%!)(MISSING)       107s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests              Limits
  --------           --------              ------
  cpu                750m (37%!)(MISSING)  0 (0%!)(MISSING)
  memory             170Mi (4%!)(MISSING)  170Mi (4%!)(MISSING)
  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
Events:
  Type    Reason                   Age                  From             Message
  ----    ------                   ----                 ----             -------
  Normal  Starting                 95s                  kube-proxy       
  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m6s (x7 over 2m6s)  kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
  Normal  Starting                 112s                 kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  111s                 kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     111s                 kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  RegisteredNode           100s                 node-controller  Node minikube event: Registered Node minikube in Controller
* 
* ==> dmesg <==
* 
[ +0.896042] blk_update_request: operation not supported error, dev loop4, sector 50356480 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0
[ +0.895931] blk_update_request: operation not supported error, dev loop4, sector 50360576 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0
[ +0.896080] blk_update_request: operation not supported error, dev loop4, sector 50364672 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0
[ +0.895997] blk_update_request: operation not supported error, dev loop4, sector 50368768 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0
[ +0.895960] blk_update_request: operation not supported error, dev loop4, sector 50372864 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0
[ +0.896039] blk_update_request: operation not supported error, dev loop4, sector 50376960 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0
[ +0.895950] blk_update_request: operation not supported error, dev loop4, sector 50381056 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0
[ 
+0.896039] blk_update_request: operation not supported error, dev loop4, sector 50385152 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [Jul21 21:58] blk_update_request: operation not supported error, dev loop4, sector 50389248 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895980] blk_update_request: operation not supported error, dev loop4, sector 50393344 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895938] blk_update_request: operation not supported error, dev loop4, sector 54526216 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895996] blk_update_request: operation not supported error, dev loop4, sector 54530304 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896060] blk_update_request: operation not supported error, dev loop4, sector 54534400 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +1.055901] blk_update_request: operation not supported error, dev loop4, sector 54538496 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896024] blk_update_request: operation not supported error, dev loop4, sector 54542592 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895961] blk_update_request: operation not supported error, dev loop4, sector 54546688 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896090] blk_update_request: operation not supported error, dev loop4, sector 54550784 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895913] blk_update_request: operation not supported error, dev loop4, sector 54554880 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896088] blk_update_request: operation not supported error, dev loop4, sector 54558976 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896035] blk_update_request: operation not supported error, dev loop4, sector 54563072 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895935] blk_update_request: operation not supported error, 
dev loop4, sector 54567168 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896061] blk_update_request: operation not supported error, dev loop4, sector 54571264 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895969] blk_update_request: operation not supported error, dev loop4, sector 54575360 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895962] blk_update_request: operation not supported error, dev loop4, sector 54579456 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896028] blk_update_request: operation not supported error, dev loop4, sector 54583552 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895984] blk_update_request: operation not supported error, dev loop4, sector 54587648 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896037] blk_update_request: operation not supported error, dev loop4, sector 58720512 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896118] blk_update_request: operation not supported error, dev loop4, sector 58724608 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895902] blk_update_request: operation not supported error, dev loop4, sector 58728704 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895993] blk_update_request: operation not supported error, dev loop4, sector 58732800 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895970] blk_update_request: operation not supported error, dev loop4, sector 58736896 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896021] blk_update_request: operation not supported error, dev loop4, sector 58740992 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895951] blk_update_request: operation not supported error, dev loop4, sector 58745088 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896067] blk_update_request: operation not supported error, dev loop4, sector 58749184 op 0x9:(WRITE_ZEROES) flags 0x800 
phys_seg 0 prio class 0 [ +0.895991] blk_update_request: operation not supported error, dev loop4, sector 58753280 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895981] blk_update_request: operation not supported error, dev loop4, sector 58757376 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896072] blk_update_request: operation not supported error, dev loop4, sector 58761472 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895939] blk_update_request: operation not supported error, dev loop4, sector 58765568 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895894] blk_update_request: operation not supported error, dev loop4, sector 58769664 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896072] blk_update_request: operation not supported error, dev loop4, sector 58773760 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896016] blk_update_request: operation not supported error, dev loop4, sector 58777856 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895948] blk_update_request: operation not supported error, dev loop4, sector 58781952 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896093] blk_update_request: operation not supported error, dev loop4, sector 62914816 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895970] blk_update_request: operation not supported error, dev loop4, sector 62918912 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895991] blk_update_request: operation not supported error, dev loop4, sector 62923008 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895995] blk_update_request: operation not supported error, dev loop4, sector 62927104 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895939] blk_update_request: operation not supported error, dev loop4, sector 62931200 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896066] blk_update_request: 
operation not supported error, dev loop4, sector 62935296 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896027] blk_update_request: operation not supported error, dev loop4, sector 62939392 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895979] blk_update_request: operation not supported error, dev loop4, sector 62943488 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896035] blk_update_request: operation not supported error, dev loop4, sector 62947584 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895989] blk_update_request: operation not supported error, dev loop4, sector 62951680 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895964] blk_update_request: operation not supported error, dev loop4, sector 62955776 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896003] blk_update_request: operation not supported error, dev loop4, sector 62959872 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895969] blk_update_request: operation not supported error, dev loop4, sector 62963968 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896058] blk_update_request: operation not supported error, dev loop4, sector 62968064 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.895990] blk_update_request: operation not supported error, dev loop4, sector 62972160 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [ +0.896024] blk_update_request: operation not supported error, dev loop4, sector 62976256 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0 [Jul21 22:18] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.000132] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
* 
* ==> etcd [1f222cc84c74] <==
* 
{"level":"info","ts":"2023-07-21T22:16:57.343Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2023-07-21T22:16:57.343Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2023-07-21T22:16:57.344Z","caller":"embed/etcd.go:484","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-07-21T22:16:57.344Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]}
{"level":"info","ts":"2023-07-21T22:16:57.346Z","caller":"embed/etcd.go:306","msg":"starting an etcd 
server","etcd-version":"3.5.7","git-sha":"215b53cf3","go-version":"go1.17.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2023-07-21T22:16:57.378Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"29.4011ms"} {"level":"info","ts":"2023-07-21T22:16:57.428Z","caller":"etcdserver/raft.go:494","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"} {"level":"info","ts":"2023-07-21T22:16:57.429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"} 
{"level":"info","ts":"2023-07-21T22:16:57.429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"} {"level":"info","ts":"2023-07-21T22:16:57.430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2023-07-21T22:16:57.430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"} {"level":"info","ts":"2023-07-21T22:16:57.430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} {"level":"warn","ts":"2023-07-21T22:16:57.461Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2023-07-21T22:16:57.508Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2023-07-21T22:16:57.537Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2023-07-21T22:16:57.555Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.7","cluster-version":"to_be_decided"} {"level":"info","ts":"2023-07-21T22:16:57.557Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2023-07-21T22:16:57.557Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} {"level":"info","ts":"2023-07-21T22:16:57.557Z","caller":"fileutil/purge.go:44","msg":"started to purge 
file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} {"level":"info","ts":"2023-07-21T22:16:57.557Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} {"level":"info","ts":"2023-07-21T22:16:57.559Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2023-07-21T22:16:57.560Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2023-07-21T22:16:57.560Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2023-07-21T22:16:57.561Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"} {"level":"info","ts":"2023-07-21T22:16:57.561Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"} {"level":"info","ts":"2023-07-21T22:16:57.567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} {"level":"info","ts":"2023-07-21T22:16:57.568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]} 
{"level":"info","ts":"2023-07-21T22:16:57.831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"} {"level":"info","ts":"2023-07-21T22:16:57.831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"} {"level":"info","ts":"2023-07-21T22:16:57.831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"} {"level":"info","ts":"2023-07-21T22:16:57.831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"} {"level":"info","ts":"2023-07-21T22:16:57.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"} {"level":"info","ts":"2023-07-21T22:16:57.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"} {"level":"info","ts":"2023-07-21T22:16:57.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"} {"level":"info","ts":"2023-07-21T22:16:57.870Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2023-07-21T22:16:57.885Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"} {"level":"info","ts":"2023-07-21T22:16:57.888Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"} {"level":"info","ts":"2023-07-21T22:16:57.895Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"} 
{"level":"info","ts":"2023-07-21T22:16:57.889Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"} {"level":"info","ts":"2023-07-21T22:16:57.897Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"info","ts":"2023-07-21T22:16:57.903Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} {"level":"info","ts":"2023-07-21T22:16:57.903Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} {"level":"info","ts":"2023-07-21T22:16:57.903Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"} {"level":"info","ts":"2023-07-21T22:16:57.903Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2023-07-21T22:16:57.903Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"} * * ==> kernel <== * 22:19:00 up 1:38, 0 users, load average: 1.32, 1.91, 0.97 Linux minikube 5.15.0-1041-azure #48-Ubuntu SMP Tue Jun 20 20:34:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 22.04.2 LTS" * * ==> kube-apiserver [1786b0f9573c] <== * I0721 22:17:02.729321 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I0721 22:17:02.729350 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0721 22:17:02.729392 1 controller.go:80] Starting OpenAPI V3 AggregationController I0721 22:17:02.729683 1 customresource_discovery_controller.go:289] Starting DiscoveryController I0721 22:17:02.731849 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0721 22:17:02.744460 1 apf_controller.go:361] Starting API Priority and Fairness config controller I0721 22:17:02.744525 1 dynamic_serving_content.go:132] "Starting controller" 
name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key" I0721 22:17:02.758605 1 system_namespaces_controller.go:67] Starting system namespaces controller I0721 22:17:02.759446 1 gc_controller.go:78] Starting apiserver lease garbage collector I0721 22:17:02.759562 1 aggregator.go:150] waiting for initial CRD sync... I0721 22:17:02.760541 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0721 22:17:02.760556 1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller I0721 22:17:02.760768 1 gc_controller.go:78] Starting apiserver lease garbage collector I0721 22:17:02.760905 1 controller.go:83] Starting OpenAPI AggregationController I0721 22:17:02.760967 1 handler_discovery.go:392] Starting ResourceDiscoveryManager I0721 22:17:02.761316 1 available_controller.go:423] Starting AvailableConditionController I0721 22:17:02.761332 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0721 22:17:02.761422 1 controller.go:85] Starting OpenAPI controller I0721 22:17:02.761481 1 controller.go:85] Starting OpenAPI V3 controller I0721 22:17:02.761749 1 naming_controller.go:291] Starting NamingConditionController I0721 22:17:02.761817 1 establishing_controller.go:76] Starting EstablishingController I0721 22:17:02.761836 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0721 22:17:02.764931 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0721 22:17:02.767330 1 crd_finalizer.go:266] Starting CRDFinalizer I0721 22:17:02.767392 1 controller.go:121] Starting legacy_token_tracking_controller I0721 22:17:02.767404 1 shared_informer.go:311] Waiting for caches to sync for configmaps I0721 22:17:02.808898 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0721 22:17:02.808926 1 
shared_informer.go:311] Waiting for caches to sync for crd-autoregister I0721 22:17:02.809181 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0721 22:17:02.809368 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0721 22:17:03.029403 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0721 22:17:03.044548 1 apf_controller.go:366] Running API Priority and Fairness config worker I0721 22:17:03.045508 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process I0721 22:17:03.077245 1 shared_informer.go:318] Caches are synced for node_authorizer I0721 22:17:03.077244 1 shared_informer.go:318] Caches are synced for configmaps I0721 22:17:03.079448 1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller I0721 22:17:03.079537 1 cache.go:39] Caches are synced for AvailableConditionController controller I0721 22:17:03.102222 1 controller.go:624] quota admission added evaluator for: namespaces I0721 22:17:03.109110 1 shared_informer.go:318] Caches are synced for crd-autoregister I0721 22:17:03.110914 1 aggregator.go:152] initial CRD sync complete... I0721 22:17:03.111346 1 autoregister_controller.go:141] Starting autoregister controller I0721 22:17:03.111506 1 cache.go:32] Waiting for caches to sync for autoregister controller I0721 22:17:03.111597 1 cache.go:39] Caches are synced for autoregister controller I0721 22:17:03.181492 1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io I0721 22:17:03.367407 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). 
I0721 22:17:03.811836 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000 I0721 22:17:03.852320 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000 I0721 22:17:03.852344 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist. I0721 22:17:06.431476 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0721 22:17:06.563878 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0721 22:17:06.720837 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0721 22:17:06.770431 1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.49.2] I0721 22:17:06.771811 1 controller.go:624] quota admission added evaluator for: endpoints I0721 22:17:06.791589 1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io I0721 22:17:07.069477 1 controller.go:624] quota admission added evaluator for: serviceaccounts I0721 22:17:08.753013 1 controller.go:624] quota admission added evaluator for: deployments.apps I0721 22:17:08.843410 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0721 22:17:08.873387 1 controller.go:624] quota admission added evaluator for: daemonsets.apps I0721 22:17:20.435311 1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps I0721 22:17:20.730114 1 controller.go:624] quota admission added evaluator for: replicasets.apps * * ==> kube-controller-manager [55c32d4a2373] <== * I0721 22:17:19.585280 1 pv_protection_controller.go:78] "Starting PV protection controller" I0721 22:17:19.585297 1 shared_informer.go:311] Waiting for caches to sync for PV protection I0721 22:17:19.734520 1 controllermanager.go:638] "Started controller" controller="replicaset" I0721 22:17:19.734851 1 
replica_set.go:201] "Starting controller" name="replicaset" I0721 22:17:19.734934 1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet I0721 22:17:19.770887 1 shared_informer.go:311] Waiting for caches to sync for resource quota I0721 22:17:19.896436 1 shared_informer.go:318] Caches are synced for PV protection I0721 22:17:19.923930 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving I0721 22:17:19.924384 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client I0721 22:17:19.926607 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client I0721 22:17:19.933126 1 shared_informer.go:318] Caches are synced for certificate-csrapproving I0721 22:17:19.936746 1 shared_informer.go:311] Waiting for caches to sync for garbage collector I0721 22:17:19.937061 1 shared_informer.go:318] Caches are synced for expand I0721 22:17:19.937279 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown I0721 22:17:19.941760 1 shared_informer.go:318] Caches are synced for ReplicaSet I0721 22:17:19.978860 1 shared_informer.go:318] Caches are synced for endpoint_slice I0721 22:17:19.979371 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"minikube\" does not exist" I0721 22:17:19.979640 1 shared_informer.go:318] Caches are synced for TTL after finished I0721 22:17:19.979751 1 shared_informer.go:318] Caches are synced for node I0721 22:17:19.979860 1 range_allocator.go:174] "Sending events to api server" I0721 22:17:19.979948 1 range_allocator.go:178] "Starting range CIDR allocator" I0721 22:17:19.979963 1 shared_informer.go:311] Waiting for caches to sync for cidrallocator I0721 22:17:19.980010 1 shared_informer.go:318] Caches are synced for cidrallocator I0721 22:17:19.980387 1 shared_informer.go:318] Caches are synced for GC 
I0721 22:17:19.980490 1 shared_informer.go:318] Caches are synced for HPA I0721 22:17:19.989097 1 shared_informer.go:318] Caches are synced for daemon sets I0721 22:17:20.001843 1 shared_informer.go:318] Caches are synced for endpoint I0721 22:17:20.001916 1 shared_informer.go:318] Caches are synced for crt configmap I0721 22:17:20.010156 1 shared_informer.go:318] Caches are synced for cronjob I0721 22:17:20.010363 1 shared_informer.go:318] Caches are synced for stateful set I0721 22:17:20.010449 1 shared_informer.go:318] Caches are synced for job I0721 22:17:20.010532 1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring I0721 22:17:20.010764 1 shared_informer.go:318] Caches are synced for PVC protection I0721 22:17:20.010804 1 shared_informer.go:318] Caches are synced for namespace I0721 22:17:20.025122 1 shared_informer.go:318] Caches are synced for persistent volume I0721 22:17:20.025457 1 shared_informer.go:318] Caches are synced for service account I0721 22:17:20.025872 1 shared_informer.go:318] Caches are synced for TTL I0721 22:17:20.044860 1 shared_informer.go:318] Caches are synced for disruption I0721 22:17:20.044942 1 shared_informer.go:318] Caches are synced for resource quota I0721 22:17:20.045026 1 shared_informer.go:318] Caches are synced for attach detach I0721 22:17:20.060604 1 shared_informer.go:318] Caches are synced for ephemeral I0721 22:17:20.060674 1 shared_informer.go:318] Caches are synced for ReplicationController I0721 22:17:20.077209 1 shared_informer.go:318] Caches are synced for bootstrap_signer I0721 22:17:20.080120 1 shared_informer.go:318] Caches are synced for resource quota I0721 22:17:20.099913 1 shared_informer.go:318] Caches are synced for taint I0721 22:17:20.100042 1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone="" I0721 22:17:20.100183 1 node_lifecycle_controller.go:875] "Missing timestamp for Node. 
Assuming now as a timestamp" node="minikube" I0721 22:17:20.100243 1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal I0721 22:17:20.100269 1 shared_informer.go:318] Caches are synced for deployment I0721 22:17:20.124796 1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator I0721 22:17:20.139177 1 taint_manager.go:206] "Starting NoExecuteTaintManager" I0721 22:17:20.139530 1 taint_manager.go:211] "Sending events to api server" I0721 22:17:20.141027 1 event.go:307] "Event occurred" object="minikube" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I0721 22:17:20.149589 1 range_allocator.go:380] "Set node PodCIDR" node="minikube" podCIDRs=[10.244.0.0/24] I0721 22:17:20.340490 1 shared_informer.go:318] Caches are synced for garbage collector I0721 22:17:20.418944 1 shared_informer.go:318] Caches are synced for garbage collector I0721 22:17:20.419212 1 garbagecollector.go:166] "All resource monitors have synced. 
Proceeding to collect garbage" I0721 22:17:20.475446 1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sncvt" I0721 22:17:20.760725 1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 1" I0721 22:17:20.898907 1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-q4gsx" * * ==> kube-proxy [02cd8be79da3] <== * I0721 22:17:24.250230 1 node.go:141] Successfully retrieved node IP: 192.168.49.2 I0721 22:17:24.250403 1 server_others.go:110] "Detected node IP" address="192.168.49.2" I0721 22:17:24.250489 1 server_others.go:554] "Using iptables proxy" I0721 22:17:24.892524 1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6 I0721 22:17:24.892551 1 server_others.go:192] "Using iptables Proxier" I0721 22:17:24.893192 1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" I0721 22:17:24.893931 1 server.go:658] "Version info" version="v1.27.3" I0721 22:17:24.893954 1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0721 22:17:24.897860 1 config.go:188] "Starting service config controller" I0721 22:17:24.898499 1 shared_informer.go:311] Waiting for caches to sync for service config I0721 22:17:24.898879 1 config.go:97] "Starting endpoint slice config controller" I0721 22:17:24.899101 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config I0721 
22:17:24.900148 1 config.go:315] "Starting node config controller" I0721 22:17:24.900329 1 shared_informer.go:311] Waiting for caches to sync for node config I0721 22:17:24.999433 1 shared_informer.go:318] Caches are synced for endpoint slice config I0721 22:17:24.999490 1 shared_informer.go:318] Caches are synced for service config I0721 22:17:25.001312 1 shared_informer.go:318] Caches are synced for node config * * ==> kube-scheduler [ff2a5e3bff68] <== * E0721 22:17:03.085880 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0721 22:17:03.086082 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0721 22:17:03.086105 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0721 22:17:03.086279 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0721 22:17:03.086297 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0721 22:17:03.086405 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User 
"system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0721 22:17:03.086423 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0721 22:17:03.086581 1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0721 22:17:03.086603 1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0721 22:17:03.086469 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0721 22:17:03.086723 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0721 22:17:03.086862 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0721 22:17:03.086882 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list 
*v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0721 22:17:03.086907 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0721 22:17:03.086925 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0721 22:17:03.087975 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0721 22:17:03.088007 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0721 22:17:03.088285 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0721 22:17:03.088318 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0721 22:17:03.088616 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User 
"system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0721 22:17:03.088964 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0721 22:17:03.088869 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0721 22:17:03.089312 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0721 22:17:03.088880 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0721 22:17:03.089614 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope W0721 22:17:03.088925 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0721 22:17:03.089854 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to 
list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0721 22:17:04.054286 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0721 22:17:04.054325 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0721 22:17:04.114707 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0721 22:17:04.115178 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0721 22:17:04.144801 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0721 22:17:04.145131 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0721 22:17:04.153004 1 
reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0721 22:17:04.153040 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0721 22:17:04.253438 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0721 22:17:04.253474 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0721 22:17:04.255596 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0721 22:17:04.255627 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0721 22:17:04.261168 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0721 22:17:04.261456 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed 
to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0721 22:17:04.376621 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0721 22:17:04.377029 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0721 22:17:04.420529 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0721 22:17:04.421474 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0721 22:17:04.421425 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0721 22:17:04.421531 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0721 22:17:04.497371 1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0721 22:17:04.497410 1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0721 22:17:04.511640 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0721 22:17:04.512067 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0721 22:17:04.556211 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0721 22:17:04.556365 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0721 22:17:04.599905 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0721 22:17:04.599952 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0721 22:17:04.640905 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0721 22:17:04.640957 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0721 22:17:06.452639 1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0721 22:17:06.452674 1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0721 22:17:10.407458 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
*
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.007367 2275 state_mem.go:35] "Initializing new in-memory state store"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.007755 2275 state_mem.go:75] "Updated machine memory state"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.031426 2275 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.038943 2275 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.083071 2275 kubelet_node_status.go:70] "Attempting to register node" node="minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.090418 2275 topology_manager.go:212] "Topology Admit Handler"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.090615 2275 topology_manager.go:212] "Topology Admit Handler"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.090844 2275 topology_manager.go:212] "Topology Admit Handler"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.091009 2275 topology_manager.go:212] "Topology Admit Handler"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.111841 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8af0e85a28544808d52bb7c47ad824ed-etcd-certs\") pod \"etcd-minikube\" (UID: \"8af0e85a28544808d52bb7c47ad824ed\") " pod="kube-system/etcd-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.111919 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e33f7a2a0d6aad5df18c7258d3116e25-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"e33f7a2a0d6aad5df18c7258d3116e25\") " pod="kube-system/kube-controller-manager-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.111954 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e33f7a2a0d6aad5df18c7258d3116e25-etc-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"e33f7a2a0d6aad5df18c7258d3116e25\") " pod="kube-system/kube-controller-manager-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.112079 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e33f7a2a0d6aad5df18c7258d3116e25-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"e33f7a2a0d6aad5df18c7258d3116e25\") " pod="kube-system/kube-controller-manager-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.112121 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e33f7a2a0d6aad5df18c7258d3116e25-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"e33f7a2a0d6aad5df18c7258d3116e25\") " pod="kube-system/kube-controller-manager-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.112251 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e275e35949ad3fdfeb753c1099308e7-etc-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"4e275e35949ad3fdfeb753c1099308e7\") " pod="kube-system/kube-apiserver-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.112394 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e275e35949ad3fdfeb753c1099308e7-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"4e275e35949ad3fdfeb753c1099308e7\") " pod="kube-system/kube-apiserver-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.112444 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e275e35949ad3fdfeb753c1099308e7-usr-local-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"4e275e35949ad3fdfeb753c1099308e7\") " pod="kube-system/kube-apiserver-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.112522 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8af0e85a28544808d52bb7c47ad824ed-etcd-data\") pod \"etcd-minikube\" (UID: \"8af0e85a28544808d52bb7c47ad824ed\") " pod="kube-system/etcd-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.112755 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e275e35949ad3fdfeb753c1099308e7-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"4e275e35949ad3fdfeb753c1099308e7\") " pod="kube-system/kube-apiserver-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.112837 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e33f7a2a0d6aad5df18c7258d3116e25-usr-local-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"e33f7a2a0d6aad5df18c7258d3116e25\") " pod="kube-system/kube-controller-manager-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.112912 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e14e2f92c469337ac62a252dad99dcc5-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"e14e2f92c469337ac62a252dad99dcc5\") " pod="kube-system/kube-scheduler-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.112982 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e275e35949ad3fdfeb753c1099308e7-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"4e275e35949ad3fdfeb753c1099308e7\") " pod="kube-system/kube-apiserver-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.113065 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e33f7a2a0d6aad5df18c7258d3116e25-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"e33f7a2a0d6aad5df18c7258d3116e25\") " pod="kube-system/kube-controller-manager-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.113142 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e33f7a2a0d6aad5df18c7258d3116e25-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"e33f7a2a0d6aad5df18c7258d3116e25\") " pod="kube-system/kube-controller-manager-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.188667 2275 kubelet_node_status.go:108] "Node was previously registered" node="minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.188888 2275 kubelet_node_status.go:73] "Successfully registered node" node="minikube"
Jul 21 22:17:09 minikube kubelet[2275]: E0721 22:17:09.191625 2275 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: E0721 22:17:09.191942 2275 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.872203 2275 apiserver.go:52] "Watching apiserver"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.908990 2275 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Jul 21 22:17:09 minikube kubelet[2275]: I0721 22:17:09.918321 2275 reconciler.go:41] "Reconciler: start to sync state"
Jul 21 22:17:10 minikube kubelet[2275]: I0721 22:17:10.195188 2275 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-minikube" podStartSLOduration=1.1950405 podCreationTimestamp="2023-07-21 22:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-21 22:17:10.1949235 +0000 UTC m=+1.488231501" watchObservedRunningTime="2023-07-21 22:17:10.1950405 +0000 UTC m=+1.488348601"
Jul 21 22:17:10 minikube kubelet[2275]: E0721 22:17:10.195348 2275 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Jul 21 22:17:10 minikube kubelet[2275]: E0721 22:17:10.206413 2275 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
Jul 21 22:17:10 minikube kubelet[2275]: I0721 22:17:10.480907 2275 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-minikube" podStartSLOduration=1.4808227 podCreationTimestamp="2023-07-21 22:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-21 22:17:10.4807876 +0000 UTC m=+1.774095701" watchObservedRunningTime="2023-07-21 22:17:10.4808227 +0000 UTC m=+1.774130701"
Jul 21 22:17:20 minikube kubelet[2275]: I0721 22:17:20.235854 2275 topology_manager.go:212] "Topology Admit Handler"
Jul 21 22:17:20 minikube kubelet[2275]: I0721 22:17:20.355564 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9972\" (UniqueName: \"kubernetes.io/projected/937b90ed-9c7c-4e2d-a8d7-0aeba5d710ee-kube-api-access-z9972\") pod \"storage-provisioner\" (UID: \"937b90ed-9c7c-4e2d-a8d7-0aeba5d710ee\") " pod="kube-system/storage-provisioner"
Jul 21 22:17:20 minikube kubelet[2275]: I0721 22:17:20.355656 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/937b90ed-9c7c-4e2d-a8d7-0aeba5d710ee-tmp\") pod \"storage-provisioner\" (UID: \"937b90ed-9c7c-4e2d-a8d7-0aeba5d710ee\") " pod="kube-system/storage-provisioner"
Jul 21 22:17:20 minikube kubelet[2275]: I0721 22:17:20.540078 2275 topology_manager.go:212] "Topology Admit Handler"
Jul 21 22:17:20 minikube kubelet[2275]: I0721 22:17:20.558904 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0ea2d4b-77b2-4991-b7da-9c6dcecad9fb-xtables-lock\") pod \"kube-proxy-sncvt\" (UID: \"d0ea2d4b-77b2-4991-b7da-9c6dcecad9fb\") " pod="kube-system/kube-proxy-sncvt"
Jul 21 22:17:20 minikube kubelet[2275]: I0721 22:17:20.558971 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d0ea2d4b-77b2-4991-b7da-9c6dcecad9fb-kube-proxy\") pod \"kube-proxy-sncvt\" (UID: \"d0ea2d4b-77b2-4991-b7da-9c6dcecad9fb\") " pod="kube-system/kube-proxy-sncvt"
Jul 21 22:17:20 minikube kubelet[2275]: I0721 22:17:20.559003 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk2mn\" (UniqueName: \"kubernetes.io/projected/d0ea2d4b-77b2-4991-b7da-9c6dcecad9fb-kube-api-access-dk2mn\") pod \"kube-proxy-sncvt\" (UID: \"d0ea2d4b-77b2-4991-b7da-9c6dcecad9fb\") " pod="kube-system/kube-proxy-sncvt"
Jul 21 22:17:20 minikube kubelet[2275]: I0721 22:17:20.559046 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0ea2d4b-77b2-4991-b7da-9c6dcecad9fb-lib-modules\") pod \"kube-proxy-sncvt\" (UID: \"d0ea2d4b-77b2-4991-b7da-9c6dcecad9fb\") " pod="kube-system/kube-proxy-sncvt"
Jul 21 22:17:20 minikube kubelet[2275]: E0721 22:17:20.571847 2275 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 21 22:17:20 minikube kubelet[2275]: E0721 22:17:20.571910 2275 projected.go:198] Error preparing data for projected volume kube-api-access-z9972 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
Jul 21 22:17:20 minikube kubelet[2275]: E0721 22:17:20.572009 2275 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/937b90ed-9c7c-4e2d-a8d7-0aeba5d710ee-kube-api-access-z9972 podName:937b90ed-9c7c-4e2d-a8d7-0aeba5d710ee nodeName:}" failed. No retries permitted until 2023-07-21 22:17:21.0719826 +0000 UTC m=+12.365290701 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-z9972" (UniqueName: "kubernetes.io/projected/937b90ed-9c7c-4e2d-a8d7-0aeba5d710ee-kube-api-access-z9972") pod "storage-provisioner" (UID: "937b90ed-9c7c-4e2d-a8d7-0aeba5d710ee") : configmap "kube-root-ca.crt" not found
Jul 21 22:17:20 minikube kubelet[2275]: E0721 22:17:20.681699 2275 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 21 22:17:20 minikube kubelet[2275]: E0721 22:17:20.681981 2275 projected.go:198] Error preparing data for projected volume kube-api-access-dk2mn for pod kube-system/kube-proxy-sncvt: configmap "kube-root-ca.crt" not found
Jul 21 22:17:20 minikube kubelet[2275]: E0721 22:17:20.682679 2275 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0ea2d4b-77b2-4991-b7da-9c6dcecad9fb-kube-api-access-dk2mn podName:d0ea2d4b-77b2-4991-b7da-9c6dcecad9fb nodeName:}" failed. No retries permitted until 2023-07-21 22:17:21.1823985 +0000 UTC m=+12.475706501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dk2mn" (UniqueName: "kubernetes.io/projected/d0ea2d4b-77b2-4991-b7da-9c6dcecad9fb-kube-api-access-dk2mn") pod "kube-proxy-sncvt" (UID: "d0ea2d4b-77b2-4991-b7da-9c6dcecad9fb") : configmap "kube-root-ca.crt" not found
Jul 21 22:17:20 minikube kubelet[2275]: I0721 22:17:20.970757 2275 topology_manager.go:212] "Topology Admit Handler"
Jul 21 22:17:21 minikube kubelet[2275]: I0721 22:17:21.064752 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c780b25a-1672-45c5-8261-72df6d0a17c5-config-volume\") pod \"coredns-5d78c9869d-q4gsx\" (UID: \"c780b25a-1672-45c5-8261-72df6d0a17c5\") " pod="kube-system/coredns-5d78c9869d-q4gsx"
Jul 21 22:17:21 minikube kubelet[2275]: I0721 22:17:21.064842 2275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsfrd\" (UniqueName: \"kubernetes.io/projected/c780b25a-1672-45c5-8261-72df6d0a17c5-kube-api-access-xsfrd\") pod \"coredns-5d78c9869d-q4gsx\" (UID: \"c780b25a-1672-45c5-8261-72df6d0a17c5\") " pod="kube-system/coredns-5d78c9869d-q4gsx"
Jul 21 22:17:22 minikube kubelet[2275]: I0721 22:17:22.360307 2275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc1bc5bf2797c3fd2184f0503d46fad2bb6011ef2414df18e8100a500d317d0a"
Jul 21 22:17:22 minikube kubelet[2275]: I0721 22:17:22.582856 2275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ed0749caea968a662a1aa2d70f370a1b2849960434893f52ec0b0d6ac81fdf"
Jul 21 22:17:23 minikube kubelet[2275]: I0721 22:17:23.747065 2275 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.7470005 podCreationTimestamp="2023-07-21 22:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-21 22:17:22.6304897 +0000 UTC m=+13.923797701" watchObservedRunningTime="2023-07-21 22:17:23.7470005 +0000 UTC m=+15.040308501"
Jul 21 22:17:23 minikube kubelet[2275]: I0721 22:17:23.747695 2275 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-sncvt" podStartSLOduration=3.7476595 podCreationTimestamp="2023-07-21 22:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-21 22:17:23.7463534 +0000 UTC m=+15.039661401" watchObservedRunningTime="2023-07-21 22:17:23.7476595 +0000 UTC m=+15.040967501"
Jul 21 22:17:23 minikube kubelet[2275]: I0721 22:17:23.835930 2275 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-q4gsx" podStartSLOduration=3.8357631 podCreationTimestamp="2023-07-21 22:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-21 22:17:23.8342177 +0000 UTC m=+15.127525801" watchObservedRunningTime="2023-07-21 22:17:23.8357631 +0000 UTC m=+15.129071101"
Jul 21 22:17:29 minikube kubelet[2275]: I0721 22:17:29.873572 2275 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jul 21 22:17:29 minikube kubelet[2275]: I0721 22:17:29.874448 2275 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jul 21 22:17:53 minikube kubelet[2275]: I0721 22:17:53.867938 2275 scope.go:115] "RemoveContainer" containerID="ecf6b43ff2a64dbaf7b40dac77a76ff9cd21a91f0159163eff2b23da9af0fe56"
*
* ==> storage-provisioner [712a9d1e73b9] <==
*
I0721 22:17:54.292244 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0721 22:17:54.318069 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0721 22:17:54.318315 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0721 22:17:54.347655 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0721 22:17:54.348895 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6ceeb900-ef5c-436f-8d17-bbe052f3860f", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_d3895616-a7ed-40e7-accc-bc7b96edb46f became leader
I0721 22:17:54.349315 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_d3895616-a7ed-40e7-accc-bc7b96edb46f!
I0721 22:17:54.450365 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_d3895616-a7ed-40e7-accc-bc7b96edb46f!
*
* ==> storage-provisioner [ecf6b43ff2a6] <==
*
I0721 22:17:23.115944 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0721 22:17:53.133806 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
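The component entries above follow klog's `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg` line format documented in the Last Start header, which makes it possible to separate the single fatal entry here (the F-severity storage-provisioner i/o timeout) from the transient W/E startup noise mechanically. A minimal sketch in Python; the parser and helper names are illustrative, not part of minikube or klog:

```python
import re

# klog lines begin with a severity letter (I/W/E/F), then MMDD, a timestamp,
# a thread id, "file:line]" and the message, per the format stated in the
# Last Start header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r'^(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<tid>\d+) (?P<loc>[^ ]+:\d+)\] (?P<msg>.*)$'
)

def parse_klog(line):
    """Return a dict of klog fields for one log line, or None if it does not match."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

def severities_at_least(lines, level="E"):
    """Keep only parseable entries at or above the given severity (I < W < E < F)."""
    order = "IWEF"
    keep = order[order.index(level):]
    return [p for p in map(parse_klog, lines) if p and p["sev"] in keep]
```

Applying `severities_at_least(lines, "F")` to the dump above would surface only the `main.go:39` entry from the first storage-provisioner container, which is the one actual failure in this section.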