*
* ==> Audit <==
*
|-----------|-----------------------|----------|--------|---------|---------------------|---------------------|
|  Command  |         Args          | Profile  |  User  | Version |     Start Time      |      End Time       |
|-----------|-----------------------|----------|--------|---------|---------------------|---------------------|
| start     |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 11:41 GMT | 19 Jan 24 11:43 GMT |
| kubectl   | get pods              | minikube | jelgar | v1.32.0 | 19 Jan 24 11:44 GMT | 19 Jan 24 11:44 GMT |
| dashboard |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 11:45 GMT |                     |
| stop      |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 11:49 GMT | 19 Jan 24 11:49 GMT |
| start     |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 11:50 GMT | 19 Jan 24 11:50 GMT |
| dashboard |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 11:50 GMT |                     |
| addons    | enable metrics-server | minikube | jelgar | v1.32.0 | 19 Jan 24 11:53 GMT | 19 Jan 24 11:53 GMT |
| dashboard |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 11:54 GMT |                     |
| dashboard |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 11:55 GMT |                     |
| stop      |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 11:56 GMT | 19 Jan 24 11:56 GMT |
| start     |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 11:56 GMT | 19 Jan 24 11:56 GMT |
| dashboard |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 11:57 GMT |                     |
| addons    | enable metrics-server | minikube | jelgar | v1.32.0 | 19 Jan 24 11:57 GMT | 19 Jan 24 11:57 GMT |
| dashboard |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 11:58 GMT |                     |
| stop      |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 11:59 GMT | 19 Jan 24 11:59 GMT |
| delete    |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 11:59 GMT | 19 Jan 24 11:59 GMT |
| start     |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 12:00 GMT | 19 Jan 24 12:00 GMT |
| stop      |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 12:02 GMT | 19 Jan 24 12:02 GMT |
| delete    |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 12:02 GMT | 19 Jan 24 12:02 GMT |
| start     |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 12:02 GMT | 19 Jan 24 12:02 GMT |
| delete    |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 12:08 GMT | 19 Jan 24 12:08 GMT |
| start     |                       | minikube | jelgar | v1.32.0 | 19 Jan 24 12:08 GMT | 19 Jan 24 12:08 GMT |
|-----------|-----------------------|----------|--------|---------|---------------------|---------------------|

*
* ==> Last Start <==
*
Log file created at: 2024/01/19 12:08:36
Running on machine: JamesLaptop
Binary: Built with gc go1.21.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0119 12:08:36.102791 760595 out.go:296] Setting OutFile to fd 1 ...
I0119 12:08:36.102921 760595 out.go:348] isatty.IsTerminal(1) = true
I0119 12:08:36.102924 760595 out.go:309] Setting ErrFile to fd 2...
I0119 12:08:36.102926 760595 out.go:348] isatty.IsTerminal(2) = true
I0119 12:08:36.103045 760595 root.go:338] Updating PATH: /home/jelgar/.minikube/bin
I0119 12:08:36.103383 760595 out.go:303] Setting JSON to false
I0119 12:08:36.105021 760595 start.go:128] hostinfo: {"hostname":"JamesLaptop","uptime":411574,"bootTime":1705254543,"procs":422,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"23.1.3","kernelVersion":"6.1.64-1-MANJARO","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"ea7c6baa-eb10-49fe-8586-d8ff30208cb7"}
I0119 12:08:36.105063 760595 start.go:138] virtualization: kvm host
I0119 12:08:36.106033 760595 out.go:177] 😄 minikube v1.32.0 on Arch 23.1.3
I0119 12:08:36.106727 760595 notify.go:220] Checking for updates...
I0119 12:08:36.106757 760595 driver.go:378] Setting default libvirt URI to qemu:///system
I0119 12:08:36.106776 760595 global.go:111] Querying for installed drivers using PATH=/home/jelgar/.minikube/bin:/home/jelgar/.rvm/gems/ruby-2.5.0/bin:/home/jelgar/.rvm/gems/ruby-2.5.0@global/bin:/home/jelgar/.rvm/rubies/ruby-2.5.0/bin:/home/jelgar/.local/share/pnpm:/home/jelgar/.rbenv/bin:/home/jelgar/Documents/dev/go/bin:/bin:/home/jelgar/.rvm/gems/ruby-2.5.0/bin:/home/jelgar/.rvm/gems/ruby-2.5.0@global/bin:/home/jelgar/.rvm/rubies/ruby-2.5.0/bin:/home/jelgar/.local/bin:/home/jelgar/.cargo/bin:/usr/local/bin:/usr/bin:/var/lib/snapd/snap/bin:/usr/local/sbin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/home/jelgar/.rvm/bin:/home/jelgar/Documents/dev/flutter/bin:/home/jelgar/Documents/dev/flutter/.pub-cache/bin:/home/jelgar/Documents/dev/android-studio/bin:/home/jelgar/.npm-packages/bin:/home/jelgar/.node_modules/bin:/Documents/dev/node/bin:/usr/lib/dart/bin:/home/jelgar/bin:/home/jelgar/.emacs.d/bin:/home/jelgar/Documents/dev/scripts:/home/jelgar/.config/scripts:/home/jelgar/.local/share/gem/ruby/3.0.0/bin:/usr/local/go/bin:/home/jelgar/.pub-cache/bin:/home/jelgar/.rvm/bin:/home/jelgar/.pulumi/bin
I0119 12:08:36.106868 760595 global.go:122] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/ Version:}
I0119 12:08:36.106915 760595 global.go:122] vmware default: false priority: 5, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "vmrun": executable file not found in $PATH Reason: Fix:Install vmrun Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/ Version:}
I0119 12:08:36.123388 760595 docker.go:122] docker version: linux-24.0.7:
I0119 12:08:36.123464 760595 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0119 12:08:36.165444 760595 info.go:266] docker info: {ID:73d65d7c-269a-4696-9ef7-c1e152f6e4f1 Containers:10 ContainersRunning:5 ContainersPaused:0 ContainersStopped:5 Images:56 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy true] [Native Overlay Diff false] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2024-01-19 12:08:36.157865598 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.1.64-1-MANJARO OperatingSystem:Manjaro Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:16361623552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:JamesLaptop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:4f03e100cb967922bec7459a78d16ccbac9bb81d.m Expected:4f03e100cb967922bec7459a78d16ccbac9bb81d.m} RuncCommit:{ID: Expected:} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:compose Path:/usr/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.23.3]] Warnings:}}
I0119 12:08:36.165524 760595 docker.go:295] overlay module found
I0119 12:08:36.165534 760595 global.go:122] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0119 12:08:36.176812 760595 global.go:122] none default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0119 12:08:36.176936 760595 global.go:122] podman default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:}
I0119 12:08:36.176971 760595 global.go:122] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0119 12:08:36.210613 760595 global.go:122] kvm2 default: true priority: 8, state: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:libvirt group membership check failed: user is not a member of the appropriate libvirt group Reason:PR_KVM_USER_PERMISSION Fix:Check that libvirtd is properly installed and that you are a member of the appropriate libvirt group (remember to relogin for group changes to take effect!) Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/ Version:}
I0119 12:08:36.210689 760595 global.go:122] qemu2 default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:stat /usr/share/OVMF/OVMF_CODE.fd: no such file or directory Reason: Fix:Install uefi firmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/qemu/ Version:}
I0119 12:08:36.210704 760595 driver.go:313] not recommending "none" due to default: false
I0119 12:08:36.210707 760595 driver.go:313] not recommending "ssh" due to default: false
I0119 12:08:36.210709 760595 driver.go:308] not recommending "kvm2" due to health: libvirt group membership check failed: user is not a member of the appropriate libvirt group
I0119 12:08:36.210720 760595 driver.go:348] Picked: docker
I0119 12:08:36.210724 760595 driver.go:349] Alternatives: [none ssh]
I0119 12:08:36.210727 760595 driver.go:350] Rejects: [virtualbox vmware podman kvm2 qemu2]
I0119 12:08:36.211503 760595 out.go:177] ✨ Automatically selected the docker driver. Other choices: none, ssh
I0119 12:08:36.212166 760595 start.go:298] selected driver: docker
I0119 12:08:36.212168 760595 start.go:902] validating driver "docker" against <nil>
I0119 12:08:36.212174 760595 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0119 12:08:36.212229 760595 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0119 12:08:36.242482 760595 info.go:266] docker info: {ID:73d65d7c-269a-4696-9ef7-c1e152f6e4f1 Containers:10 ContainersRunning:5 ContainersPaused:0 ContainersStopped:5 Images:56 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy true] [Native Overlay Diff false] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2024-01-19 12:08:36.23766282 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.1.64-1-MANJARO OperatingSystem:Manjaro Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:16361623552 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:JamesLaptop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:4f03e100cb967922bec7459a78d16ccbac9bb81d.m Expected:4f03e100cb967922bec7459a78d16ccbac9bb81d.m} RuncCommit:{ID: Expected:} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:compose Path:/usr/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.23.3]] Warnings:}}
I0119 12:08:36.242583 760595 start_flags.go:309] no existing cluster config was found, will generate one from the flags
I0119 12:08:36.243313 760595 start_flags.go:394] Using suggested 3900MB memory alloc based on sys=15603MB, container=15603MB
I0119 12:08:36.243396 760595 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
I0119 12:08:36.244421 760595 out.go:177] 📌 Using Docker driver with root privileges
I0119 12:08:36.245145 760595 cni.go:84] Creating CNI manager for ""
I0119 12:08:36.245154 760595 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0119 12:08:36.245159 760595 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0119 12:08:36.245163 760595 start_flags.go:323] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jelgar:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0119 12:08:36.245901 760595 out.go:177] 👍 Starting control plane node minikube in cluster minikube
I0119 12:08:36.246443 760595 cache.go:121] Beginning downloading kic base image for docker with docker
I0119 12:08:36.247002 760595 out.go:177] 🚜 Pulling base image ...
I0119 12:08:36.247561 760595 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I0119 12:08:36.247573 760595 preload.go:148] Found local preload: /home/jelgar/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
I0119 12:08:36.247575 760595 cache.go:56] Caching tarball of preloaded images
I0119 12:08:36.247595 760595 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
I0119 12:08:36.247616 760595 preload.go:174] Found /home/jelgar/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0119 12:08:36.247621 760595 cache.go:59] Finished verifying existence of preloaded tar for v1.28.3 on docker
I0119 12:08:36.247808 760595 profile.go:148] Saving config to /home/jelgar/.minikube/profiles/minikube/config.json ...
I0119 12:08:36.247816 760595 lock.go:35] WriteFile acquiring /home/jelgar/.minikube/profiles/minikube/config.json: {Name:mk105638a6ad78791c034067d11610d0af820be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 12:08:36.264330 760595 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
I0119 12:08:36.264339 760595 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
I0119 12:08:36.264348 760595 cache.go:194] Successfully downloaded all kic artifacts
I0119 12:08:36.264375 760595 start.go:365] acquiring machines lock for minikube: {Name:mk3e10aa38ff9883c3d323b65687a6e6fe6dd83e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0119 12:08:36.264408 760595 start.go:369] acquired machines lock for "minikube" in 24.832µs
I0119 12:08:36.264415 760595 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jelgar:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0119 12:08:36.264449 760595 start.go:125] createHost starting for "" (driver="docker")
I0119 12:08:36.265263 760595 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=3900MB) ...
I0119 12:08:36.265402 760595 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I0119 12:08:36.265412 760595 client.go:168] LocalClient.Create starting
I0119 12:08:36.265445 760595 main.go:141] libmachine: Reading certificate data from /home/jelgar/.minikube/certs/ca.pem
I0119 12:08:36.265459 760595 main.go:141] libmachine: Decoding PEM data...
I0119 12:08:36.265466 760595 main.go:141] libmachine: Parsing certificate...
I0119 12:08:36.265494 760595 main.go:141] libmachine: Reading certificate data from /home/jelgar/.minikube/certs/cert.pem
I0119 12:08:36.265501 760595 main.go:141] libmachine: Decoding PEM data...
I0119 12:08:36.265505 760595 main.go:141] libmachine: Parsing certificate...
I0119 12:08:36.265679 760595 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0119 12:08:36.273820 760595 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0119 12:08:36.273854 760595 network_create.go:281] running [docker network inspect minikube] to gather additional debugging logs...
I0119 12:08:36.273862 760595 cli_runner.go:164] Run: docker network inspect minikube
W0119 12:08:36.282692 760595 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I0119 12:08:36.282704 760595 network_create.go:284] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error response from daemon: network minikube not found
I0119 12:08:36.282712 760595 network_create.go:286] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error response from daemon: network minikube not found

** /stderr **
I0119 12:08:36.282770 760595 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0119 12:08:36.291637 760595 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00234f6b0}
I0119 12:08:36.291657 760595 network_create.go:124] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0119 12:08:36.291682 760595 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I0119 12:08:36.319325 760595 network_create.go:108] docker network minikube 192.168.49.0/24 created
I0119 12:08:36.319342 760595 kic.go:121] calculated static IP "192.168.49.2" for the "minikube" container
I0119 12:08:36.319382 760595 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0119 12:08:36.328596 760595 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0119 12:08:36.337186 760595 oci.go:103] Successfully created a docker volume minikube
I0119 12:08:36.337221 760595 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
I0119 12:08:36.813381 760595 oci.go:107] Successfully prepared a docker volume minikube
I0119 12:08:36.813401 760595 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I0119 12:08:36.813414 760595 kic.go:194] Starting extracting preloaded images to volume ...
I0119 12:08:36.813456 760595 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jelgar/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
I0119 12:08:38.388290 760595 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jelgar/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (1.574797256s)
I0119 12:08:38.388311 760595 kic.go:203] duration metric: took 1.574895 seconds to extract preloaded images to volume
W0119 12:08:38.388368 760595 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0119 12:08:38.388381 760595 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0119 12:08:38.388415 760595 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0119 12:08:38.428882 760595 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=3900mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
I0119 12:08:38.744377 760595 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0119 12:08:38.756749 760595 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0119 12:08:38.766058 760595 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0119 12:08:38.833196 760595 oci.go:144] the created container "minikube" has a running status.
I0119 12:08:38.833209 760595 kic.go:225] Creating ssh key for kic: /home/jelgar/.minikube/machines/minikube/id_rsa...
I0119 12:08:39.003633 760595 kic_runner.go:191] docker (temp): /home/jelgar/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0119 12:08:39.014616 760595 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0119 12:08:39.023758 760595 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0119 12:08:39.023766 760595 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0119 12:08:39.075583 760595 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0119 12:08:39.084778 760595 machine.go:88] provisioning docker machine ...
I0119 12:08:39.084798 760595 ubuntu.go:169] provisioning hostname "minikube"
I0119 12:08:39.084842 760595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 12:08:39.094227 760595 main.go:141] libmachine: Using SSH client type: native
I0119 12:08:39.094580 760595 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x55d59cd42140] 0x55d59cd44e20 [] 0s} 127.0.0.1 32797 }
I0119 12:08:39.094588 760595 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0119 12:08:39.201393 760595 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0119 12:08:39.201445 760595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 12:08:39.210690 760595 main.go:141] libmachine: Using SSH client type: native
I0119 12:08:39.210902 760595 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x55d59cd42140] 0x55d59cd44e20 [] 0s} 127.0.0.1 32797 }
I0119 12:08:39.210910 760595 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I0119 12:08:39.316512 760595 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0119 12:08:39.316536 760595 ubuntu.go:175] set auth options {CertDir:/home/jelgar/.minikube CaCertPath:/home/jelgar/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jelgar/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jelgar/.minikube/machines/server.pem ServerKeyPath:/home/jelgar/.minikube/machines/server-key.pem ClientKeyPath:/home/jelgar/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jelgar/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jelgar/.minikube}
I0119 12:08:39.316571 760595 ubuntu.go:177] setting up certificates
I0119 12:08:39.316581 760595 provision.go:83] configureAuth start
I0119 12:08:39.316656 760595 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0119 12:08:39.330664 760595 provision.go:138] copyHostCerts
I0119 12:08:39.330692 760595 exec_runner.go:144] found /home/jelgar/.minikube/ca.pem, removing ...
I0119 12:08:39.330697 760595 exec_runner.go:203] rm: /home/jelgar/.minikube/ca.pem
I0119 12:08:39.330746 760595 exec_runner.go:151] cp: /home/jelgar/.minikube/certs/ca.pem --> /home/jelgar/.minikube/ca.pem (1078 bytes)
I0119 12:08:39.330792 760595 exec_runner.go:144] found /home/jelgar/.minikube/cert.pem, removing ...
I0119 12:08:39.330794 760595 exec_runner.go:203] rm: /home/jelgar/.minikube/cert.pem
I0119 12:08:39.330807 760595 exec_runner.go:151] cp: /home/jelgar/.minikube/certs/cert.pem --> /home/jelgar/.minikube/cert.pem (1123 bytes)
I0119 12:08:39.330827 760595 exec_runner.go:144] found /home/jelgar/.minikube/key.pem, removing ...
I0119 12:08:39.330829 760595 exec_runner.go:203] rm: /home/jelgar/.minikube/key.pem
I0119 12:08:39.330838 760595 exec_runner.go:151] cp: /home/jelgar/.minikube/certs/key.pem --> /home/jelgar/.minikube/key.pem (1675 bytes)
I0119 12:08:39.330873 760595 provision.go:112] generating server cert: /home/jelgar/.minikube/machines/server.pem ca-key=/home/jelgar/.minikube/certs/ca.pem private-key=/home/jelgar/.minikube/certs/ca-key.pem org=jelgar.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0119 12:08:39.418092 760595 provision.go:172] copyRemoteCerts
I0119 12:08:39.418125 760595 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0119 12:08:39.418170 760595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 12:08:39.427215 760595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jelgar/.minikube/machines/minikube/id_rsa Username:docker}
I0119 12:08:39.505558 760595 ssh_runner.go:362] scp /home/jelgar/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0119 12:08:39.526104 760595 ssh_runner.go:362] scp /home/jelgar/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0119 12:08:39.541326 760595 ssh_runner.go:362] scp /home/jelgar/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0119 12:08:39.557465 760595 provision.go:86] duration metric: configureAuth took 240.875242ms
I0119 12:08:39.557492 760595 ubuntu.go:193] setting minikube options for container-runtime
I0119 12:08:39.557618 760595 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I0119 12:08:39.557672 760595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 12:08:39.569063 760595 main.go:141] libmachine: Using SSH client type: native
I0119 12:08:39.569448 760595 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x55d59cd42140] 0x55d59cd44e20 [] 0s} 127.0.0.1 32797 }
I0119 12:08:39.569457 760595 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0119 12:08:39.687672 760595 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0119 12:08:39.687691 760595 ubuntu.go:71] root file system type: overlay
I0119 12:08:39.687822 760595 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0119 12:08:39.687904 760595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 12:08:39.703431 760595 main.go:141] libmachine: Using SSH client type: native
I0119 12:08:39.703853 760595 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x55d59cd42140] 0x55d59cd44e20 [] 0s} 127.0.0.1 32797 }
I0119 12:08:39.703929 760595 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0119 12:08:39.831704 760595 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0119 12:08:39.831756 760595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 12:08:39.845286 760595 main.go:141] libmachine: Using SSH client type: native
I0119 12:08:39.845611 760595 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x55d59cd42140] 0x55d59cd44e20 [] 0s} 127.0.0.1 32797 }
I0119 12:08:39.845623 760595 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0119 12:08:40.324927 760595 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2024-01-19 12:08:39.829560610 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
 Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
 
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0119 12:08:40.324938 760595 machine.go:91] provisioned docker machine in 1.240152864s
I0119 12:08:40.324943 760595 client.go:171] LocalClient.Create took 4.059529057s
I0119 12:08:40.324949 760595 start.go:167] duration metric: libmachine.API.Create for "minikube" took 4.059547013s
I0119 12:08:40.324953 760595 start.go:300] post-start starting for "minikube" (driver="docker")
I0119 12:08:40.324957 760595 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0119 12:08:40.324991 760595 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0119 12:08:40.325011 760595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 12:08:40.333552 760595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jelgar/.minikube/machines/minikube/id_rsa Username:docker}
I0119 12:08:40.418918 760595 ssh_runner.go:195] Run: cat /etc/os-release
I0119 12:08:40.422068 760595 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0119 12:08:40.422100 760595 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0119 12:08:40.422114 760595 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0119 12:08:40.422120 760595 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I0119 12:08:40.422130 760595 filesync.go:126] Scanning /home/jelgar/.minikube/addons for local assets ...
I0119 12:08:40.422193 760595 filesync.go:126] Scanning /home/jelgar/.minikube/files for local assets ...
I0119 12:08:40.422219 760595 start.go:303] post-start completed in 97.262033ms
I0119 12:08:40.422586 760595 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0119 12:08:40.433648 760595 profile.go:148] Saving config to /home/jelgar/.minikube/profiles/minikube/config.json ...
I0119 12:08:40.433820 760595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0119 12:08:40.433846 760595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 12:08:40.442121 760595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jelgar/.minikube/machines/minikube/id_rsa Username:docker}
I0119 12:08:40.520612 760595 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0119 12:08:40.526087 760595 start.go:128] duration metric: createHost completed in 4.261627418s
I0119 12:08:40.526099 760595 start.go:83] releasing machines lock for "minikube", held for 4.261685314s
I0119 12:08:40.526158 760595 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0119 12:08:40.539709 760595 ssh_runner.go:195] Run: cat /version.json
I0119 12:08:40.539745 760595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 12:08:40.539766 760595 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0119 12:08:40.539799 760595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 12:08:40.549085 760595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jelgar/.minikube/machines/minikube/id_rsa Username:docker}
I0119 12:08:40.549105 760595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jelgar/.minikube/machines/minikube/id_rsa Username:docker}
I0119 12:08:40.757891 760595 ssh_runner.go:195] Run: systemctl --version
I0119 12:08:40.763239 760595 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0119 12:08:40.769086 760595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0119 12:08:40.795347 760595 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0119 12:08:40.795419 760595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0119 12:08:40.816619 760595 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0119 12:08:40.816631 760595 start.go:472] detecting cgroup driver to use...
I0119 12:08:40.816661 760595 detect.go:199] detected "systemd" cgroup driver on host os
I0119 12:08:40.816758 760595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0119 12:08:40.830846 760595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0119 12:08:40.837478 760595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0119 12:08:40.843446 760595 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
I0119 12:08:40.843477 760595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I0119 12:08:40.851076 760595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0119 12:08:40.859416 760595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0119 12:08:40.867793 760595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0119 12:08:40.876088 760595 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0119 12:08:40.882841 760595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0119 12:08:40.888573 760595 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0119 12:08:40.893510 760595 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0119 12:08:40.898463 760595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0119 12:08:40.944076 760595 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0119 12:08:41.024792 760595 start.go:472] detecting cgroup driver to use...
I0119 12:08:41.024817 760595 detect.go:199] detected "systemd" cgroup driver on host os
I0119 12:08:41.024855 760595 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0119 12:08:41.030912 760595 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0119 12:08:41.030960 760595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0119 12:08:41.038357 760595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0119 12:08:41.049845 760595 ssh_runner.go:195] Run: which cri-dockerd
I0119 12:08:41.052528 760595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0119 12:08:41.057448 760595 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0119 12:08:41.066575 760595 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0119 12:08:41.110975 760595 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0119 12:08:41.155646 760595 docker.go:560] configuring docker to use "systemd" as cgroup driver...
I0119 12:08:41.155719 760595 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
I0119 12:08:41.164201 760595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0119 12:08:41.209863 760595 ssh_runner.go:195] Run: sudo systemctl restart docker
I0119 12:08:41.371720 760595 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0119 12:08:41.410091 760595 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0119 12:08:41.453849 760595 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0119 12:08:41.507799 760595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0119 12:08:41.553937 760595 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0119 12:08:41.607803 760595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0119 12:08:41.667918 760595 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0119 12:08:41.772035 760595 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0119 12:08:41.772121 760595 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0119 12:08:41.774323 760595 start.go:540] Will wait 60s for crictl version
I0119 12:08:41.774365 760595 ssh_runner.go:195] Run: which crictl
I0119 12:08:41.776083 760595 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0119 12:08:41.800566 760595 start.go:556] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.7
RuntimeApiVersion: v1
I0119 12:08:41.800608 760595 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0119 12:08:41.817352 760595 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0119 12:08:41.836393 760595 out.go:204] 🐳 Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
I0119 12:08:41.836459 760595 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0119 12:08:41.845355 760595 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0119 12:08:41.847755 760595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0119 12:08:41.854007 760595 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I0119 12:08:41.854031 760595 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0119 12:08:41.864220 760595 docker.go:671] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0119 12:08:41.864230 760595 docker.go:601] Images already preloaded, skipping extraction
I0119 12:08:41.864278 760595 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0119 12:08:41.874623 760595 docker.go:671] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0119 12:08:41.874631 760595 cache_images.go:84] Images are preloaded, skipping loading
I0119 12:08:41.874669 760595 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0119 12:08:41.903292 760595 cni.go:84] Creating CNI manager for ""
I0119 12:08:41.903300 760595 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0119 12:08:41.903311 760595 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0119 12:08:41.903321 760595 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0119 12:08:41.903387 760595 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0119 12:08:41.903414 760595 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config: {KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0119 12:08:41.903446 760595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
I0119 12:08:41.908613 760595 binaries.go:44] Found k8s binaries, skipping transfer
I0119 12:08:41.908664 760595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0119 12:08:41.913149 760595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
I0119 12:08:41.922547 760595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0119 12:08:41.930930 760595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2090 bytes)
I0119 12:08:41.939891 760595 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0119 12:08:41.941548 760595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0119 12:08:41.946585 760595 certs.go:56] Setting up /home/jelgar/.minikube/profiles/minikube for IP: 192.168.49.2
I0119 12:08:41.946598 760595 certs.go:190] acquiring lock for shared ca certs: {Name:mk12d65d9e92bc119040cea2f41718de58c99b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 12:08:41.946680 760595 certs.go:199] skipping minikubeCA CA generation: /home/jelgar/.minikube/ca.key
I0119 12:08:41.946704 760595 certs.go:199] skipping proxyClientCA CA generation: /home/jelgar/.minikube/proxy-client-ca.key
I0119 12:08:41.946731 760595 certs.go:319] generating minikube-user signed cert: /home/jelgar/.minikube/profiles/minikube/client.key
I0119 12:08:41.946742 760595 crypto.go:68] Generating cert /home/jelgar/.minikube/profiles/minikube/client.crt with IP's: []
I0119 12:08:41.976936 760595 crypto.go:156] Writing cert to /home/jelgar/.minikube/profiles/minikube/client.crt ...
I0119 12:08:41.976947 760595 lock.go:35] WriteFile acquiring /home/jelgar/.minikube/profiles/minikube/client.crt: {Name:mk53d581b958a82b1b6ae97d98790d90915768ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 12:08:41.977718 760595 crypto.go:164] Writing key to /home/jelgar/.minikube/profiles/minikube/client.key ...
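The block above is the complete kubeadm config minikube renders from its profile: an InitConfiguration pinned to the node IP and the cri-dockerd socket, a ClusterConfiguration carrying the admission plugins and certificate SANs, a KubeletConfiguration that disables disk-based eviction, and a KubeProxyConfiguration that zeroes the conntrack timeouts. It is staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml just before init runs. A sketch for reading back what actually landed in the node, using the profile and binary paths as they appear in this log:

    # Read back the rendered kubeadm config from inside the node
    minikube ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml

    # Compare against stock kubeadm defaults for the same version
    minikube ssh -- sudo /var/lib/minikube/binaries/v1.28.3/kubeadm config print init-defaults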
I0119 12:08:41.977723 760595 lock.go:35] WriteFile acquiring /home/jelgar/.minikube/profiles/minikube/client.key: {Name:mk6f1cd257e75712fe32035163ab2bd3f96632c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 12:08:41.977769 760595 certs.go:319] generating minikube signed cert: /home/jelgar/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0119 12:08:41.977775 760595 crypto.go:68] Generating cert /home/jelgar/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0119 12:08:42.047894 760595 crypto.go:156] Writing cert to /home/jelgar/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0119 12:08:42.047900 760595 lock.go:35] WriteFile acquiring /home/jelgar/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk15091b6abfe3a1bc5a2a462ae8aeec126b16ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 12:08:42.047961 760595 crypto.go:164] Writing key to /home/jelgar/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0119 12:08:42.047964 760595 lock.go:35] WriteFile acquiring /home/jelgar/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkdce2f27fe9484cefea9d343fb58e20278123c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 12:08:42.047984 760595 certs.go:337] copying /home/jelgar/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/jelgar/.minikube/profiles/minikube/apiserver.crt
I0119 12:08:42.048022 760595 certs.go:341] copying /home/jelgar/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/jelgar/.minikube/profiles/minikube/apiserver.key
I0119 12:08:42.048042 760595 certs.go:319] generating aggregator signed cert: /home/jelgar/.minikube/profiles/minikube/proxy-client.key
I0119 12:08:42.048047 760595 crypto.go:68] Generating cert /home/jelgar/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0119 12:08:42.118493 760595 crypto.go:156] Writing cert to /home/jelgar/.minikube/profiles/minikube/proxy-client.crt ...
I0119 12:08:42.118502 760595 lock.go:35] WriteFile acquiring /home/jelgar/.minikube/profiles/minikube/proxy-client.crt: {Name:mk7f4c1f518cbd0884c4c9e62a4cd3f3f3067dcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 12:08:42.118573 760595 crypto.go:164] Writing key to /home/jelgar/.minikube/profiles/minikube/proxy-client.key ...
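The apiserver certificate generated here is signed for the node IP (192.168.49.2), the first address of the 10.96.0.0/12 service CIDR (10.96.0.1, the in-cluster kubernetes Service VIP), and loopback. If apiserver TLS errors ever point at a stale or mis-scoped certificate, the SAN list can be checked on the host; the path below matches this log's default profile:

    # List the Subject Alternative Names on the generated apiserver cert
    openssl x509 -noout -text -in ~/.minikube/profiles/minikube/apiserver.crt \
        | grep -A 1 'Subject Alternative Name'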
I0119 12:08:42.118575 760595 lock.go:35] WriteFile acquiring /home/jelgar/.minikube/profiles/minikube/proxy-client.key: {Name:mkb0954a18f0156988fb9ab557181d31a8790221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 12:08:42.118637 760595 certs.go:437] found cert: /home/jelgar/.minikube/certs/home/jelgar/.minikube/certs/ca-key.pem (1679 bytes)
I0119 12:08:42.118651 760595 certs.go:437] found cert: /home/jelgar/.minikube/certs/home/jelgar/.minikube/certs/ca.pem (1078 bytes)
I0119 12:08:42.118661 760595 certs.go:437] found cert: /home/jelgar/.minikube/certs/home/jelgar/.minikube/certs/cert.pem (1123 bytes)
I0119 12:08:42.118669 760595 certs.go:437] found cert: /home/jelgar/.minikube/certs/home/jelgar/.minikube/certs/key.pem (1675 bytes)
I0119 12:08:42.118939 760595 ssh_runner.go:362] scp /home/jelgar/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0119 12:08:42.131844 760595 ssh_runner.go:362] scp /home/jelgar/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0119 12:08:42.143125 760595 ssh_runner.go:362] scp /home/jelgar/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0119 12:08:42.154834 760595 ssh_runner.go:362] scp /home/jelgar/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0119 12:08:42.166791 760595 ssh_runner.go:362] scp /home/jelgar/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0119 12:08:42.178443 760595 ssh_runner.go:362] scp /home/jelgar/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0119 12:08:42.189703 760595 ssh_runner.go:362] scp /home/jelgar/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0119 12:08:42.201989 760595 ssh_runner.go:362] scp /home/jelgar/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0119 12:08:42.216759 760595 ssh_runner.go:362] scp /home/jelgar/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0119 12:08:42.230192 760595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0119 12:08:42.240467 760595 ssh_runner.go:195] Run: openssl version
I0119 12:08:42.243904 760595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0119 12:08:42.249556 760595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0119 12:08:42.251962 760595 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 19 11:43 /usr/share/ca-certificates/minikubeCA.pem
I0119 12:08:42.251987 760595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0119 12:08:42.255940 760595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0119 12:08:42.261239 760595 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0119 12:08:42.263700 760595 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:

stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0119 12:08:42.263724 760595 kubeadm.go:404] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jelgar:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0119 12:08:42.263790 760595 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0119 12:08:42.275328 760595 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0119 12:08:42.280753 760595 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0119 12:08:42.285901 760595 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0119 12:08:42.285939 760595 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0119 12:08:42.290897 760595 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0119 12:08:42.290918 760595 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0119 12:08:42.349337 760595 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
I0119 12:08:42.349373 760595 kubeadm.go:322] [preflight] Running pre-flight checks
I0119 12:08:42.477424 760595 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0119 12:08:42.477490 760595 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0119 12:08:42.477551 760595 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0119 12:08:42.648462 760595 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0119 12:08:42.649277 760595 out.go:204]   ▪ Generating certificates and keys ...
I0119 12:08:42.649363 760595 kubeadm.go:322] [certs] Using existing ca certificate authority
I0119 12:08:42.649417 760595 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0119 12:08:42.709508 760595 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0119 12:08:42.796172 760595 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0119 12:08:42.862199 760595 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0119 12:08:42.940484 760595 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0119 12:08:43.011424 760595 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0119 12:08:43.011499 760595 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0119 12:08:43.087192 760595 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0119 12:08:43.087272 760595 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0119 12:08:43.288479 760595 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0119 12:08:43.347236 760595 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0119 12:08:43.379711 760595 kubeadm.go:322] [certs] Generating "sa" key and public key
I0119 12:08:43.379754 760595 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0119 12:08:43.460625 760595 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0119 12:08:43.576002 760595 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0119 12:08:43.649077 760595 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0119 12:08:43.727896 760595 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0119 12:08:43.727977 760595 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0119 12:08:43.736622 760595 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0119 12:08:43.737563 760595 out.go:204]   ▪ Booting up control plane ...
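minikube suppresses most of kubeadm's preflight checks (Swap, NumCPU, Mem, SystemVerification, the bridge-nf-call-iptables sysctl, and the DirAvailable/FileAvailable checks) because they do not apply inside the kicbase container, which shares the host's kernel. If a start fails at this step, the preflight phase can be re-run in isolation from inside the node; a sketch, with an illustrative ignore list:

    # Inside `minikube ssh`: re-run only kubeadm's preflight checks
    sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
        kubeadm init phase preflight \
        --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=Swap,NumCPU,Mem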
I0119 12:08:43.737644 760595 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0119 12:08:43.737709 760595 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0119 12:08:43.737902 760595 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0119 12:08:43.744920 760595 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0119 12:08:43.745163 760595 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0119 12:08:43.745186 760595 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0119 12:08:43.809472 760595 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0119 12:08:48.313041 760595 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.503517 seconds
I0119 12:08:48.313269 760595 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0119 12:08:48.334824 760595 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0119 12:08:48.860657 760595 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0119 12:08:48.860811 760595 kubeadm.go:322] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0119 12:08:49.371830 760595 kubeadm.go:322] [bootstrap-token] Using token: 4hfay6.styd8s7n93wailm0
I0119 12:08:49.372494 760595 out.go:204]   ▪ Configuring RBAC rules ...
I0119 12:08:49.372649 760595 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0119 12:08:49.378216 760595 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0119 12:08:49.385768 760595 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0119 12:08:49.388821 760595 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0119 12:08:49.391831 760595 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0119 12:08:49.395023 760595 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0119 12:08:49.405150 760595 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0119 12:08:49.571409 760595 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0119 12:08:49.783512 760595 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0119 12:08:49.784413 760595 kubeadm.go:322]
I0119 12:08:49.784589 760595 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0119 12:08:49.784601 760595 kubeadm.go:322]
I0119 12:08:49.784723 760595 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0119 12:08:49.784731 760595 kubeadm.go:322]
I0119 12:08:49.784766 760595 kubeadm.go:322]   mkdir -p $HOME/.kube
I0119 12:08:49.784852 760595 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0119 12:08:49.784931 760595 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0119 12:08:49.784937 760595 kubeadm.go:322]
I0119 12:08:49.785012 760595 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0119 12:08:49.785017 760595 kubeadm.go:322]
I0119 12:08:49.785097 760595 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
I0119 12:08:49.785109 760595 kubeadm.go:322]
I0119 12:08:49.785197 760595 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0119 12:08:49.785305 760595 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0119 12:08:49.785413 760595 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0119 12:08:49.785418 760595 kubeadm.go:322]
I0119 12:08:49.785539 760595 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0119 12:08:49.785679 760595 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0119 12:08:49.785690 760595 kubeadm.go:322]
I0119 12:08:49.785807 760595 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4hfay6.styd8s7n93wailm0 \
I0119 12:08:49.785960 760595 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:521e4d74dcc39809e855e1dd7b51daa16ec9fe81194b240b310d7a4c5839d057 \
I0119 12:08:49.786003 760595 kubeadm.go:322] 	--control-plane
I0119 12:08:49.786010 760595 kubeadm.go:322]
I0119 12:08:49.786172 760595 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0119 12:08:49.786178 760595 kubeadm.go:322]
I0119 12:08:49.786315 760595 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4hfay6.styd8s7n93wailm0 \
I0119 12:08:49.786458 760595 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:521e4d74dcc39809e855e1dd7b51daa16ec9fe81194b240b310d7a4c5839d057
I0119 12:08:49.788787 760595 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0119 12:08:49.788804 760595 cni.go:84] Creating CNI manager for ""
I0119 12:08:49.788821 760595 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0119 12:08:49.789651 760595 out.go:177] 🔗 Configuring bridge CNI (Container Networking Interface) ...
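The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key. It can be recomputed with the standard openssl recipe from the kubeadm documentation; note that minikube keeps the CA at /var/lib/minikube/certs/ca.crt rather than the stock /etc/kubernetes/pki location:

    # Inside `minikube ssh`: recompute the discovery hash; the output should
    # match the sha256:521e4d74... value printed above
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'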
I0119 12:08:49.790212 760595 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0119 12:08:49.800424 760595 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0119 12:08:49.816542 760595 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0119 12:08:49.816637 760595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0119 12:08:49.816638 760595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=8220a6eb95f0a4d75f7f2d7b14cef975f050512d-dirty minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2024_01_19T12_08_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0119 12:08:49.903293 760595 kubeadm.go:1081] duration metric: took 86.720339ms to wait for elevateKubeSystemPrivileges.
I0119 12:08:49.903315 760595 ops.go:34] apiserver oom_adj: -16
I0119 12:08:49.910336 760595 kubeadm.go:406] StartCluster complete in 7.646610751s
I0119 12:08:49.910351 760595 settings.go:142] acquiring lock: {Name:mk6ea598d290439f039c1ac0d79689db5f84a907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 12:08:49.910383 760595 settings.go:150] Updating kubeconfig: /home/jelgar/.kube/config
I0119 12:08:49.910753 760595 lock.go:35] WriteFile acquiring /home/jelgar/.kube/config: {Name:mk9d4a29baf46115311ebe1c37cc27df98e1a94e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0119 12:08:49.910870 760595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0119 12:08:49.910969 760595 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
I0119 12:08:49.911008 760595 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0119 12:08:49.911013 760595 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0119 12:08:49.911018 760595 addons.go:231] Setting addon storage-provisioner=true in "minikube"
I0119 12:08:49.911017 760595 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I0119 12:08:49.911022 760595 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0119 12:08:49.911045 760595 host.go:66] Checking if "minikube" exists ...
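The 457-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration that the failing bridge-plugin calls in the Docker section further down are executing. A sketch for inspecting what was written, and for confirming the plugin binaries exist where the kubelet looks for them (/opt/cni/bin is the conventional path and assumed here):

    # Show the CNI config minikube wrote into the node
    minikube ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist

    # Confirm the bridge/host-local/portmap plugin binaries are present
    minikube ssh -- ls /opt/cni/bin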
I0119 12:08:49.911250 760595 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0119 12:08:49.911326 760595 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0119 12:08:49.921767 760595 addons.go:231] Setting addon default-storageclass=true in "minikube"
I0119 12:08:49.921807 760595 host.go:66] Checking if "minikube" exists ...
I0119 12:08:49.922343 760595 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0119 12:08:49.923729 760595 out.go:177]   ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0119 12:08:49.924513 760595 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0119 12:08:49.924524 760595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0119 12:08:49.924586 760595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 12:08:49.928217 760595 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0119 12:08:49.928238 760595 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0119 12:08:49.928804 760595 out.go:177] 🔎 Verifying Kubernetes components...
I0119 12:08:49.929791 760595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0119 12:08:49.934764 760595 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I0119 12:08:49.934777 760595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0119 12:08:49.934830 760595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0119 12:08:49.936747 760595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jelgar/.minikube/machines/minikube/id_rsa Username:docker}
I0119 12:08:49.946905 760595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jelgar/.minikube/machines/minikube/id_rsa Username:docker}
I0119 12:08:49.959381 760595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0119 12:08:49.959812 760595 api_server.go:52] waiting for apiserver process to appear ...
I0119 12:08:49.959861 760595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0119 12:08:50.026296 760595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0119 12:08:50.034025 760595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0119 12:08:50.259265 760595 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0119 12:08:50.259288 760595 api_server.go:72] duration metric: took 331.03385ms to wait for apiserver process to appear ...
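Two things are verified at this point: the coredns ConfigMap was rewritten (the sed pipeline above injects a hosts block mapping host.minikube.internal to the host-side gateway 192.168.49.1), and the apiserver answers /healthz, which is what the entries that follow poll. Both checks can be reproduced with kubectl once the cluster is up:

    # Show the injected host record in the live Corefile
    kubectl -n kube-system get configmap coredns -o yaml | grep -B 1 -A 3 'hosts {'

    # The same health probe minikube performs against https://192.168.49.2:8443/healthz
    kubectl get --raw /healthz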
I0119 12:08:50.259292 760595 api_server.go:88] waiting for apiserver healthz status ...
I0119 12:08:50.259300 760595 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0119 12:08:50.262045 760595 api_server.go:279] https://192.168.49.2:8443/healthz returned 200: ok
I0119 12:08:50.263115 760595 api_server.go:141] control plane version: v1.28.3
I0119 12:08:50.263122 760595 api_server.go:131] duration metric: took 3.826707ms to wait for apiserver health ...
I0119 12:08:50.263125 760595 system_pods.go:43] waiting for kube-system pods to appear ...
I0119 12:08:50.266186 760595 system_pods.go:59] 4 kube-system pods found
I0119 12:08:50.266198 760595 system_pods.go:61] "etcd-minikube" [c7fc37fd-ab0c-4c41-91da-1c69a7617898] Pending
I0119 12:08:50.266201 760595 system_pods.go:61] "kube-apiserver-minikube" [a9a3238a-b62d-44e6-830b-2d13411e0fb6] Pending
I0119 12:08:50.266204 760595 system_pods.go:61] "kube-controller-manager-minikube" [b6fb3668-92de-4467-9a99-d6829f52f5a1] Pending
I0119 12:08:50.266224 760595 system_pods.go:61] "kube-scheduler-minikube" [d6429edc-37f9-45d1-84a8-70f563a7d7d8] Pending
I0119 12:08:50.266227 760595 system_pods.go:74] duration metric: took 3.099304ms to wait for pod list to return data ...
I0119 12:08:50.266237 760595 kubeadm.go:581] duration metric: took 337.984359ms to wait for : map[apiserver:true system_pods:true] ...
I0119 12:08:50.266245 760595 node_conditions.go:102] verifying NodePressure condition ...
I0119 12:08:50.268045 760595 node_conditions.go:122] node storage ephemeral capacity is 772966856Ki
I0119 12:08:50.268056 760595 node_conditions.go:123] node cpu capacity is 16
I0119 12:08:50.268063 760595 node_conditions.go:105] duration metric: took 1.81549ms to run NodePressure ...
I0119 12:08:50.268070 760595 start.go:228] waiting for startup goroutines ...
I0119 12:08:50.339450 760595 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0119 12:08:50.339915 760595 addons.go:502] enable addons completed in 428.951708ms: enabled=[storage-provisioner default-storageclass]
I0119 12:08:50.339929 760595 start.go:233] waiting for cluster config update ...
I0119 12:08:50.339936 760595 start.go:242] writing updated cluster config ...
I0119 12:08:50.340100 760595 ssh_runner.go:195] Run: rm -f paused
I0119 12:08:50.390684 760595 start.go:600] kubectl: 1.29.0, cluster: 1.28.3 (minor skew: 1)
I0119 12:08:50.391682 760595 out.go:177] 🏄 Done!
kubectl is now configured to use "minikube" cluster and "default" namespace by default * * ==> Docker <== * Jan 19 12:11:36 minikube cri-dockerd[1279]: time="2024-01-19T12:11:36Z" level=error msg="Error adding pod kube-system/coredns-5dd5756b68-wjn26 to network {docker 30ab6cbcdbbd0cb9e5247601edc0fa65b53895d4a6ab6a9ee89c6b8fa78b1bdb}:/proc/26272/ns/net:bridge:bridge: plugin type=\"bridge\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-10d16d32eb4729d298b6ca34 -d 10.244.0.150/16 -j ACCEPT -m comment --comment name: \"bridge\" id: \"30ab6cbcdbbd0cb9e5247601edc0fa65b53895d4a6ab6a9ee89c6b8fa78b1bdb\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:36 minikube cri-dockerd[1279]: time="2024-01-19T12:11:36Z" level=error msg="Error deleting pod kube-system/coredns-5dd5756b68-wjn26 from network {docker 30ab6cbcdbbd0cb9e5247601edc0fa65b53895d4a6ab6a9ee89c6b8fa78b1bdb}:/proc/26272/ns/net:bridge:bridge: plugin type=\"bridge\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.150 -j CNI-10d16d32eb4729d298b6ca34 -m comment --comment name: \"bridge\" id: \"30ab6cbcdbbd0cb9e5247601edc0fa65b53895d4a6ab6a9ee89c6b8fa78b1bdb\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:36 minikube dockerd[1057]: time="2024-01-19T12:11:36.178797578Z" level=info msg="ignoring event" container=30ab6cbcdbbd0cb9e5247601edc0fa65b53895d4a6ab6a9ee89c6b8fa78b1bdb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 19 12:11:36 minikube cri-dockerd[1279]: time="2024-01-19T12:11:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-wjn26_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"30ab6cbcdbbd0cb9e5247601edc0fa65b53895d4a6ab6a9ee89c6b8fa78b1bdb\"" Jan 19 12:11:37 minikube cri-dockerd[1279]: time="2024-01-19T12:11:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/61e8151e66134dcbff24a5177ea9c81d481296907d8ca640898854da5096e4d2/resolv.conf as [nameserver 192.168.49.1 search localdomain options ndots:0]" Jan 19 12:11:37 minikube cri-dockerd[1279]: time="2024-01-19T12:11:37Z" level=error msg="Error adding pod kube-system/coredns-5dd5756b68-wjn26 to network {docker 61e8151e66134dcbff24a5177ea9c81d481296907d8ca640898854da5096e4d2}:/proc/26430/ns/net:bridge:bridge: plugin type=\"bridge\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-d51f727544fc90005a3ef5ed -d 10.244.0.151/16 -j ACCEPT -m comment --comment name: \"bridge\" id: \"61e8151e66134dcbff24a5177ea9c81d481296907d8ca640898854da5096e4d2\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:37 minikube cri-dockerd[1279]: time="2024-01-19T12:11:37Z" level=error msg="Error deleting pod kube-system/coredns-5dd5756b68-wjn26 from network {docker 61e8151e66134dcbff24a5177ea9c81d481296907d8ca640898854da5096e4d2}:/proc/26430/ns/net:bridge:bridge: plugin type=\"bridge\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.151 -j CNI-d51f727544fc90005a3ef5ed -m comment --comment name: 
\"bridge\" id: \"61e8151e66134dcbff24a5177ea9c81d481296907d8ca640898854da5096e4d2\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:37 minikube dockerd[1057]: time="2024-01-19T12:11:37.200746999Z" level=info msg="ignoring event" container=61e8151e66134dcbff24a5177ea9c81d481296907d8ca640898854da5096e4d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 19 12:11:37 minikube cri-dockerd[1279]: time="2024-01-19T12:11:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-wjn26_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"61e8151e66134dcbff24a5177ea9c81d481296907d8ca640898854da5096e4d2\"" Jan 19 12:11:38 minikube cri-dockerd[1279]: time="2024-01-19T12:11:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/50618780a1287f65e8e6ab9eb0f95d14e2af5cab4d84fa9b8b1c61ce02af5183/resolv.conf as [nameserver 192.168.49.1 search localdomain options ndots:0]" Jan 19 12:11:38 minikube cri-dockerd[1279]: time="2024-01-19T12:11:38Z" level=error msg="Error adding pod kube-system/coredns-5dd5756b68-wjn26 to network {docker 50618780a1287f65e8e6ab9eb0f95d14e2af5cab4d84fa9b8b1c61ce02af5183}:/proc/26573/ns/net:bridge:bridge: plugin type=\"bridge\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-15355e1e39d3a5cf181daa29 -d 10.244.0.152/16 -j ACCEPT -m comment --comment name: \"bridge\" id: \"50618780a1287f65e8e6ab9eb0f95d14e2af5cab4d84fa9b8b1c61ce02af5183\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:38 minikube cri-dockerd[1279]: time="2024-01-19T12:11:38Z" level=error msg="Error deleting pod kube-system/coredns-5dd5756b68-wjn26 from network {docker 50618780a1287f65e8e6ab9eb0f95d14e2af5cab4d84fa9b8b1c61ce02af5183}:/proc/26573/ns/net:bridge:bridge: plugin type=\"bridge\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.152 -j CNI-15355e1e39d3a5cf181daa29 -m comment --comment name: \"bridge\" id: \"50618780a1287f65e8e6ab9eb0f95d14e2af5cab4d84fa9b8b1c61ce02af5183\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:38 minikube dockerd[1057]: time="2024-01-19T12:11:38.207221233Z" level=info msg="ignoring event" container=50618780a1287f65e8e6ab9eb0f95d14e2af5cab4d84fa9b8b1c61ce02af5183 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 19 12:11:38 minikube dockerd[1057]: time="2024-01-19T12:11:38.726679928Z" level=info msg="ignoring event" container=99319ebbfa7880882a1ff9258d59856eafb04521060a2b0e7bf7fe7e5c09dd10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 19 12:11:38 minikube cri-dockerd[1279]: time="2024-01-19T12:11:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-wjn26_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"50618780a1287f65e8e6ab9eb0f95d14e2af5cab4d84fa9b8b1c61ce02af5183\"" Jan 19 12:11:39 minikube 
cri-dockerd[1279]: time="2024-01-19T12:11:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/aa79cd352b6503a94418cff2db08f199c5724512bf3d7712cb5fd5506c36dad4/resolv.conf as [nameserver 192.168.49.1 search localdomain options ndots:0]" Jan 19 12:11:39 minikube cri-dockerd[1279]: time="2024-01-19T12:11:39Z" level=error msg="Error adding pod kube-system/coredns-5dd5756b68-wjn26 to network {docker aa79cd352b6503a94418cff2db08f199c5724512bf3d7712cb5fd5506c36dad4}:/proc/26737/ns/net:bridge:bridge: plugin type=\"bridge\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-3fdcbd9fcb2abb545771a5fc -d 10.244.0.153/16 -j ACCEPT -m comment --comment name: \"bridge\" id: \"aa79cd352b6503a94418cff2db08f199c5724512bf3d7712cb5fd5506c36dad4\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:39 minikube cri-dockerd[1279]: time="2024-01-19T12:11:39Z" level=error msg="Error deleting pod kube-system/coredns-5dd5756b68-wjn26 from network {docker aa79cd352b6503a94418cff2db08f199c5724512bf3d7712cb5fd5506c36dad4}:/proc/26737/ns/net:bridge:bridge: plugin type=\"bridge\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.153 -j CNI-3fdcbd9fcb2abb545771a5fc -m comment --comment name: \"bridge\" id: \"aa79cd352b6503a94418cff2db08f199c5724512bf3d7712cb5fd5506c36dad4\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:39 minikube dockerd[1057]: time="2024-01-19T12:11:39.317714080Z" level=info msg="ignoring event" container=aa79cd352b6503a94418cff2db08f199c5724512bf3d7712cb5fd5506c36dad4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 19 12:11:40 minikube cri-dockerd[1279]: time="2024-01-19T12:11:40Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-wjn26_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"aa79cd352b6503a94418cff2db08f199c5724512bf3d7712cb5fd5506c36dad4\"" Jan 19 12:11:40 minikube cri-dockerd[1279]: time="2024-01-19T12:11:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1e62312152e66b88cfda357bd30aa73aa2295f1d3173c2a84cf07bf4b5f8d521/resolv.conf as [nameserver 192.168.49.1 search localdomain options ndots:0]" Jan 19 12:11:40 minikube cri-dockerd[1279]: time="2024-01-19T12:11:40Z" level=error msg="Error adding pod kube-system/coredns-5dd5756b68-wjn26 to network {docker 1e62312152e66b88cfda357bd30aa73aa2295f1d3173c2a84cf07bf4b5f8d521}:/proc/26887/ns/net:bridge:bridge: plugin type=\"bridge\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-64d4a156465bf89e93406e43 -d 10.244.0.154/16 -j ACCEPT -m comment --comment name: \"bridge\" id: \"1e62312152e66b88cfda357bd30aa73aa2295f1d3173c2a84cf07bf4b5f8d521\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:40 minikube cri-dockerd[1279]: time="2024-01-19T12:11:40Z" level=error msg="Error deleting pod kube-system/coredns-5dd5756b68-wjn26 from network {docker 1e62312152e66b88cfda357bd30aa73aa2295f1d3173c2a84cf07bf4b5f8d521}:/proc/26887/ns/net:bridge:bridge: plugin 
type=\"bridge\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.154 -j CNI-64d4a156465bf89e93406e43 -m comment --comment name: \"bridge\" id: \"1e62312152e66b88cfda357bd30aa73aa2295f1d3173c2a84cf07bf4b5f8d521\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:40 minikube dockerd[1057]: time="2024-01-19T12:11:40.357203169Z" level=info msg="ignoring event" container=1e62312152e66b88cfda357bd30aa73aa2295f1d3173c2a84cf07bf4b5f8d521 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 19 12:11:41 minikube cri-dockerd[1279]: time="2024-01-19T12:11:41Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-wjn26_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"1e62312152e66b88cfda357bd30aa73aa2295f1d3173c2a84cf07bf4b5f8d521\"" Jan 19 12:11:41 minikube cri-dockerd[1279]: time="2024-01-19T12:11:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c15bd5902b17b5f337201ce53636582790ff85f494a3c3ac2b2cbcdf02e67a4c/resolv.conf as [nameserver 192.168.49.1 search localdomain options ndots:0]" Jan 19 12:11:41 minikube cri-dockerd[1279]: time="2024-01-19T12:11:41Z" level=error msg="Error adding pod kube-system/coredns-5dd5756b68-wjn26 to network {docker c15bd5902b17b5f337201ce53636582790ff85f494a3c3ac2b2cbcdf02e67a4c}:/proc/27081/ns/net:bridge:bridge: plugin type=\"bridge\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-679d91e861a5d0d44dd474a1 -d 10.244.0.155/16 -j ACCEPT -m comment --comment name: \"bridge\" id: \"c15bd5902b17b5f337201ce53636582790ff85f494a3c3ac2b2cbcdf02e67a4c\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:41 minikube cri-dockerd[1279]: time="2024-01-19T12:11:41Z" level=error msg="Error deleting pod kube-system/coredns-5dd5756b68-wjn26 from network {docker c15bd5902b17b5f337201ce53636582790ff85f494a3c3ac2b2cbcdf02e67a4c}:/proc/27081/ns/net:bridge:bridge: plugin type=\"bridge\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.155 -j CNI-679d91e861a5d0d44dd474a1 -m comment --comment name: \"bridge\" id: \"c15bd5902b17b5f337201ce53636582790ff85f494a3c3ac2b2cbcdf02e67a4c\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:41 minikube dockerd[1057]: time="2024-01-19T12:11:41.385172439Z" level=info msg="ignoring event" container=c15bd5902b17b5f337201ce53636582790ff85f494a3c3ac2b2cbcdf02e67a4c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 19 12:11:42 minikube cri-dockerd[1279]: time="2024-01-19T12:11:42Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-wjn26_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"c15bd5902b17b5f337201ce53636582790ff85f494a3c3ac2b2cbcdf02e67a4c\"" Jan 19 12:11:42 minikube cri-dockerd[1279]: time="2024-01-19T12:11:42Z" level=info msg="Will attempt to re-write config file 
/var/lib/docker/containers/de68ed62c878ca5f7825bbb6d081e54c15ede7f5b9470e9ae6b626dae387b960/resolv.conf as [nameserver 192.168.49.1 search localdomain options ndots:0]" Jan 19 12:11:42 minikube cri-dockerd[1279]: time="2024-01-19T12:11:42Z" level=error msg="Error adding pod kube-system/coredns-5dd5756b68-wjn26 to network {docker de68ed62c878ca5f7825bbb6d081e54c15ede7f5b9470e9ae6b626dae387b960}:/proc/27237/ns/net:bridge:bridge: plugin type=\"bridge\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-7287bd89c04218efc32ec3d5 -d 10.244.0.156/16 -j ACCEPT -m comment --comment name: \"bridge\" id: \"de68ed62c878ca5f7825bbb6d081e54c15ede7f5b9470e9ae6b626dae387b960\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:42 minikube cri-dockerd[1279]: time="2024-01-19T12:11:42Z" level=error msg="Error deleting pod kube-system/coredns-5dd5756b68-wjn26 from network {docker de68ed62c878ca5f7825bbb6d081e54c15ede7f5b9470e9ae6b626dae387b960}:/proc/27237/ns/net:bridge:bridge: plugin type=\"bridge\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.156 -j CNI-7287bd89c04218efc32ec3d5 -m comment --comment name: \"bridge\" id: \"de68ed62c878ca5f7825bbb6d081e54c15ede7f5b9470e9ae6b626dae387b960\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:42 minikube dockerd[1057]: time="2024-01-19T12:11:42.494230725Z" level=info msg="ignoring event" container=de68ed62c878ca5f7825bbb6d081e54c15ede7f5b9470e9ae6b626dae387b960 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 19 12:11:43 minikube cri-dockerd[1279]: time="2024-01-19T12:11:43Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-wjn26_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"de68ed62c878ca5f7825bbb6d081e54c15ede7f5b9470e9ae6b626dae387b960\"" Jan 19 12:11:43 minikube cri-dockerd[1279]: time="2024-01-19T12:11:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a4e972f42b7aa7ea37a7e9cacc482aa1a13aa09f86c0ea0ac663a7559a9a3e5/resolv.conf as [nameserver 192.168.49.1 search localdomain options ndots:0]" Jan 19 12:11:43 minikube cri-dockerd[1279]: time="2024-01-19T12:11:43Z" level=error msg="Error adding pod kube-system/coredns-5dd5756b68-wjn26 to network {docker 7a4e972f42b7aa7ea37a7e9cacc482aa1a13aa09f86c0ea0ac663a7559a9a3e5}:/proc/27387/ns/net:bridge:bridge: plugin type=\"bridge\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-7954a1e8406cca291ed0e243 -d 10.244.0.157/16 -j ACCEPT -m comment --comment name: \"bridge\" id: \"7a4e972f42b7aa7ea37a7e9cacc482aa1a13aa09f86c0ea0ac663a7559a9a3e5\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:43 minikube cri-dockerd[1279]: time="2024-01-19T12:11:43Z" level=error msg="Error deleting pod kube-system/coredns-5dd5756b68-wjn26 from network {docker 7a4e972f42b7aa7ea37a7e9cacc482aa1a13aa09f86c0ea0ac663a7559a9a3e5}:/proc/27387/ns/net:bridge:bridge: plugin type=\"bridge\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.157 -j 
CNI-7954a1e8406cca291ed0e243 -m comment --comment name: \"bridge\" id: \"7a4e972f42b7aa7ea37a7e9cacc482aa1a13aa09f86c0ea0ac663a7559a9a3e5\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:43 minikube dockerd[1057]: time="2024-01-19T12:11:43.536845894Z" level=info msg="ignoring event" container=7a4e972f42b7aa7ea37a7e9cacc482aa1a13aa09f86c0ea0ac663a7559a9a3e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 19 12:11:44 minikube cri-dockerd[1279]: time="2024-01-19T12:11:44Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-wjn26_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7a4e972f42b7aa7ea37a7e9cacc482aa1a13aa09f86c0ea0ac663a7559a9a3e5\"" Jan 19 12:11:44 minikube cri-dockerd[1279]: time="2024-01-19T12:11:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a657a7c6d6982a33fe2952265309dd89fc42feaad70dfc77f4915100b3e2a933/resolv.conf as [nameserver 192.168.49.1 search localdomain options ndots:0]" Jan 19 12:11:44 minikube cri-dockerd[1279]: time="2024-01-19T12:11:44Z" level=error msg="Error adding pod kube-system/coredns-5dd5756b68-wjn26 to network {docker a657a7c6d6982a33fe2952265309dd89fc42feaad70dfc77f4915100b3e2a933}:/proc/27532/ns/net:bridge:bridge: plugin type=\"bridge\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-18b3b78794bb98d4f2b4033f -d 10.244.0.158/16 -j ACCEPT -m comment --comment name: \"bridge\" id: \"a657a7c6d6982a33fe2952265309dd89fc42feaad70dfc77f4915100b3e2a933\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:44 minikube cri-dockerd[1279]: time="2024-01-19T12:11:44Z" level=error msg="Error deleting pod kube-system/coredns-5dd5756b68-wjn26 from network {docker a657a7c6d6982a33fe2952265309dd89fc42feaad70dfc77f4915100b3e2a933}:/proc/27532/ns/net:bridge:bridge: plugin type=\"bridge\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.158 -j CNI-18b3b78794bb98d4f2b4033f -m comment --comment name: \"bridge\" id: \"a657a7c6d6982a33fe2952265309dd89fc42feaad70dfc77f4915100b3e2a933\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:44 minikube dockerd[1057]: time="2024-01-19T12:11:44.605978100Z" level=info msg="ignoring event" container=a657a7c6d6982a33fe2952265309dd89fc42feaad70dfc77f4915100b3e2a933 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 19 12:11:45 minikube cri-dockerd[1279]: time="2024-01-19T12:11:45Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-wjn26_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a657a7c6d6982a33fe2952265309dd89fc42feaad70dfc77f4915100b3e2a933\"" Jan 19 12:11:45 minikube cri-dockerd[1279]: time="2024-01-19T12:11:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e12e61ea38bb017550f61342460cca5eedeeef8ab20b60cd8e1aea0fafd64a3f/resolv.conf as [nameserver 
192.168.49.1 search localdomain options ndots:0]" Jan 19 12:11:45 minikube cri-dockerd[1279]: time="2024-01-19T12:11:45Z" level=error msg="Error adding pod kube-system/coredns-5dd5756b68-wjn26 to network {docker e12e61ea38bb017550f61342460cca5eedeeef8ab20b60cd8e1aea0fafd64a3f}:/proc/27684/ns/net:bridge:bridge: plugin type=\"bridge\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-2a3bf98dcfeb05257a937caa -d 10.244.0.159/16 -j ACCEPT -m comment --comment name: \"bridge\" id: \"e12e61ea38bb017550f61342460cca5eedeeef8ab20b60cd8e1aea0fafd64a3f\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:45 minikube cri-dockerd[1279]: time="2024-01-19T12:11:45Z" level=error msg="Error deleting pod kube-system/coredns-5dd5756b68-wjn26 from network {docker e12e61ea38bb017550f61342460cca5eedeeef8ab20b60cd8e1aea0fafd64a3f}:/proc/27684/ns/net:bridge:bridge: plugin type=\"bridge\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.159 -j CNI-2a3bf98dcfeb05257a937caa -m comment --comment name: \"bridge\" id: \"e12e61ea38bb017550f61342460cca5eedeeef8ab20b60cd8e1aea0fafd64a3f\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:45 minikube dockerd[1057]: time="2024-01-19T12:11:45.646173248Z" level=info msg="ignoring event" container=e12e61ea38bb017550f61342460cca5eedeeef8ab20b60cd8e1aea0fafd64a3f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 19 12:11:46 minikube cri-dockerd[1279]: time="2024-01-19T12:11:46Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-wjn26_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"e12e61ea38bb017550f61342460cca5eedeeef8ab20b60cd8e1aea0fafd64a3f\"" Jan 19 12:11:46 minikube cri-dockerd[1279]: time="2024-01-19T12:11:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e/resolv.conf as [nameserver 192.168.49.1 search localdomain options ndots:0]" Jan 19 12:11:46 minikube cri-dockerd[1279]: time="2024-01-19T12:11:46Z" level=error msg="Error adding pod kube-system/coredns-5dd5756b68-wjn26 to network {docker fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e}:/proc/27828/ns/net:bridge:bridge: plugin type=\"bridge\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-8df4d110fcddc2155fbcd834 -d 10.244.0.160/16 -j ACCEPT -m comment --comment name: \"bridge\" id: \"fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:46 minikube cri-dockerd[1279]: time="2024-01-19T12:11:46Z" level=error msg="Error deleting pod kube-system/coredns-5dd5756b68-wjn26 from network {docker fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e}:/proc/27828/ns/net:bridge:bridge: plugin type=\"bridge\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.160 -j CNI-8df4d110fcddc2155fbcd834 -m comment --comment name: \"bridge\" id: 
\"fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:46 minikube dockerd[1057]: time="2024-01-19T12:11:46.702312389Z" level=info msg="ignoring event" container=fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 19 12:11:47 minikube cri-dockerd[1279]: time="2024-01-19T12:11:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-wjn26_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e\"" Jan 19 12:11:47 minikube cri-dockerd[1279]: time="2024-01-19T12:11:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9/resolv.conf as [nameserver 192.168.49.1 search localdomain options ndots:0]" Jan 19 12:11:47 minikube cri-dockerd[1279]: time="2024-01-19T12:11:47Z" level=error msg="Error adding pod kube-system/coredns-5dd5756b68-wjn26 to network {docker c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9}:/proc/27984/ns/net:bridge:bridge: plugin type=\"bridge\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-16635ebd9950c3cdb4c14dca -d 10.244.0.161/16 -j ACCEPT -m comment --comment name: \"bridge\" id: \"c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:47 minikube cri-dockerd[1279]: time="2024-01-19T12:11:47Z" level=error msg="Error deleting pod kube-system/coredns-5dd5756b68-wjn26 from network {docker c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9}:/proc/27984/ns/net:bridge:bridge: plugin type=\"bridge\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.161 -j CNI-16635ebd9950c3cdb4c14dca -m comment --comment name: \"bridge\" id: \"c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" Jan 19 12:11:47 minikube dockerd[1057]: time="2024-01-19T12:11:47.767803658Z" level=info msg="ignoring event" container=c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 19 12:11:48 minikube cri-dockerd[1279]: time="2024-01-19T12:11:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-wjn26_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9\"" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD 99319ebbfa788 6e38f40d628db 40 seconds ago Exited storage-provisioner 3 1eb619732341c storage-provisioner f97d4b5e18150 bfc896cf80fba 2 minutes ago Running kube-proxy 0 e3e8939ec27a2 kube-proxy-4g7fb 67fe33a864bb2 6d1b4fd1b182d 3 
minutes ago Running kube-scheduler 0 0e46974e55c1b kube-scheduler-minikube 55a2ae0feeabd 10baa1ca17068 3 minutes ago Running kube-controller-manager 0 b8fa540dc8e34 kube-controller-manager-minikube 3f8eadada0b17 73deb9a3f7025 3 minutes ago Running etcd 0 dab3f2349dbe5 etcd-minikube 990b8f7807a73 5374347291230 3 minutes ago Running kube-apiserver 0 85d47ce92298f kube-apiserver-minikube * * ==> describe nodes <== * Name: minikube Roles: control-plane Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=8220a6eb95f0a4d75f7f2d7b14cef975f050512d-dirty minikube.k8s.io/name=minikube minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2024_01_19T12_08_49_0700 minikube.k8s.io/version=v1.32.0 node-role.kubernetes.io/control-plane= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Fri, 19 Jan 2024 12:08:46 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Fri, 19 Jan 2024 12:11:43 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Fri, 19 Jan 2024 12:09:10 +0000 Fri, 19 Jan 2024 12:08:45 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Fri, 19 Jan 2024 12:09:10 +0000 Fri, 19 Jan 2024 12:08:45 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Fri, 19 Jan 2024 12:09:10 +0000 Fri, 19 Jan 2024 12:08:45 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Fri, 19 Jan 2024 12:09:10 +0000 Fri, 19 Jan 2024 12:08:46 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.2 Hostname: minikube Capacity: cpu: 16 ephemeral-storage: 772966856Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15978148Ki pods: 110 Allocatable: cpu: 16 ephemeral-storage: 772966856Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15978148Ki pods: 110 System Info: Machine ID: cd2c719159d745ffb1dd427b933ddb37 System UUID: d5e5209a-33de-4f6b-8ea8-716274b038c3 Boot ID: c6b993e9-2593-4dac-8d13-454e202ac57a Kernel Version: 6.1.64-1-MANJARO OS Image: Ubuntu 22.04.3 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://24.0.7 Kubelet Version: v1.28.3 Kube-Proxy Version: v1.28.3 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (7 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- kube-system coredns-5dd5756b68-wjn26 100m (0%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (1%!)(MISSING) 2m46s kube-system etcd-minikube 100m (0%!)(MISSING) 0 (0%!)(MISSING) 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) 2m59s kube-system kube-apiserver-minikube 250m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2m59s kube-system kube-controller-manager-minikube 200m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2m59s kube-system kube-proxy-4g7fb 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2m46s kube-system kube-scheduler-minikube 100m (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2m59s kube-system storage-provisioner 0 (0%!)(MISSING) 0 
(0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2m58s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (4%!)(MISSING) 0 (0%!)(MISSING) memory 170Mi (1%!)(MISSING) 170Mi (1%!)(MISSING) ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 2m45s kube-proxy Normal Starting 3m5s kubelet Starting kubelet. Normal NodeHasSufficientMemory 3m4s (x9 over 3m5s) kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 3m4s (x8 over 3m5s) kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 3m4s (x7 over 3m5s) kubelet Node minikube status is now: NodeHasSufficientPID Normal Starting 2m59s kubelet Starting kubelet. Normal NodeAllocatableEnforced 2m59s kubelet Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 2m59s kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 2m59s kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 2m59s kubelet Node minikube status is now: NodeHasSufficientPID Normal RegisteredNode 2m47s node-controller Node minikube event: Registered Node minikube in Controller * * ==> dmesg <== * * * ==> etcd [3f8eadada0b1] <== * {"level":"warn","ts":"2024-01-19T12:08:44.694126Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."} {"level":"info","ts":"2024-01-19T12:08:44.694231Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]} {"level":"warn","ts":"2024-01-19T12:08:44.694339Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. 
This is not recommended for production."} {"level":"info","ts":"2024-01-19T12:08:44.694369Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]} {"level":"info","ts":"2024-01-19T12:08:44.694402Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2024-01-19T12:08:44.694703Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]} {"level":"info","ts":"2024-01-19T12:08:44.694786Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":16,"max-cpu-available":16,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2024-01-19T12:08:44.695956Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"974.696ยตs"} {"level":"info","ts":"2024-01-19T12:08:44.698033Z","caller":"etcdserver/raft.go:495","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"} {"level":"info","ts":"2024-01-19T12:08:44.698074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"} {"level":"info","ts":"2024-01-19T12:08:44.69809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"} {"level":"info","ts":"2024-01-19T12:08:44.698096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2024-01-19T12:08:44.698102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"} {"level":"info","ts":"2024-01-19T12:08:44.698127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} 
{"level":"warn","ts":"2024-01-19T12:08:44.699497Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2024-01-19T12:08:44.700258Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2024-01-19T12:08:44.700793Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2024-01-19T12:08:44.70158Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.9","cluster-version":"to_be_decided"} {"level":"info","ts":"2024-01-19T12:08:44.701661Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2024-01-19T12:08:44.701692Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} {"level":"info","ts":"2024-01-19T12:08:44.701717Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} {"level":"info","ts":"2024-01-19T12:08:44.701722Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} {"level":"info","ts":"2024-01-19T12:08:44.702282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} {"level":"info","ts":"2024-01-19T12:08:44.702495Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]} {"level":"info","ts":"2024-01-19T12:08:44.703146Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2024-01-19T12:08:44.703197Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"} {"level":"info","ts":"2024-01-19T12:08:44.703217Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"} {"level":"info","ts":"2024-01-19T12:08:44.703243Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2024-01-19T12:08:44.703254Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2024-01-19T12:08:45.698785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"} {"level":"info","ts":"2024-01-19T12:08:45.698853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"} 
{"level":"info","ts":"2024-01-19T12:08:45.698908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"} {"level":"info","ts":"2024-01-19T12:08:45.698934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"} {"level":"info","ts":"2024-01-19T12:08:45.69895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"} {"level":"info","ts":"2024-01-19T12:08:45.698979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"} {"level":"info","ts":"2024-01-19T12:08:45.698997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"} {"level":"info","ts":"2024-01-19T12:08:45.707201Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2024-01-19T12:08:45.707783Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"} {"level":"info","ts":"2024-01-19T12:08:45.707857Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} {"level":"info","ts":"2024-01-19T12:08:45.70801Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} {"level":"info","ts":"2024-01-19T12:08:45.708202Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} {"level":"info","ts":"2024-01-19T12:08:45.708246Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} {"level":"info","ts":"2024-01-19T12:08:45.708241Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"} {"level":"info","ts":"2024-01-19T12:08:45.708395Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2024-01-19T12:08:45.708435Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2024-01-19T12:08:45.709763Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"} {"level":"info","ts":"2024-01-19T12:08:45.710028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"} * * ==> kernel <== * 12:11:48 up 4 days, 18:22, 0 users, load average: 1.55, 1.59, 1.78 Linux minikube 6.1.64-1-MANJARO #1 SMP PREEMPT_DYNAMIC Tue Nov 28 20:31:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 22.04.3 LTS" * * ==> kube-apiserver [990b8f7807a7] <== * I0119 12:08:46.402373 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0119 12:08:46.402460 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key" I0119 12:08:46.402482 1 controller.go:134] Starting OpenAPI controller I0119 12:08:46.402491 1 controller.go:85] Starting OpenAPI V3 controller I0119 12:08:46.402498 1 naming_controller.go:291] Starting NamingConditionController I0119 
12:08:46.402466 1 customresource_discovery_controller.go:289] Starting DiscoveryController I0119 12:08:46.402624 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0119 12:08:46.402627 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0119 12:08:46.402631 1 crd_finalizer.go:266] Starting CRDFinalizer I0119 12:08:46.402662 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0119 12:08:46.402636 1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller I0119 12:08:46.402633 1 establishing_controller.go:76] Starting EstablishingController I0119 12:08:46.402636 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0119 12:08:46.402691 1 available_controller.go:423] Starting AvailableConditionController I0119 12:08:46.402651 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0119 12:08:46.402702 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0119 12:08:46.402710 1 controller.go:80] Starting OpenAPI V3 AggregationController I0119 12:08:46.402716 1 controller.go:78] Starting OpenAPI AggregationController I0119 12:08:46.402729 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I0119 12:08:46.402748 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0119 12:08:46.402756 1 handler_discovery.go:412] Starting ResourceDiscoveryManager I0119 12:08:46.403031 1 gc_controller.go:78] Starting apiserver lease garbage collector I0119 12:08:46.403068 1 system_namespaces_controller.go:67] Starting system namespaces controller I0119 12:08:46.403087 1 controller.go:116] Starting legacy_token_tracking_controller I0119 12:08:46.403096 1 shared_informer.go:311] Waiting for caches to sync for configmaps I0119 12:08:46.403132 1 aggregator.go:164] waiting for initial CRD sync... I0119 12:08:46.403245 1 gc_controller.go:78] Starting apiserver lease garbage collector I0119 12:08:46.403453 1 apf_controller.go:372] Starting API Priority and Fairness config controller I0119 12:08:46.403656 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0119 12:08:46.403704 1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister I0119 12:08:46.469909 1 shared_informer.go:318] Caches are synced for node_authorizer E0119 12:08:46.471499 1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms" I0119 12:08:46.503528 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0119 12:08:46.503663 1 apf_controller.go:377] Running API Priority and Fairness config worker I0119 12:08:46.503682 1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process I0119 12:08:46.503810 1 shared_informer.go:318] Caches are synced for configmaps I0119 12:08:46.503810 1 shared_informer.go:318] Caches are synced for crd-autoregister I0119 12:08:46.503844 1 aggregator.go:166] initial CRD sync complete... 
I0119 12:08:46.503856 1 autoregister_controller.go:141] Starting autoregister controller I0119 12:08:46.503866 1 cache.go:32] Waiting for caches to sync for autoregister controller I0119 12:08:46.503881 1 cache.go:39] Caches are synced for autoregister controller I0119 12:08:46.503896 1 cache.go:39] Caches are synced for AvailableConditionController controller I0119 12:08:46.503931 1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller I0119 12:08:46.505815 1 controller.go:624] quota admission added evaluator for: namespaces I0119 12:08:46.674915 1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io I0119 12:08:47.417621 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000 I0119 12:08:47.421727 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000 I0119 12:08:47.421750 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist. I0119 12:08:47.780530 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0119 12:08:47.814454 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0119 12:08:47.914498 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"} W0119 12:08:47.921263 1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.49.2] I0119 12:08:47.922323 1 controller.go:624] quota admission added evaluator for: endpoints I0119 12:08:47.926263 1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io I0119 12:08:48.440887 1 controller.go:624] quota admission added evaluator for: serviceaccounts I0119 12:08:49.563716 1 controller.go:624] quota admission added evaluator for: deployments.apps I0119 12:08:49.570593 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"} I0119 12:08:49.575670 1 controller.go:624] quota admission added evaluator for: daemonsets.apps I0119 12:09:02.097143 1 controller.go:624] quota admission added evaluator for: replicasets.apps I0119 12:09:02.197485 1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps * * ==> kube-controller-manager [55a2ae0feeab] <== * I0119 12:09:01.200585 1 shared_informer.go:311] Waiting for caches to sync for deployment I0119 12:09:01.204568 1 shared_informer.go:311] Waiting for caches to sync for resource quota I0119 12:09:01.211638 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"minikube\" does not exist" I0119 12:09:01.212303 1 shared_informer.go:318] Caches are synced for service account I0119 12:09:01.213525 1 shared_informer.go:311] Waiting for caches to sync for garbage collector I0119 12:09:01.221889 1 shared_informer.go:318] Caches are synced for node I0119 12:09:01.221942 1 range_allocator.go:174] "Sending events to api server" I0119 12:09:01.221971 1 range_allocator.go:178] "Starting range CIDR allocator" I0119 12:09:01.221980 1 shared_informer.go:311] Waiting for caches to sync for cidrallocator I0119 12:09:01.221988 1 shared_informer.go:318] Caches are synced for cidrallocator I0119 12:09:01.226063 1 shared_informer.go:318] Caches are synced for taint I0119 12:09:01.226119 1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone="" I0119 
12:09:01.226148 1 taint_manager.go:206] "Starting NoExecuteTaintManager" I0119 12:09:01.226211 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="minikube" I0119 12:09:01.226271 1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal" I0119 12:09:01.226268 1 taint_manager.go:211] "Sending events to api server" I0119 12:09:01.226362 1 event.go:307] "Event occurred" object="minikube" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I0119 12:09:01.229177 1 range_allocator.go:380] "Set node PodCIDR" node="minikube" podCIDRs=["10.244.0.0/24"] I0119 12:09:01.233966 1 shared_informer.go:318] Caches are synced for PV protection I0119 12:09:01.238263 1 shared_informer.go:318] Caches are synced for crt configmap I0119 12:09:01.239436 1 shared_informer.go:318] Caches are synced for certificate-csrapproving I0119 12:09:01.243857 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving I0119 12:09:01.243892 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client I0119 12:09:01.244952 1 shared_informer.go:318] Caches are synced for attach detach I0119 12:09:01.245096 1 shared_informer.go:318] Caches are synced for ephemeral I0119 12:09:01.245106 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client I0119 12:09:01.247638 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown I0119 12:09:01.253772 1 shared_informer.go:318] Caches are synced for expand I0119 12:09:01.262234 1 shared_informer.go:318] Caches are synced for endpoint_slice I0119 12:09:01.270750 1 shared_informer.go:318] Caches are synced for GC I0119 12:09:01.273119 1 shared_informer.go:318] Caches are synced for namespace I0119 12:09:01.284979 1 shared_informer.go:318] Caches are synced for cronjob I0119 12:09:01.286295 1 shared_informer.go:318] Caches are synced for HPA I0119 12:09:01.288705 1 shared_informer.go:318] Caches are synced for ReplicaSet I0119 12:09:01.288755 1 shared_informer.go:318] Caches are synced for persistent volume I0119 12:09:01.288911 1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring I0119 12:09:01.291914 1 shared_informer.go:318] Caches are synced for job I0119 12:09:01.293058 1 shared_informer.go:318] Caches are synced for bootstrap_signer I0119 12:09:01.295304 1 shared_informer.go:318] Caches are synced for ReplicationController I0119 12:09:01.297603 1 shared_informer.go:318] Caches are synced for TTL after finished I0119 12:09:01.299801 1 shared_informer.go:318] Caches are synced for TTL I0119 12:09:01.301075 1 shared_informer.go:318] Caches are synced for endpoint I0119 12:09:01.301247 1 shared_informer.go:318] Caches are synced for deployment I0119 12:09:01.305910 1 shared_informer.go:318] Caches are synced for PVC protection I0119 12:09:01.339513 1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator I0119 12:09:01.380728 1 shared_informer.go:318] Caches are synced for disruption I0119 12:09:01.401342 1 shared_informer.go:318] Caches are synced for daemon sets I0119 12:09:01.437741 1 shared_informer.go:318] Caches are synced for stateful set I0119 12:09:01.455977 1 shared_informer.go:318] Caches are synced for resource quota I0119 12:09:01.505553 1 shared_informer.go:318] Caches are synced for resource quota I0119 12:09:01.814065 1 
shared_informer.go:318] Caches are synced for garbage collector
I0119 12:09:01.842375 1 shared_informer.go:318] Caches are synced for garbage collector
I0119 12:09:01.842412 1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
I0119 12:09:02.099439 1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 1"
I0119 12:09:02.203153 1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4g7fb"
I0119 12:09:02.301498 1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wjn26"
I0119 12:09:02.306739 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="207.299193ms"
I0119 12:09:02.312787 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.999881ms"
I0119 12:09:02.312933 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.378µs"
I0119 12:09:02.318022 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.229µs"

* 
* ==> kube-proxy [f97d4b5e1815] <==
* 
 > table="mangle" chain="KUBE-PROXY-CANARY"
I0119 12:09:02.918653 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0119 12:09:02.918741 1 shared_informer.go:318] Caches are synced for node config
I0119 12:09:02.918831 1 shared_informer.go:318] Caches are synced for service config
E0119 12:09:02.926183 1 proxier.go:836] "Failed to ensure chain jumps" err=<
	error checking rule: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory
	Try `iptables -h' or 'iptables --help' for more information.
 > table="filter" srcChain="INPUT" dstChain="KUBE-EXTERNAL-SERVICES"
I0119 12:09:02.926212 1 proxier.go:801] "Sync failed" retryingTime="30s"
E0119 12:09:32.830070 1 iptables.go:575] "Could not set up iptables canary" err=<
	error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.7 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
	Perhaps iptables or your kernel needs to be upgraded.
 > table="mangle" chain="KUBE-PROXY-CANARY"
E0119 12:09:32.933836 1 proxier.go:836] "Failed to ensure chain jumps" err=<
	error checking rule: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory
	Try `iptables -h' or 'iptables --help' for more information.
 > table="filter" srcChain="INPUT" dstChain="KUBE-EXTERNAL-SERVICES"
I0119 12:09:32.933859 1 proxier.go:801] "Sync failed" retryingTime="30s"
E0119 12:10:02.830216 1 iptables.go:575] "Could not set up iptables canary" err=<
	error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.7 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
	Perhaps iptables or your kernel needs to be upgraded.
 > table="mangle" chain="KUBE-PROXY-CANARY"
E0119 12:10:02.939770 1 proxier.go:836] "Failed to ensure chain jumps" err=<
	error checking rule: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory
	Try `iptables -h' or 'iptables --help' for more information.
 > table="filter" srcChain="INPUT" dstChain="KUBE-EXTERNAL-SERVICES"
I0119 12:10:02.939793 1 proxier.go:801] "Sync failed" retryingTime="30s"
E0119 12:10:32.828411 1 iptables.go:575] "Could not set up iptables canary" err=<
	error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.7 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
	Perhaps iptables or your kernel needs to be upgraded.
 > table="mangle" chain="KUBE-PROXY-CANARY"
E0119 12:10:32.947794 1 proxier.go:836] "Failed to ensure chain jumps" err=<
	error checking rule: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory
	Try `iptables -h' or 'iptables --help' for more information.
 > table="filter" srcChain="INPUT" dstChain="KUBE-EXTERNAL-SERVICES"
I0119 12:10:32.947816 1 proxier.go:801] "Sync failed" retryingTime="30s"
E0119 12:11:02.826276 1 iptables.go:575] "Could not set up iptables canary" err=<
	error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.7 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
	Perhaps iptables or your kernel needs to be upgraded.
 > table="mangle" chain="KUBE-PROXY-CANARY"
E0119 12:11:02.955552 1 proxier.go:836] "Failed to ensure chain jumps" err=<
	error checking rule: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory
	Try `iptables -h' or 'iptables --help' for more information.
 > table="filter" srcChain="INPUT" dstChain="KUBE-EXTERNAL-SERVICES"
I0119 12:11:02.955582 1 proxier.go:801] "Sync failed" retryingTime="30s"
E0119 12:11:32.824962 1 iptables.go:575] "Could not set up iptables canary" err=<
	error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.7 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
	Perhaps iptables or your kernel needs to be upgraded.
 > table="mangle" chain="KUBE-PROXY-CANARY"
E0119 12:11:32.959797 1 proxier.go:836] "Failed to ensure chain jumps" err=<
	error checking rule: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory
	Try `iptables -h' or 'iptables --help' for more information.
 > table="filter" srcChain="INPUT" dstChain="KUBE-EXTERNAL-SERVICES"
I0119 12:11:32.959814 1 proxier.go:801] "Sync failed" retryingTime="30s"

* 
* ==> kube-scheduler [67fe33a864bb] <==
* 
I0119 12:08:45.263563 1 serving.go:348] Generated self-signed cert in-memory
W0119 12:08:46.416501 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0119 12:08:46.416523 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0119 12:08:46.416532 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0119 12:08:46.416539 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0119 12:08:46.439305 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3" I0119 12:08:46.439319 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0119 12:08:46.439993 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0119 12:08:46.440205 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0119 12:08:46.440333 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259 I0119 12:08:46.440356 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" W0119 12:08:46.441504 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0119 12:08:46.441528 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0119 12:08:46.441530 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope W0119 12:08:46.441536 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0119 12:08:46.441532 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0119 12:08:46.441537 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0119 12:08:46.441609 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0119 12:08:46.441669 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0119 12:08:46.441677 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0119 12:08:46.441679 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster 
scope E0119 12:08:46.441684 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0119 12:08:46.441549 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0119 12:08:46.441548 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0119 12:08:46.441547 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0119 12:08:46.441713 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0119 12:08:46.441548 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0119 12:08:46.441733 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0119 12:08:46.441724 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0119 12:08:46.441759 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0119 12:08:46.441791 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0119 12:08:46.441806 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0119 12:08:46.441813 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" 
in API group "" at the cluster scope W0119 12:08:46.441819 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0119 12:08:46.441737 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0119 12:08:46.442067 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0119 12:08:46.441820 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0119 12:08:46.442079 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0119 12:08:46.442088 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0119 12:08:46.442110 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0119 12:08:46.442117 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0119 12:08:47.528265 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0119 12:08:47.528303 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0119 12:08:47.721429 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0119 12:08:47.721465 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" I0119 12:08:50.540621 1 shared_informer.go:318] Caches are synced for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * Jan 19 12:11:45 minikube kubelet[2481]: E0119 12:11:45.662119 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-wjn26_kube-system(e4fdc5a9-bfba-495e-b29c-1cc6d4eb8a83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-wjn26_kube-system(e4fdc5a9-bfba-495e-b29c-1cc6d4eb8a83)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"e12e61ea38bb017550f61342460cca5eedeeef8ab20b60cd8e1aea0fafd64a3f\\\" network for pod \\\"coredns-5dd5756b68-wjn26\\\": networkPlugin cni failed to set up pod \\\"coredns-5dd5756b68-wjn26_kube-system\\\" network: plugin type=\\\"bridge\\\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-2a3bf98dcfeb05257a937caa -d 10.244.0.159/16 -j ACCEPT -m comment --comment name: \\\"bridge\\\" id: \\\"e12e61ea38bb017550f61342460cca5eedeeef8ab20b60cd8e1aea0fafd64a3f\\\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n, failed to clean up sandbox container \\\"e12e61ea38bb017550f61342460cca5eedeeef8ab20b60cd8e1aea0fafd64a3f\\\" network for pod \\\"coredns-5dd5756b68-wjn26\\\": networkPlugin cni failed to teardown pod \\\"coredns-5dd5756b68-wjn26_kube-system\\\" network: plugin type=\\\"bridge\\\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.159 -j CNI-2a3bf98dcfeb05257a937caa -m comment --comment name: \\\"bridge\\\" id: \\\"e12e61ea38bb017550f61342460cca5eedeeef8ab20b60cd8e1aea0fafd64a3f\\\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-5dd5756b68-wjn26" podUID="e4fdc5a9-bfba-495e-b29c-1cc6d4eb8a83" Jan 19 12:11:46 minikube kubelet[2481]: I0119 12:11:46.403259 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e12e61ea38bb017550f61342460cca5eedeeef8ab20b60cd8e1aea0fafd64a3f" Jan 19 12:11:46 minikube kubelet[2481]: E0119 12:11:46.713178 2481 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err=< Jan 19 12:11:46 minikube kubelet[2481]: rpc error: code = Unknown desc = [failed to set up sandbox container "fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e" network for pod "coredns-5dd5756b68-wjn26": networkPlugin cni failed to set up pod "coredns-5dd5756b68-wjn26_kube-system" network: plugin type="bridge" failed (add): running [/usr/sbin/iptables -t nat -C CNI-8df4d110fcddc2155fbcd834 -d 10.244.0.160/16 -j ACCEPT -m comment --comment name: "bridge" id: "fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory Jan 19 12:11:46 minikube kubelet[2481]: Jan 19 12:11:46 minikube kubelet[2481]: Try `iptables -h' or 'iptables --help' for more information. 
Jan 19 12:11:46 minikube kubelet[2481]: , failed to clean up sandbox container "fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e" network for pod "coredns-5dd5756b68-wjn26": networkPlugin cni failed to teardown pod "coredns-5dd5756b68-wjn26_kube-system" network: plugin type="bridge" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.160 -j CNI-8df4d110fcddc2155fbcd834 -m comment --comment name: "bridge" id: "fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory Jan 19 12:11:46 minikube kubelet[2481]: Jan 19 12:11:46 minikube kubelet[2481]: Try `iptables -h' or 'iptables --help' for more information. Jan 19 12:11:46 minikube kubelet[2481]: ] Jan 19 12:11:46 minikube kubelet[2481]: > Jan 19 12:11:46 minikube kubelet[2481]: E0119 12:11:46.713243 2481 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 19 12:11:46 minikube kubelet[2481]: rpc error: code = Unknown desc = [failed to set up sandbox container "fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e" network for pod "coredns-5dd5756b68-wjn26": networkPlugin cni failed to set up pod "coredns-5dd5756b68-wjn26_kube-system" network: plugin type="bridge" failed (add): running [/usr/sbin/iptables -t nat -C CNI-8df4d110fcddc2155fbcd834 -d 10.244.0.160/16 -j ACCEPT -m comment --comment name: "bridge" id: "fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory Jan 19 12:11:46 minikube kubelet[2481]: Jan 19 12:11:46 minikube kubelet[2481]: Try `iptables -h' or 'iptables --help' for more information. Jan 19 12:11:46 minikube kubelet[2481]: , failed to clean up sandbox container "fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e" network for pod "coredns-5dd5756b68-wjn26": networkPlugin cni failed to teardown pod "coredns-5dd5756b68-wjn26_kube-system" network: plugin type="bridge" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.160 -j CNI-8df4d110fcddc2155fbcd834 -m comment --comment name: "bridge" id: "fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory Jan 19 12:11:46 minikube kubelet[2481]: Jan 19 12:11:46 minikube kubelet[2481]: Try `iptables -h' or 'iptables --help' for more information. 
Jan 19 12:11:46 minikube kubelet[2481]: ] Jan 19 12:11:46 minikube kubelet[2481]: > pod="kube-system/coredns-5dd5756b68-wjn26" Jan 19 12:11:46 minikube kubelet[2481]: E0119 12:11:46.713273 2481 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err=< Jan 19 12:11:46 minikube kubelet[2481]: rpc error: code = Unknown desc = [failed to set up sandbox container "fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e" network for pod "coredns-5dd5756b68-wjn26": networkPlugin cni failed to set up pod "coredns-5dd5756b68-wjn26_kube-system" network: plugin type="bridge" failed (add): running [/usr/sbin/iptables -t nat -C CNI-8df4d110fcddc2155fbcd834 -d 10.244.0.160/16 -j ACCEPT -m comment --comment name: "bridge" id: "fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory Jan 19 12:11:46 minikube kubelet[2481]: Jan 19 12:11:46 minikube kubelet[2481]: Try `iptables -h' or 'iptables --help' for more information. Jan 19 12:11:46 minikube kubelet[2481]: , failed to clean up sandbox container "fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e" network for pod "coredns-5dd5756b68-wjn26": networkPlugin cni failed to teardown pod "coredns-5dd5756b68-wjn26_kube-system" network: plugin type="bridge" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.160 -j CNI-8df4d110fcddc2155fbcd834 -m comment --comment name: "bridge" id: "fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory Jan 19 12:11:46 minikube kubelet[2481]: Jan 19 12:11:46 minikube kubelet[2481]: Try `iptables -h' or 'iptables --help' for more information. 
Jan 19 12:11:46 minikube kubelet[2481]: ]
Jan 19 12:11:46 minikube kubelet[2481]: > pod="kube-system/coredns-5dd5756b68-wjn26"
Jan 19 12:11:46 minikube kubelet[2481]: E0119 12:11:46.713402 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-wjn26_kube-system(e4fdc5a9-bfba-495e-b29c-1cc6d4eb8a83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-wjn26_kube-system(e4fdc5a9-bfba-495e-b29c-1cc6d4eb8a83)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e\\\" network for pod \\\"coredns-5dd5756b68-wjn26\\\": networkPlugin cni failed to set up pod \\\"coredns-5dd5756b68-wjn26_kube-system\\\" network: plugin type=\\\"bridge\\\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-8df4d110fcddc2155fbcd834 -d 10.244.0.160/16 -j ACCEPT -m comment --comment name: \\\"bridge\\\" id: \\\"fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e\\\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n, failed to clean up sandbox container \\\"fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e\\\" network for pod \\\"coredns-5dd5756b68-wjn26\\\": networkPlugin cni failed to teardown pod \\\"coredns-5dd5756b68-wjn26_kube-system\\\" network: plugin type=\\\"bridge\\\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.160 -j CNI-8df4d110fcddc2155fbcd834 -m comment --comment name: \\\"bridge\\\" id: \\\"fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e\\\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-5dd5756b68-wjn26" podUID="e4fdc5a9-bfba-495e-b29c-1cc6d4eb8a83"
Jan 19 12:11:47 minikube kubelet[2481]: I0119 12:11:47.464190 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa2bbccd136d38217cce621cf37c612a6df82d4abd4b7d4d3377be7db79a6b7e"
Jan 19 12:11:47 minikube kubelet[2481]: E0119 12:11:47.782626 2481 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err=<
Jan 19 12:11:47 minikube kubelet[2481]: rpc error: code = Unknown desc = [failed to set up sandbox container "c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9" network for pod "coredns-5dd5756b68-wjn26": networkPlugin cni failed to set up pod "coredns-5dd5756b68-wjn26_kube-system" network: plugin type="bridge" failed (add): running [/usr/sbin/iptables -t nat -C CNI-16635ebd9950c3cdb4c14dca -d 10.244.0.161/16 -j ACCEPT -m comment --comment name: "bridge" id: "c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory
Jan 19 12:11:47 minikube kubelet[2481]: 
Jan 19 12:11:47 minikube kubelet[2481]: Try `iptables -h' or 'iptables --help' for more information.
Jan 19 12:11:47 minikube kubelet[2481]: , failed to clean up sandbox container "c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9" network for pod "coredns-5dd5756b68-wjn26": networkPlugin cni failed to teardown pod "coredns-5dd5756b68-wjn26_kube-system" network: plugin type="bridge" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.161 -j CNI-16635ebd9950c3cdb4c14dca -m comment --comment name: "bridge" id: "c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory
Jan 19 12:11:47 minikube kubelet[2481]: 
Jan 19 12:11:47 minikube kubelet[2481]: Try `iptables -h' or 'iptables --help' for more information.
Jan 19 12:11:47 minikube kubelet[2481]: ]
Jan 19 12:11:47 minikube kubelet[2481]: >
Jan 19 12:11:47 minikube kubelet[2481]: E0119 12:11:47.782704 2481 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Jan 19 12:11:47 minikube kubelet[2481]: rpc error: code = Unknown desc = [failed to set up sandbox container "c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9" network for pod "coredns-5dd5756b68-wjn26": networkPlugin cni failed to set up pod "coredns-5dd5756b68-wjn26_kube-system" network: plugin type="bridge" failed (add): running [/usr/sbin/iptables -t nat -C CNI-16635ebd9950c3cdb4c14dca -d 10.244.0.161/16 -j ACCEPT -m comment --comment name: "bridge" id: "c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory
Jan 19 12:11:47 minikube kubelet[2481]: 
Jan 19 12:11:47 minikube kubelet[2481]: Try `iptables -h' or 'iptables --help' for more information.
Jan 19 12:11:47 minikube kubelet[2481]: , failed to clean up sandbox container "c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9" network for pod "coredns-5dd5756b68-wjn26": networkPlugin cni failed to teardown pod "coredns-5dd5756b68-wjn26_kube-system" network: plugin type="bridge" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.161 -j CNI-16635ebd9950c3cdb4c14dca -m comment --comment name: "bridge" id: "c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory
Jan 19 12:11:47 minikube kubelet[2481]: 
Jan 19 12:11:47 minikube kubelet[2481]: Try `iptables -h' or 'iptables --help' for more information.
Jan 19 12:11:47 minikube kubelet[2481]: ]
Jan 19 12:11:47 minikube kubelet[2481]: > pod="kube-system/coredns-5dd5756b68-wjn26"
Jan 19 12:11:47 minikube kubelet[2481]: E0119 12:11:47.782736 2481 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err=<
Jan 19 12:11:47 minikube kubelet[2481]: rpc error: code = Unknown desc = [failed to set up sandbox container "c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9" network for pod "coredns-5dd5756b68-wjn26": networkPlugin cni failed to set up pod "coredns-5dd5756b68-wjn26_kube-system" network: plugin type="bridge" failed (add): running [/usr/sbin/iptables -t nat -C CNI-16635ebd9950c3cdb4c14dca -d 10.244.0.161/16 -j ACCEPT -m comment --comment name: "bridge" id: "c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory
Jan 19 12:11:47 minikube kubelet[2481]: 
Jan 19 12:11:47 minikube kubelet[2481]: Try `iptables -h' or 'iptables --help' for more information.
Jan 19 12:11:47 minikube kubelet[2481]: , failed to clean up sandbox container "c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9" network for pod "coredns-5dd5756b68-wjn26": networkPlugin cni failed to teardown pod "coredns-5dd5756b68-wjn26_kube-system" network: plugin type="bridge" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.161 -j CNI-16635ebd9950c3cdb4c14dca -m comment --comment name: "bridge" id: "c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory
Jan 19 12:11:47 minikube kubelet[2481]: 
Jan 19 12:11:47 minikube kubelet[2481]: Try `iptables -h' or 'iptables --help' for more information.
Jan 19 12:11:47 minikube kubelet[2481]: ]
Jan 19 12:11:47 minikube kubelet[2481]: > pod="kube-system/coredns-5dd5756b68-wjn26"
Jan 19 12:11:47 minikube kubelet[2481]: E0119 12:11:47.782897 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-wjn26_kube-system(e4fdc5a9-bfba-495e-b29c-1cc6d4eb8a83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-wjn26_kube-system(e4fdc5a9-bfba-495e-b29c-1cc6d4eb8a83)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9\\\" network for pod \\\"coredns-5dd5756b68-wjn26\\\": networkPlugin cni failed to set up pod \\\"coredns-5dd5756b68-wjn26_kube-system\\\" network: plugin type=\\\"bridge\\\" failed (add): running [/usr/sbin/iptables -t nat -C CNI-16635ebd9950c3cdb4c14dca -d 10.244.0.161/16 -j ACCEPT -m comment --comment name: \\\"bridge\\\" id: \\\"c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9\\\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n, failed to clean up sandbox container \\\"c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9\\\" network for pod \\\"coredns-5dd5756b68-wjn26\\\": networkPlugin cni failed to teardown pod \\\"coredns-5dd5756b68-wjn26_kube-system\\\" network: plugin type=\\\"bridge\\\" failed (delete): running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.244.0.161 -j CNI-16635ebd9950c3cdb4c14dca -m comment --comment name: \\\"bridge\\\" id: \\\"c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9\\\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load match `comment':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-5dd5756b68-wjn26" podUID="e4fdc5a9-bfba-495e-b29c-1cc6d4eb8a83"
Jan 19 12:11:48 minikube kubelet[2481]: I0119 12:11:48.499254 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4869bfe857acb0f6a68ea25a29c0b6ffb65fdb6ba76af5e3a6bb3c808b6c3c9"
* 
* ==> storage-provisioner [99319ebbfa78] <==
* 
I0119 12:11:08.707107 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0119 12:11:38.708974 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
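
Note on the repeated kubelet error above: every CoreDNS sandbox create and teardown attempt fails inside the CNI bridge plugin with iptables v1.8.7 (legacy): Couldn't load match `comment': No such file or directory, and the storage-provisioner's i/o timeout reaching 10.96.0.1:443 is likely a downstream symptom of the same broken pod networking. This error means iptables could not load its comment match extension (the libxt_comment userspace library or the xt_comment kernel module). With the docker driver the minikube node shares the host kernel, and on rolling-release hosts this failure is commonly reported after a kernel upgrade leaves the still-running kernel without its module tree on disk, in which case a host reboot is usually the actual fix. A minimal diagnostic sketch, assuming a Linux host with lsmod/modprobe available (xt_comment is the standard kernel module name for the comment match; minikube ssh runs the same check inside the node):

    lsmod | grep xt_comment                          # is the comment match module loaded on the host?
    sudo modprobe xt_comment                         # fails if the running kernel's modules are gone from disk
    minikube ssh -- sudo iptables -m comment --help  # can the node's iptables load the match at all?

If modprobe cannot find the module for the running kernel, reboot into the upgraded kernel and then recreate the cluster with minikube delete followed by minikube start.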