*
* ==> Audit <==
*
|---------|-------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command |                Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
|---------|-------------------------------------|----------|---------|---------|---------------------|---------------------|
| start   | --driver=ssh                        | minikube | vagrant | v1.28.0 | 28 Nov 22 23:34 CST |                     |
|         | --ssh-ip-address=10.239.241.111     |          |         |         |                     |                     |
|         | --ssh-user=ssp -v=4                 |          |         |         |                     |                     |
|         | --alsologtostderr                   |          |         |         |                     |                     |
|         | --ssh-key=/home/fhl/.ssh/id_rsa     |          |         |         |                     |                     |
| delete  |                                     | minikube | vagrant | v1.28.0 | 28 Nov 22 23:35 CST | 28 Nov 22 23:35 CST |
| start   | --driver=ssh                        | minikube | vagrant | v1.28.0 | 28 Nov 22 23:36 CST |                     |
|         | --ssh-ip-address=10.239.241.111     |          |         |         |                     |                     |
|         | --ssh-user=ssp -v=4                 |          |         |         |                     |                     |
|         | --alsologtostderr                   |          |         |         |                     |                     |
|         | --ssh-key=/home/vagrant/.ssh/id_rsa |          |         |         |                     |                     |
| delete  |                                     | minikube | vagrant | v1.28.0 | 28 Nov 22 23:47 CST | 28 Nov 22 23:47 CST |
| start   | --driver=ssh                        | minikube | vagrant | v1.28.0 | 29 Nov 22 00:33 CST |                     |
|         | --ssh-ip-address=10.239.241.111     |          |         |         |                     |                     |
|         | --ssh-user=ssp -v=4                 |          |         |         |                     |                     |
|         | --alsologtostderr                   |          |         |         |                     |                     |
|         | --ssh-key=/home/vagrant/.ssh/id_rsa |          |         |         |                     |                     |
|---------|-------------------------------------|----------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
*
Log file created at: 2022/11/29 00:33:23
Running on machine: ceph-server4
Binary: Built with gc go1.19.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1129 00:33:23.327973 3924022 out.go:296] Setting OutFile to fd 1 ...
I1129 00:33:23.328059 3924022 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
I1129 00:33:23.328065 3924022 out.go:309] Setting ErrFile to fd 2...
I1129 00:33:23.328072 3924022 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
I1129 00:33:23.328168 3924022 root.go:334] Updating PATH: /home/vagrant/.minikube/bin
W1129 00:33:23.328274 3924022 root.go:311] Error reading config file at /home/vagrant/.minikube/config/config.json: open /home/vagrant/.minikube/config/config.json: no such file or directory
I1129 00:33:23.328453 3924022 out.go:303] Setting JSON to false
I1129 00:33:23.364380 3924022 start.go:116] hostinfo: {"hostname":"ceph-server4","uptime":3683642,"bootTime":1665969561,"procs":1168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-128-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"3b7a7eb7-a39d-439a-b2af-082b4fde43e8"}
I1129 00:33:23.364429 3924022 start.go:126] virtualization: kvm host
I1129 00:33:23.365450 3924022 out.go:177] * minikube v1.28.0 on Ubuntu 20.04
W1129 00:33:23.365865 3924022 preload.go:295] Failed to list preload files: open /home/vagrant/.minikube/cache/preloaded-tarball: no such file or directory
I1129 00:33:23.365974 3924022 notify.go:220] Checking for updates...
I1129 00:33:23.366671 3924022 config.go:180] Loaded profile config "minikube": Driver=ssh, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1129 00:33:23.366905 3924022 ssh_runner.go:195] Run: systemctl --version
I1129 00:33:23.366965 3924022 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: IP address is not set
I1129 00:33:23.643348 3924022 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: IP address is not set
I1129 00:33:24.184521 3924022 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: IP address is not set
I1129 00:33:24.840802 3924022 driver.go:365] Setting default libvirt URI to qemu:///system
I1129 00:33:24.841838 3924022 out.go:177] * Using the ssh driver based on existing profile
I1129 00:33:24.842123 3924022 start.go:282] selected driver: ssh
I1129 00:33:24.842155 3924022 start.go:808] validating driver "ssh" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:ssh HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress:10.239.241.111 SSHUser:ssp SSHKey:/home/vagrant/.ssh/id_rsa SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.239.241.111 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/vagrant:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1129 00:33:24.842338 3924022 start.go:819] status for ssh: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I1129 00:33:24.891832 3924022 cni.go:95] Creating CNI manager for ""
I1129 00:33:24.891853 3924022 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1129 00:33:24.891868 3924022 start_flags.go:317] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:ssh HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress:10.239.241.111 SSHUser:ssp SSHKey:/home/vagrant/.ssh/id_rsa SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.239.241.111 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/vagrant:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1129 00:33:24.892697 3924022 out.go:177] * Starting control plane node minikube in cluster minikube
I1129 00:33:24.893098 3924022 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1129 00:33:24.893368 3924022 profile.go:148] Saving config to /home/vagrant/.minikube/profiles/minikube/config.json ...
I1129 00:33:24.893470 3924022 cache.go:107] acquiring lock: {Name:mkffc213261da21dd9fa76e1ef495146e6acb5b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1129 00:33:24.893473 3924022 cache.go:107] acquiring lock: {Name:mk35324b651516ffae470a7c444348d3b1f783e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1129 00:33:24.893723 3924022 cache.go:107] acquiring lock: {Name:mk7623e60b70ba3eac035432a155387b7554472b Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1129 00:33:24.893744 3924022 cache.go:208] Successfully downloaded all kic artifacts
I1129 00:33:24.893753 3924022 cache.go:107] acquiring lock: {Name:mke73797afe8b208d22657e17f023cf171c9319b Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1129 00:33:24.893759 3924022 cache.go:107] acquiring lock: {Name:mkd995347cbf7712d434a1f3301a092c3fcf4f39 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1129 00:33:24.893808 3924022 start.go:364] acquiring machines lock for minikube: {Name:mk71337955f1f39b748d922b43cec6964cee16f5 Clock:{} Delay:500ms Timeout:13m0s Cancel:}
I1129 00:33:24.893861 3924022 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1129 00:33:24.893906 3924022 cache.go:107] acquiring lock: {Name:mkcfc886b923734073859d09412d7514e527891c Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1129 00:33:24.893910 3924022 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/vagrant/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 452.66µs
I1129 00:33:24.893948 3924022 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 exists
I1129 00:33:24.893964 3924022 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/vagrant/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1129 00:33:24.893943 3924022 start.go:368] acquired machines lock for "minikube" in 88.479µs
I1129 00:33:24.894007 3924022 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.3" -> "/home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3" took 367.871µs
I1129 00:33:24.894021 3924022 cache.go:107] acquiring lock: {Name:mkfe32f24fbba2c064d840b041b7f75bb866a694 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1129 00:33:24.894087 3924022 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.3 -> /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 succeeded
I1129 00:33:24.893816 3924022 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 exists
I1129 00:33:24.894121 3924022 cache.go:107] acquiring lock: {Name:mk16f8ddd929ef51db2eb80793379679e1f467ff Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1129 00:33:24.894181 3924022 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.3" -> "/home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3" took 728.15µs
I1129 00:33:24.894210 3924022 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.3 -> /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 succeeded
I1129 00:33:24.894207 3924022 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 exists
I1129 00:33:24.894237 3924022 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
I1129 00:33:24.894252 3924022 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.3" -> "/home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3" took 447.747µs
I1129 00:33:24.893941 3924022 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 exists
I1129 00:33:24.894057 3924022 start.go:96] Skipping create...Using existing machine configuration
I1129 00:33:24.894298 3924022 fix.go:55] fixHost starting:
I1129 00:33:24.894297 3924022 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 exists
I1129 00:33:24.894312 3924022 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "/home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0" took 555.328µs
I1129 00:33:24.894346 3924022 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 succeeded
I1129 00:33:24.894263 3924022 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.3 -> /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 succeeded
I1129 00:33:24.894329 3924022 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.3" -> "/home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3" took 302.449µs
I1129 00:33:24.894370 3924022 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.3 -> /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 succeeded
I1129 00:33:24.893953 3924022 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 exists
I1129 00:33:24.894258 3924022 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 389.988µs
I1129 00:33:24.894404 3924022 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
I1129 00:33:24.894401 3924022 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "/home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8" took 792.797µs
I1129 00:33:24.894424 3924022 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> /home/vagrant/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 succeeded
I1129 00:33:24.894443 3924022 cache.go:87] Successfully saved all images to host disk.
I1129 00:33:24.894770 3924022 fix.go:103] recreateIfNeeded on minikube: state=Running err=
W1129 00:33:24.894793 3924022 fix.go:129] unexpected machine state, will restart:
I1129 00:33:24.895487 3924022 out.go:177] * Updating the running ssh "minikube" bare metal machine ...
I1129 00:33:24.895717 3924022 machine.go:88] provisioning docker machine ...
I1129 00:33:24.895776 3924022 main.go:134] libmachine: Waiting for SSH to be available...
I1129 00:33:24.895791 3924022 main.go:134] libmachine: Getting to WaitForSSH function...
I1129 00:33:24.895822 3924022 main.go:134] libmachine: Using SSH client type: native
I1129 00:33:24.895961 3924022 main.go:134] libmachine: &{{{ 0 [] [] []} ssp [0x7ed4e0] 0x7f0660 [] 0s} 10.239.241.111 22 }
I1129 00:33:24.895973 3924022 main.go:134] libmachine: About to run SSH command: exit 0
I1129 00:33:25.575438 3924022 main.go:134] libmachine: SSH cmd err, output: :
I1129 00:33:25.575454 3924022 main.go:134] libmachine: Detecting the provisioner...
I1129 00:33:25.575508 3924022 main.go:134] libmachine: Using SSH client type: native
I1129 00:33:25.575626 3924022 main.go:134] libmachine: &{{{ 0 [] [] []} ssp [0x7ed4e0] 0x7f0660 [] 0s} 10.239.241.111 22 }
I1129 00:33:25.575638 3924022 main.go:134] libmachine: About to run SSH command: cat /etc/os-release
I1129 00:33:25.876792 3924022 main.go:134] libmachine: SSH cmd err, output: :
PRETTY_NAME="Ubuntu 22.04 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04 (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
I1129 00:33:25.876844 3924022 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1129 00:33:25.876864 3924022 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1129 00:33:25.876877 3924022 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1129 00:33:25.876920 3924022 main.go:134] libmachine: found compatible host: ubuntu
I1129 00:33:25.876936 3924022 ubuntu.go:169] provisioning hostname "minikube"
I1129 00:33:25.876965 3924022 main.go:134] libmachine: Using SSH client type: native
I1129 00:33:25.877078 3924022 main.go:134] libmachine: &{{{ 0 [] [] []} ssp [0x7ed4e0] 0x7f0660 [] 0s} 10.239.241.111 22 }
I1129 00:33:25.877091 3924022 main.go:134] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1129 00:33:26.223812 3924022 main.go:134] libmachine: SSH cmd err, output: : minikube
I1129 00:33:26.223871 3924022 main.go:134] libmachine: Using SSH client type: native
I1129 00:33:26.224007 3924022 main.go:134] libmachine: &{{{ 0 [] [] []} ssp [0x7ed4e0] 0x7f0660 [] 0s} 10.239.241.111 22 }
I1129 00:33:26.224024 3924022 main.go:134] libmachine: About to run SSH command:
		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I1129 00:33:26.538906 3924022 main.go:134] libmachine: SSH cmd err, output: :
I1129 00:33:26.538922 3924022 ubuntu.go:175] set auth options {CertDir:/home/vagrant/.minikube CaCertPath:/home/vagrant/.minikube/certs/ca.pem CaPrivateKeyPath:/home/vagrant/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/vagrant/.minikube/machines/server.pem ServerKeyPath:/home/vagrant/.minikube/machines/server-key.pem ClientKeyPath:/home/vagrant/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/vagrant/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/vagrant/.minikube}
I1129 00:33:26.538941 3924022 ubuntu.go:177] setting up certificates
I1129 00:33:26.538947 3924022 provision.go:83] configureAuth start
I1129 00:33:26.538956 3924022 provision.go:138] copyHostCerts
I1129 00:33:26.538990 3924022 vm_assets.go:163] NewFileAsset: /home/vagrant/.minikube/certs/key.pem -> /home/vagrant/.minikube/key.pem
I1129 00:33:26.539008 3924022 exec_runner.go:144] found /home/vagrant/.minikube/key.pem, removing ...
I1129 00:33:26.539016 3924022 exec_runner.go:207] rm: /home/vagrant/.minikube/key.pem
I1129 00:33:26.539072 3924022 exec_runner.go:151] cp: /home/vagrant/.minikube/certs/key.pem --> /home/vagrant/.minikube/key.pem (1675 bytes)
I1129 00:33:26.539137 3924022 vm_assets.go:163] NewFileAsset: /home/vagrant/.minikube/certs/ca.pem -> /home/vagrant/.minikube/ca.pem
I1129 00:33:26.539154 3924022 exec_runner.go:144] found /home/vagrant/.minikube/ca.pem, removing ...
I1129 00:33:26.539161 3924022 exec_runner.go:207] rm: /home/vagrant/.minikube/ca.pem
I1129 00:33:26.539185 3924022 exec_runner.go:151] cp: /home/vagrant/.minikube/certs/ca.pem --> /home/vagrant/.minikube/ca.pem (1078 bytes)
I1129 00:33:26.539237 3924022 vm_assets.go:163] NewFileAsset: /home/vagrant/.minikube/certs/cert.pem -> /home/vagrant/.minikube/cert.pem
I1129 00:33:26.539264 3924022 exec_runner.go:144] found /home/vagrant/.minikube/cert.pem, removing ...
I1129 00:33:26.539272 3924022 exec_runner.go:207] rm: /home/vagrant/.minikube/cert.pem
I1129 00:33:26.539291 3924022 exec_runner.go:151] cp: /home/vagrant/.minikube/certs/cert.pem --> /home/vagrant/.minikube/cert.pem (1123 bytes)
I1129 00:33:26.539342 3924022 provision.go:112] generating server cert: /home/vagrant/.minikube/machines/server.pem ca-key=/home/vagrant/.minikube/certs/ca.pem private-key=/home/vagrant/.minikube/certs/ca-key.pem org=vagrant.minikube san=[10.239.241.111 10.239.241.111 localhost 127.0.0.1 minikube minikube]
I1129 00:33:26.665801 3924022 provision.go:172] copyRemoteCerts
I1129 00:33:26.665863 3924022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1129 00:33:26.665871 3924022 sshutil.go:53] new ssh client: &{IP:10.239.241.111 Port:22 SSHKeyPath:/home/vagrant/.minikube/machines/minikube/id_rsa Username:ssp}
I1129 00:33:26.935456 3924022 vm_assets.go:163] NewFileAsset: /home/vagrant/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1129 00:33:26.935541 3924022 ssh_runner.go:362] scp /home/vagrant/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1129 00:33:26.999251 3924022 vm_assets.go:163] NewFileAsset: /home/vagrant/.minikube/machines/server.pem -> /etc/docker/server.pem
I1129 00:33:26.999285 3924022 ssh_runner.go:362] scp /home/vagrant/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
I1129 00:33:27.066540 3924022 vm_assets.go:163] NewFileAsset: /home/vagrant/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1129 00:33:27.066627 3924022 ssh_runner.go:362] scp /home/vagrant/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1129 00:33:27.134367 3924022 provision.go:86] duration metric: configureAuth took 595.40714ms
I1129 00:33:27.134413 3924022 ubuntu.go:193] setting minikube options for container-runtime
I1129 00:33:27.134692 3924022 config.go:180] Loaded profile config "minikube": Driver=ssh, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1129 00:33:27.134763 3924022 main.go:134] libmachine: Using SSH client type: native
I1129 00:33:27.134861 3924022 main.go:134] libmachine: &{{{ 0 [] [] []} ssp [0x7ed4e0] 0x7f0660 [] 0s} 10.239.241.111 22 }
I1129 00:33:27.134872 3924022 main.go:134] libmachine: About to run SSH command: df --output=fstype / | tail -n 1
I1129 00:33:27.438466 3924022 main.go:134] libmachine: SSH cmd err, output: : ext4
I1129 00:33:27.438491 3924022 ubuntu.go:71] root file system type: ext4
I1129 00:33:27.438810 3924022 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1129 00:33:27.438939 3924022 main.go:134] libmachine: Using SSH client type: native
I1129 00:33:27.439278 3924022 main.go:134] libmachine: &{{{ 0 [] [] []} ssp [0x7ed4e0] 0x7f0660 [] 0s} 10.239.241.111 22 }
I1129 00:33:27.439473 3924022 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure
Environment="HTTP_PROXY=http://child-prc.intel.com:913"
Environment="HTTPS_PROXY=http://child-prc.intel.com:913"
Environment="NO_PROXY=intel.com,.intel.com,localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,172.16.0.0/12"

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=ssh --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1129 00:33:27.783491 3924022 main.go:134] libmachine: SSH cmd err, output: :
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure
Environment=HTTP_PROXY=http://child-prc.intel.com:913
Environment=HTTPS_PROXY=http://child-prc.intel.com:913
Environment=NO_PROXY=intel.com,.intel.com,localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,172.16.0.0/12

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=ssh --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I1129 00:33:27.783688 3924022 main.go:134] libmachine: Using SSH client type: native
I1129 00:33:27.784075 3924022 main.go:134] libmachine: &{{{ 0 [] [] []} ssp [0x7ed4e0] 0x7f0660 [] 0s} 10.239.241.111 22 }
I1129 00:33:27.784132 3924022 main.go:134] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1129 00:33:30.332994 3924022 main.go:134] libmachine: SSH cmd err, output: :
--- /lib/systemd/system/docker.service	2022-10-25 17:59:49.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2022-11-28 16:33:26.069108816 +0000
@@ -1,30 +1,35 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+Environment=HTTP_PROXY=http://child-prc.intel.com:913
+Environment=HTTPS_PROXY=http://child-prc.intel.com:913
+Environment=NO_PROXY=intel.com,.intel.com,localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,172.16.0.0/12
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=ssh --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +37,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I1129 00:33:30.333067 3924022 machine.go:91] provisioned docker machine in 5.437336007s
I1129 00:33:30.333084 3924022 start.go:300] post-start starting for "minikube" (driver="ssh")
I1129 00:33:30.333095 3924022 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1129 00:33:30.333177 3924022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1129 00:33:30.333199 3924022 sshutil.go:53] new ssh client: &{IP:10.239.241.111 Port:22 SSHKeyPath:/home/vagrant/.minikube/machines/minikube/id_rsa Username:ssp}
I1129 00:33:30.619319 3924022 ssh_runner.go:195] Run: cat /etc/os-release
I1129 00:33:30.625027 3924022 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1129 00:33:30.625093 3924022 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1129 00:33:30.625126 3924022 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1129 00:33:30.625145 3924022 info.go:137] Remote host: Ubuntu 22.04 LTS
I1129 00:33:30.625166 3924022 filesync.go:126] Scanning /home/vagrant/.minikube/addons for local assets ...
I1129 00:33:30.625228 3924022 filesync.go:126] Scanning /home/vagrant/.minikube/files for local assets ...
I1129 00:33:30.625251 3924022 start.go:303] post-start completed in 292.157007ms
I1129 00:33:30.625262 3924022 fix.go:57] fixHost completed within 5.73096579s
I1129 00:33:30.625294 3924022 main.go:134] libmachine: Using SSH client type: native
I1129 00:33:30.625416 3924022 main.go:134] libmachine: &{{{ 0 [] [] []} ssp [0x7ed4e0] 0x7f0660 [] 0s} 10.239.241.111 22 }
I1129 00:33:30.625429 3924022 main.go:134] libmachine: About to run SSH command: date +%!s(MISSING).%!N(MISSING)
I1129 00:33:30.916801 3924022 main.go:134] libmachine: SSH cmd err, output: : 1669653209.209326704
I1129 00:33:30.916841 3924022 fix.go:207] guest clock: 1669653209.209326704
I1129 00:33:30.916864 3924022 fix.go:220] Guest: 2022-11-29 00:33:29.209326704 +0800 CST Remote: 2022-11-29 00:33:30.625269086 +0800 CST m=+7.346653580 (delta=-1.415942382s)
I1129 00:33:30.916914 3924022 fix.go:191] guest clock delta is within tolerance: -1.415942382s
I1129 00:33:30.916932 3924022 start.go:83] releasing machines lock for "minikube", held for 6.022886134s
I1129 00:33:30.917763 3924022 out.go:177] * Found network options:
I1129 00:33:30.918165 3924022 out.go:177]   - HTTP_PROXY=http://child-prc.intel.com:913
W1129 00:33:30.918407 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.918426 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.918443 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.918455 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
I1129 00:33:30.918745 3924022 out.go:177]   - HTTPS_PROXY=http://child-prc.intel.com:913
W1129 00:33:30.918987 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.919001 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.919060 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.919071 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
I1129 00:33:30.919362 3924022 out.go:177]   - NO_PROXY=intel.com,.intel.com,localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,172.16.0.0/12
W1129 00:33:30.919603 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.919617 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.919630 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.919642 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
I1129 00:33:30.919926 3924022 out.go:177]   - http_proxy=http://child-prc.intel.com:913
W1129 00:33:30.920166 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.920182 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.920198 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.920220 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
I1129 00:33:30.920505 3924022 out.go:177]   - https_proxy=http://child-prc.intel.com:913
W1129 00:33:30.920750 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.920762 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.920774 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
W1129 00:33:30.920785 3924022 proxy.go:119] fail to check proxy env: Error ip not in block
I1129 00:33:30.921131 3924022 out.go:177]   - no_proxy=intel.com,.intel.com,localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,172.16.0.0/12
W1129 00:33:30.921435 3924022 proxy.go:119] fail to check
proxy env: Error ip not in block W1129 00:33:30.921448 3924022 proxy.go:119] fail to check proxy env: Error ip not in block W1129 00:33:30.921465 3924022 proxy.go:119] fail to check proxy env: Error ip not in block W1129 00:33:30.921477 3924022 proxy.go:119] fail to check proxy env: Error ip not in block W1129 00:33:30.921879 3924022 proxy.go:119] fail to check proxy env: Error ip not in block W1129 00:33:30.921907 3924022 proxy.go:119] fail to check proxy env: Error ip not in block W1129 00:33:30.921916 3924022 proxy.go:119] fail to check proxy env: Error ip not in block W1129 00:33:30.921931 3924022 proxy.go:119] fail to check proxy env: Error ip not in block I1129 00:33:30.921952 3924022 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker I1129 00:33:30.921970 3924022 ssh_runner.go:195] Run: curl -x http://child-prc.intel.com:913 -sS -m 2 https://registry.k8s.io/ I1129 00:33:30.922000 3924022 ssh_runner.go:195] Run: sudo systemctl cat docker.service I1129 00:33:30.922004 3924022 sshutil.go:53] new ssh client: &{IP:10.239.241.111 Port:22 SSHKeyPath:/home/vagrant/.minikube/machines/minikube/id_rsa Username:ssp} I1129 00:33:30.922016 3924022 sshutil.go:53] new ssh client: &{IP:10.239.241.111 Port:22 SSHKeyPath:/home/vagrant/.minikube/machines/minikube/id_rsa Username:ssp} I1129 00:33:32.017647 3924022 ssh_runner.go:235] Completed: curl -x http://child-prc.intel.com:913 -sS -m 2 https://registry.k8s.io/: (1.095644858s) I1129 00:33:32.018177 3924022 ssh_runner.go:235] Completed: sudo systemctl cat docker.service: (1.096143223s) I1129 00:33:32.018239 3924022 cruntime.go:273] skipping containerd shutdown because we are bound to it I1129 00:33:32.018296 3924022 ssh_runner.go:195] Run: sudo service crio status I1129 00:33:32.061227 3924022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock image-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee 
/etc/crictl.yaml" I1129 00:33:32.108655 3924022 ssh_runner.go:195] Run: sudo service docker restart I1129 00:33:32.795089 3924022 openrc.go:158] restart output: I1129 00:33:32.795187 3924022 ssh_runner.go:195] Run: sudo service cri-docker.socket status I1129 00:33:32.836402 3924022 ssh_runner.go:195] Run: sudo service cri-docker.socket start I1129 00:33:33.653328 3924022 out.go:177] W1129 00:33:33.653948 3924022 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo service cri-docker.socket start: Process exited with status 5 stdout: stderr: Failed to start cri-docker.socket.service: Unit cri-docker.socket.service not found. W1129 00:33:33.654001 3924022 out.go:239] * W1129 00:33:33.655216 3924022 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮ │ │ │ * If the above advice does not help, please let us know: │ │ https://github.com/kubernetes/minikube/issues/new/choose │ │ │ │ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │ │ │ ╰─────────────────────────────────────────────────────────────────────────────────────────────╯ I1129 00:33:33.655823 3924022 out.go:177] * * ==> Docker <== * Nov 28 15:37:35 minikube dockerd[37723]: time="2022-11-28T15:37:35.218630669Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 28 15:37:35 minikube dockerd[37723]: time="2022-11-28T15:37:35.230440175Z" level=info msg="Loading containers: start." Nov 28 15:37:35 minikube dockerd[37723]: time="2022-11-28T15:37:35.524405357Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 28 15:37:35 minikube dockerd[37723]: time="2022-11-28T15:37:35.647872774Z" level=info msg="Loading containers: done." 
Nov 28 15:37:35 minikube dockerd[37723]: time="2022-11-28T15:37:35.670852012Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
Nov 28 15:37:35 minikube dockerd[37723]: time="2022-11-28T15:37:35.670915110Z" level=info msg="Daemon has completed initialization"
Nov 28 15:37:35 minikube systemd[1]: Started Docker Application Container Engine.
Nov 28 15:37:35 minikube dockerd[37723]: time="2022-11-28T15:37:35.707702533Z" level=info msg="API listen on [::]:2376"
Nov 28 15:37:35 minikube dockerd[37723]: time="2022-11-28T15:37:35.724602214Z" level=info msg="API listen on /var/run/docker.sock"
Nov 28 16:33:28 minikube systemd[1]: Stopping Docker Application Container Engine...
Nov 28 16:33:28 minikube dockerd[37723]: time="2022-11-28T16:33:28.014310273Z" level=info msg="Processing signal 'terminated'"
Nov 28 16:33:28 minikube dockerd[37723]: time="2022-11-28T16:33:28.015529759Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Nov 28 16:33:28 minikube dockerd[37723]: time="2022-11-28T16:33:28.016281353Z" level=info msg="Daemon shutdown complete"
Nov 28 16:33:28 minikube systemd[1]: docker.service: Deactivated successfully.
Nov 28 16:33:28 minikube systemd[1]: Stopped Docker Application Container Engine.
Nov 28 16:33:28 minikube systemd[1]: docker.service: Consumed 4.636s CPU time.
Nov 28 16:33:28 minikube systemd[1]: Starting Docker Application Container Engine...
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.089960248Z" level=info msg="Starting up"
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.091188038Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf"
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.092762256Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.092824462Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.092880569Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.092915403Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.095376198Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.095432081Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.095479083Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.095510585Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.144199847Z" level=info msg="Loading containers: start."
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.434071004Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.574051773Z" level=info msg="Loading containers: done."
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.594060429Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.594109229Z" level=info msg="Daemon has completed initialization"
Nov 28 16:33:28 minikube systemd[1]: Started Docker Application Container Engine.
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.632968537Z" level=info msg="API listen on [::]:2376"
Nov 28 16:33:28 minikube dockerd[40811]: time="2022-11-28T16:33:28.642366555Z" level=info msg="API listen on /var/run/docker.sock"
Nov 28 16:33:30 minikube systemd[1]: Stopping Docker Application Container Engine...
Nov 28 16:33:30 minikube dockerd[40811]: time="2022-11-28T16:33:30.434236932Z" level=info msg="Processing signal 'terminated'"
Nov 28 16:33:30 minikube dockerd[40811]: time="2022-11-28T16:33:30.435478418Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Nov 28 16:33:30 minikube dockerd[40811]: time="2022-11-28T16:33:30.436045015Z" level=info msg="Daemon shutdown complete"
Nov 28 16:33:30 minikube systemd[1]: docker.service: Deactivated successfully.
Nov 28 16:33:30 minikube systemd[1]: Stopped Docker Application Container Engine.
Nov 28 16:33:30 minikube systemd[1]: Starting Docker Application Container Engine...
Nov 28 16:33:30 minikube dockerd[41239]: time="2022-11-28T16:33:30.565761759Z" level=info msg="Starting up"
Nov 28 16:33:30 minikube dockerd[41239]: time="2022-11-28T16:33:30.567247139Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf"
Nov 28 16:33:30 minikube dockerd[41239]: time="2022-11-28T16:33:30.568649559Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 28 16:33:30 minikube dockerd[41239]: time="2022-11-28T16:33:30.568693609Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 28 16:33:30 minikube dockerd[41239]: time="2022-11-28T16:33:30.568740568Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 28 16:33:30 minikube dockerd[41239]: time="2022-11-28T16:33:30.568788912Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 28 16:33:30 minikube dockerd[41239]: time="2022-11-28T16:33:30.571327608Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 28 16:33:30 minikube dockerd[41239]: time="2022-11-28T16:33:30.571373046Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 28 16:33:30 minikube dockerd[41239]: time="2022-11-28T16:33:30.571420682Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 28 16:33:30 minikube dockerd[41239]: time="2022-11-28T16:33:30.571451333Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 28 16:33:30 minikube dockerd[41239]: time="2022-11-28T16:33:30.584691220Z" level=info msg="Loading containers: start."
Nov 28 16:33:30 minikube dockerd[41239]: time="2022-11-28T16:33:30.910354170Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 28 16:33:31 minikube dockerd[41239]: time="2022-11-28T16:33:31.035833901Z" level=info msg="Loading containers: done."
Nov 28 16:33:31 minikube dockerd[41239]: time="2022-11-28T16:33:31.058886814Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
Nov 28 16:33:31 minikube dockerd[41239]: time="2022-11-28T16:33:31.058961702Z" level=info msg="Daemon has completed initialization"
Nov 28 16:33:31 minikube systemd[1]: Started Docker Application Container Engine.
Nov 28 16:33:31 minikube dockerd[41239]: time="2022-11-28T16:33:31.092083124Z" level=info msg="API listen on [::]:2376"
Nov 28 16:33:31 minikube dockerd[41239]: time="2022-11-28T16:33:31.099286330Z" level=info msg="API listen on /var/run/docker.sock"

*
* ==> container status <==
*
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
sudo: crictl: command not found

*
* ==> describe nodes <==
*

*
* ==> dmesg <==
*
[Nov28 11:31] x86/cpu: VMX (outside TXT) disabled by BIOS
[ +0.399895] #29 #30 #31 #32 #33 #34 #35 #36 #37 #38 #39 #40 #41 #42 #43 #44 #45 #46 #47 #48 #49 #50 #51 #52 #53 #54 #55
[ +0.282728] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[ +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
[ +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
[ +0.000000] #57 #58 #59 #60 #61 #62 #63 #64 #65 #66 #67 #68 #69 #70 #71 #72 #73 #74 #75 #76 #77 #78 #79 #80 #81 #82 #83
[ +1.999659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.000196] platform eisa.0: EISA: Cannot allocate resource for mainboard
[ +0.000003] platform eisa.0: Cannot allocate resource for EISA slot 1
[ +0.000004] platform eisa.0: Cannot allocate resource for EISA slot 2
[ +0.000003] platform eisa.0: Cannot allocate resource for EISA slot 3
[ +0.000004] platform eisa.0: Cannot allocate resource for EISA slot 4
[ +0.000003] platform eisa.0: Cannot allocate resource for EISA slot 5
[ +0.000004] platform eisa.0: Cannot allocate resource for EISA slot 6
[ +0.000003] platform eisa.0: Cannot allocate resource for EISA slot 7
[ +0.000004] platform eisa.0: Cannot allocate resource for EISA slot 8
[ +0.973438] lpc_ich 0000:00:1f.0: No MFD cells added
[ +0.075281] i2c i2c-0: Systems with more than 4 memory slots not supported yet, not instantiating SPD
[ +0.169497] usb: port power management may be unreliable
[ +2.656136] sr 14:0:0:0: Power-on or device reset occurred
[ +0.030317] sd 15:0:0:0: Power-on or device reset occurred
[ +2.151146] pstore: ignoring unexpected backend 'efi'
[ +0.861748] power_meter ACPI000D:00: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info().
[ +0.860520] power_meter ACPI000D:00: Ignoring unsafe software power cap!
[Nov28 11:50] kauditd_printk_skb: 19 callbacks suppressed

*
* ==> kernel <==
*
16:33:52 up 5:02, 6 users, load average: 0.06, 0.04, 0.00
Linux minikube 5.15.0-53-generic #59-Ubuntu SMP Mon Oct 17 18:53:30 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04 LTS"

*
* ==> kubelet <==
*
-- No entries --
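The fatal error in this log is RUNTIME_ENABLE: minikube ran `sudo service cri-docker.socket start` on the SSH target and got exit status 5 with `Unit cri-docker.socket.service not found`. Two readings seem plausible (both are interpretations, not stated in the log): cri-dockerd may simply not be installed on the remote host, which the later `sudo: crictl: command not found` also hints at; or the legacy `service` wrapper mangled the unit name, since on systemd hosts it assumes its argument names a `.service` unit and appends that suffix before calling systemd. A minimal sketch of the suffix mangling, with illustrative variable names:

```shell
#!/bin/sh
# Sketch: a `service`-style wrapper appends ".service" to whatever
# unit name it is given, so a socket unit name gets mangled.
unit="cri-docker.socket"
requested="${unit}.service"     # what systemd ends up being asked for
echo "requested unit: ${requested}"   # -> requested unit: cri-docker.socket.service
# Addressing the socket unit directly avoids the bogus suffix:
echo "direct start:   systemctl start ${unit}"
```

Either way, the actionable check on the SSH target is whether cri-dockerd and its `cri-docker.service`/`cri-docker.socket` units exist at all (`systemctl list-unit-files | grep cri-docker`); minikube's generic/ssh driver expects docker, cri-dockerd, and crictl to be preinstalled on the remote host for Kubernetes versions that have dropped dockershim.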