*
* ==> Audit <==
*
|---------|--------------------------------|------------|-------------|---------|-------------------------------|-------------------------------|
| Command |              Args              |  Profile   |    User     | Version |          Start Time           |           End Time            |
|---------|--------------------------------|------------|-------------|---------|-------------------------------|-------------------------------|
| -p      | mount-test start               | mount-test | charliewang | v1.23.2 | Thu, 07 Oct 2021 10:12:46 EDT | Thu, 07 Oct 2021 10:13:45 EDT |
|         | --driver hyperkit              |            |             |         |                               |                               |
|         | --mount --mount-string         |            |             |         |                               |                               |
|         | /tmp/minikube-mount-test:/test |            |             |         |                               |                               |
|---------|--------------------------------|------------|-------------|---------|-------------------------------|-------------------------------|

*
* ==> Last Start <==
*
Last start log file not found at /Users/charliewang/.minikube/logs/lastStart.txt
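For reference, the Args column in the Audit table above wraps a single invocation across four rows; joined back together (profile, driver, and mount string taken verbatim from the table), the recorded command would have been something like:

  minikube -p mount-test start --driver hyperkit --mount --mount-string /tmp/minikube-mount-test:/test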

*
* ==> Docker <==
*
-- Journal begins at Thu 2021-10-07 14:12:57 UTC, ends at Thu 2021-10-07 14:14:36 UTC. --
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.322483106Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.322496609Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.322508847Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.322617657Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.322667568Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.322961358Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323009687Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323102565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323117497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323127206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323136405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323147789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323157657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323166717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323175192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323183868Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323226694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323304868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323324526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323334224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323510415Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323572250Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323604973Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Oct 07 14:13:22 mount-test dockerd[2253]: time="2021-10-07T14:13:22.323617016Z" level=info msg="containerd successfully booted in 0.032070s"
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.331562073Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.331659595Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.331683933Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.331693361Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.332840322Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.332890186Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.332911159Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.332921201Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.860343634Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.860396933Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.860408337Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.860414139Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.860420117Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.860425440Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Oct 07 14:13:22 mount-test dockerd[2245]: time="2021-10-07T14:13:22.860707385Z" level=info msg="Loading containers: start."
Oct 07 14:13:23 mount-test dockerd[2245]: time="2021-10-07T14:13:23.017772064Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 07 14:13:23 mount-test dockerd[2245]: time="2021-10-07T14:13:23.097125318Z" level=info msg="Loading containers: done."
Oct 07 14:13:23 mount-test dockerd[2245]: time="2021-10-07T14:13:23.129428601Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
Oct 07 14:13:23 mount-test dockerd[2245]: time="2021-10-07T14:13:23.129521925Z" level=info msg="Daemon has completed initialization"
Oct 07 14:13:23 mount-test systemd[1]: Started Docker Application Container Engine.
Oct 07 14:13:23 mount-test dockerd[2245]: time="2021-10-07T14:13:23.162204927Z" level=info msg="API listen on [::]:2376"
Oct 07 14:13:23 mount-test dockerd[2245]: time="2021-10-07T14:13:23.166669422Z" level=info msg="API listen on /var/run/docker.sock"
Oct 07 14:13:30 mount-test dockerd[2253]: time="2021-10-07T14:13:30.961194103Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1adc478764fddcc274a35db112f3355a99584b9b154c0d6adc9853e383833ac7 pid=3177
Oct 07 14:13:31 mount-test dockerd[2253]: time="2021-10-07T14:13:31.009584395Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/98cc16f7a03a496c736dcde4095a751f6bde27a9bd2a20d6ceef11e304a21cf6 pid=3208
Oct 07 14:13:31 mount-test dockerd[2253]: time="2021-10-07T14:13:31.152885711Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/41c5bd709ce74005b5e4b2a69a7718da7639bff3330a33b029dc0079e3392252 pid=3278
Oct 07 14:13:31 mount-test dockerd[2253]: time="2021-10-07T14:13:31.209168993Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e948a0ce46aad1ac1ee9e23025661f2157af430f3b89672526562964ff98aca2 pid=3317
Oct 07 14:13:31 mount-test dockerd[2253]: time="2021-10-07T14:13:31.929546893Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/19b6659c1519a007f5348ded4aacafa0ccf300b7d2c52068c92272d2c129d902 pid=3406
Oct 07 14:13:32 mount-test dockerd[2253]: time="2021-10-07T14:13:32.061073865Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d3a11092dde0d71fe8cd42d4362e459eda11350fb3da2fc84cbbf7ce2aeed5fb pid=3442
Oct 07 14:13:32 mount-test dockerd[2253]: time="2021-10-07T14:13:32.253662816Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f13a6875f58f0eefdba1f51d9bc20362a2b710a77b095b26cd3b47c570c4469d pid=3500
Oct 07 14:13:32 mount-test dockerd[2253]: time="2021-10-07T14:13:32.293460880Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b097ac38118d950f81f62c4f4f7ba5d2a73dade26e9ccccccf6fbd8c50e948ef pid=3536
Oct 07 14:13:57 mount-test dockerd[2253]: time="2021-10-07T14:13:57.029947954Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d3f57cf96620f9ccbc635ca9753da45eacda12d4530b0da679648954d43bbc11 pid=4394
Oct 07 14:13:57 mount-test dockerd[2253]: time="2021-10-07T14:13:57.170451322Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e99fdf33d6cb6a44eabdfd6e1fcb455cb83641e5c2d19837fd60758c6f0962c1 pid=4440
Oct 07 14:13:57 mount-test dockerd[2253]: time="2021-10-07T14:13:57.433512248Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7a6d1d20dec60634be4f76680e7f735babaf9c4fa13714cf0f00c5aab0614749 pid=4507
Oct 07 14:13:57 mount-test dockerd[2253]: time="2021-10-07T14:13:57.619412908Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f023f9ba1c3d555b315e76e359102c0aef67eee3eeb14d6c0dd0a7ef298ed9c2 pid=4559
Oct 07 14:13:58 mount-test dockerd[2253]: time="2021-10-07T14:13:58.005822091Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d3a13f8b4d35d260a1ab25ccf5326e85e57508a0d8d4e7d062837c2e612891fd pid=4664
Oct 07 14:13:58 mount-test dockerd[2253]: time="2021-10-07T14:13:58.481293839Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ddc032f2517504d18d88ec27fe8625ef9c2fe7744b0d0f779366a11aa78e8836 pid=4734

*
* ==> container status <==
*
CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
ddc032f251750       6e38f40d628db       38 seconds ago       Running             storage-provisioner       0                   f023f9ba1c3d5
d3a13f8b4d35d       8d147537fb7d1       39 seconds ago       Running             coredns                   0                   d3f57cf96620f
7a6d1d20dec60       873127efbc8a7       39 seconds ago       Running             kube-proxy                0                   e99fdf33d6cb6
b097ac38118d9       b51ddc1014b04       About a minute ago   Running             kube-scheduler            0                   e948a0ce46aad
f13a6875f58f0       5425bcbd23c54       About a minute ago   Running             kube-controller-manager   0                   41c5bd709ce74
d3a11092dde0d       e64579b7d8862       About a minute ago   Running             kube-apiserver            0                   98cc16f7a03a4
19b6659c1519a       0048118155842       About a minute ago   Running             etcd                      0                   1adc478764fdd
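The 13-character IDs in the CONTAINER and POD ID columns above are prefixes of the 64-character containerd task IDs in the Docker journal, which is one way to tie a container back to its runtime entry; for example, taking two IDs from this log:

  ddc032f251750 (storage-provisioner)  ->  .../io.containerd.runtime.v2.task/moby/ddc032f2517504d18d88ec27fe8625ef9c2fe7744b0d0f779366a11aa78e8836 pid=4734
  19b6659c1519a (etcd)                 ->  .../io.containerd.runtime.v2.task/moby/19b6659c1519a007f5348ded4aacafa0ccf300b7d2c52068c92272d2c129d902 pid=3406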

*
* ==> coredns [d3a13f8b4d35] <==
*
.:53
[INFO] plugin/reload: Running configuration MD5 = 08e2b174e0f0a30a2e82df9c995f4a34
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5

*
* ==> describe nodes <==
*
Name:               mount-test
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=mount-test
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=0a0ad764652082477c00d51d2475284b5d39ceed
                    minikube.k8s.io/name=mount-test
                    minikube.k8s.io/updated_at=2021_10_07T10_13_43_0700
                    minikube.k8s.io/version=v1.23.2
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 07 Oct 2021 14:13:39 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  mount-test
  AcquireTime:     <unset>
  RenewTime:       Thu, 07 Oct 2021 14:14:34 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 07 Oct 2021 14:13:53 +0000   Thu, 07 Oct 2021 14:13:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 07 Oct 2021 14:13:53 +0000   Thu, 07 Oct 2021 14:13:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 07 Oct 2021 14:13:53 +0000   Thu, 07 Oct 2021 14:13:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 07 Oct 2021 14:13:53 +0000   Thu, 07 Oct 2021 14:13:53 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.64.12
  Hostname:    mount-test
Capacity:
  cpu:                4
  ephemeral-storage:  53488004Ki
  hugepages-2Mi:      0
  memory:             3934844Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  53488004Ki
  hugepages-2Mi:      0
  memory:             3934844Ki
  pods:               110
System Info:
  Machine ID:                 70e204d2ae554c0c94433c83ac52f35c
  System UUID:                a54e11ec-0000-0000-a3fe-8c85905d7a34
  Boot ID:                    7b01fbb2-83cf-4261-8056-b4bee304c15e
  Kernel Version:             4.19.202
  OS Image:                   Buildroot 2021.02.4
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.8
  Kubelet Version:            v1.22.2
  Kube-Proxy Version:         v1.22.2
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace    Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                                 ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-78fcd69978-jhxm6             100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     40s
  kube-system  etcd-mount-test                      100m (2%)     0 (0%)      100Mi (2%)       0 (0%)         53s
  kube-system  kube-apiserver-mount-test            250m (6%)     0 (0%)      0 (0%)           0 (0%)         56s
  kube-system  kube-controller-manager-mount-test   200m (5%)     0 (0%)      0 (0%)           0 (0%)         52s
  kube-system  kube-proxy-t5qgq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
  kube-system  kube-scheduler-mount-test            100m (2%)     0 (0%)      0 (0%)           0 (0%)         53s
  kube-system  storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (18%)  0 (0%)
  memory             170Mi (4%)  170Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From     Message
  ----    ------                   ----               ----     -------
  Normal  Starting                 66s                kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  66s (x3 over 66s)  kubelet  Node mount-test status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    66s (x3 over 66s)  kubelet  Node mount-test status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     66s (x2 over 66s)  kubelet  Node mount-test status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  66s                kubelet  Updated Node Allocatable limit across pods
  Normal  Starting                 53s                kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  53s                kubelet  Node mount-test status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    53s                kubelet  Node mount-test status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     53s                kubelet  Node mount-test status is now: NodeHasSufficientPID
  Normal  NodeNotReady             53s                kubelet  Node mount-test status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  53s                kubelet  Updated Node Allocatable limit across pods
  Normal  NodeReady                43s                kubelet  Node mount-test status is now: NodeReady
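This section looks like standard kubectl describe node mount-test output. The request/limit percentages are relative to the node's allocatable capacity; as a quick arithmetic check against the 4-CPU / 3934844Ki figures above:

  cpu requests:    750m  / 4000m      = 18.75%  (shown as 18%)
  memory requests: 170Mi / 3934844Ki  ~ 4.4%    (shown as 4%)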

*
* ==> dmesg <==
*
[Oct 7 14:12] ERROR: earlyprintk= earlyser already used
[  +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED
[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[  +0.204565] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20180810/tbprint-173)
[  +5.970213] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-182)
[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-618)
[  +0.014498] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +3.291742] systemd-fstab-generator[1140]: Ignoring "noauto" for root device
[  +0.039360] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[  +0.863409] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1674 comm=systemd-network
[  +0.487006] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[Oct 7 14:13] vboxguest: loading out-of-tree module taints kernel.
[  +0.004805] vboxguest: PCI device not found, probably running on physical hardware.
[  +3.917560] systemd-fstab-generator[2043]: Ignoring "noauto" for root device
[  +0.159217] systemd-fstab-generator[2054]: Ignoring "noauto" for root device
[ +16.936760] systemd-fstab-generator[2236]: Ignoring "noauto" for root device
[  +1.859302] kauditd_printk_skb: 68 callbacks suppressed
[  +0.464028] systemd-fstab-generator[2402]: Ignoring "noauto" for root device
[  +0.135152] systemd-fstab-generator[2413]: Ignoring "noauto" for root device
[  +0.150489] systemd-fstab-generator[2424]: Ignoring "noauto" for root device
[  +5.811619] systemd-fstab-generator[2679]: Ignoring "noauto" for root device
[  +0.733428] kauditd_printk_skb: 107 callbacks suppressed
[ +11.593168] systemd-fstab-generator[3965]: Ignoring "noauto" for root device
[ +14.305966] kauditd_printk_skb: 38 callbacks suppressed

*
* ==> etcd [19b6659c1519] <==
*
{"level":"info","ts":"2021-10-07T14:13:32.659Z","caller":"etcdmain/etcd.go:72","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.64.12:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--initial-advertise-peer-urls=https://192.168.64.12:2380","--initial-cluster=mount-test=https://192.168.64.12:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.64.12:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.64.12:2380","--name=mount-test","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2021-10-07T14:13:32.660Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.64.12:2380"]}
{"level":"info","ts":"2021-10-07T14:13:32.660Z","caller":"embed/etcd.go:478","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-10-07T14:13:32.661Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.12:2379"]}
{"level":"info","ts":"2021-10-07T14:13:32.661Z","caller":"embed/etcd.go:307","msg":"starting an etcd server","etcd-version":"3.5.0","git-sha":"946a5a6f2","go-version":"go1.16.3","go-os":"linux","go-arch":"amd64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":false,"name":"mount-test","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.64.12:2380"],"listen-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.12:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"mount-test=https://192.168.64.12:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2021-10-07T14:13:32.678Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"16.496502ms"}
{"level":"info","ts":"2021-10-07T14:13:32.686Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"893b0beac40933c0","cluster-id":"51ecae2d8304f353"}
{"level":"info","ts":"2021-10-07T14:13:32.686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 switched to configuration voters=()"}
{"level":"info","ts":"2021-10-07T14:13:32.686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became follower at term 0"}
{"level":"info","ts":"2021-10-07T14:13:32.686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 893b0beac40933c0 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2021-10-07T14:13:32.686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became follower at term 1"}
{"level":"info","ts":"2021-10-07T14:13:32.686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 switched to configuration voters=(9888510509761246144)"}
{"level":"warn","ts":"2021-10-07T14:13:32.691Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2021-10-07T14:13:32.693Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2021-10-07T14:13:32.695Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2021-10-07T14:13:32.698Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"893b0beac40933c0","local-server-version":"3.5.0","cluster-version":"to_be_decided"}
{"level":"info","ts":"2021-10-07T14:13:32.698Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"893b0beac40933c0","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2021-10-07T14:13:32.701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 switched to configuration voters=(9888510509761246144)"}
{"level":"info","ts":"2021-10-07T14:13:32.706Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","added-peer-id":"893b0beac40933c0","added-peer-peer-urls":["https://192.168.64.12:2380"]}
{"level":"info","ts":"2021-10-07T14:13:32.717Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-10-07T14:13:32.718Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.64.12:2380"}
{"level":"info","ts":"2021-10-07T14:13:32.718Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.64.12:2380"}
{"level":"info","ts":"2021-10-07T14:13:32.722Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"893b0beac40933c0","initial-advertise-peer-urls":["https://192.168.64.12:2380"],"listen-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.12:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2021-10-07T14:13:32.722Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2021-10-07T14:13:33.287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 is starting a new election at term 1"}
{"level":"info","ts":"2021-10-07T14:13:33.287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became pre-candidate at term 1"}
{"level":"info","ts":"2021-10-07T14:13:33.287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgPreVoteResp from 893b0beac40933c0 at term 1"}
{"level":"info","ts":"2021-10-07T14:13:33.288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became candidate at term 2"}
{"level":"info","ts":"2021-10-07T14:13:33.288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgVoteResp from 893b0beac40933c0 at term 2"}
{"level":"info","ts":"2021-10-07T14:13:33.288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became leader at term 2"}
{"level":"info","ts":"2021-10-07T14:13:33.288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 893b0beac40933c0 elected leader 893b0beac40933c0 at term 2"}
{"level":"info","ts":"2021-10-07T14:13:33.289Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-10-07T14:13:33.290Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-10-07T14:13:33.288Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"893b0beac40933c0","local-member-attributes":"{Name:mount-test ClientURLs:[https://192.168.64.12:2379]}","request-path":"/0/members/893b0beac40933c0/attributes","cluster-id":"51ecae2d8304f353","publish-timeout":"7s"}
{"level":"info","ts":"2021-10-07T14:13:33.292Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.12:2379"}
{"level":"info","ts":"2021-10-07T14:13:33.293Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2021-10-07T14:13:33.293Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"} {"level":"info","ts":"2021-10-07T14:13:33.297Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"info","ts":"2021-10-07T14:13:33.298Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2021-10-07T14:13:33.306Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","cluster-version":"3.5"} {"level":"info","ts":"2021-10-07T14:13:33.316Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2021-10-07T14:13:33.316Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2021-10-07T14:13:51.491Z","caller":"traceutil/trace.go:171","msg":"trace[1240104406] linearizableReadLoop","detail":"{readStateIndex:303; appliedIndex:302; }","duration":"110.406686ms","start":"2021-10-07T14:13:51.380Z","end":"2021-10-07T14:13:51.491Z","steps":["trace[1240104406] 'read index received' (duration: 97.313154ms)","trace[1240104406] 'applied index is now lower than readState.Index' (duration: 13.092692ms)"],"step_count":2} {"level":"info","ts":"2021-10-07T14:13:51.491Z","caller":"traceutil/trace.go:171","msg":"trace[393349549] transaction","detail":"{read_only:false; response_revision:291; number_of_response:1; }","duration":"144.88761ms","start":"2021-10-07T14:13:51.346Z","end":"2021-10-07T14:13:51.491Z","steps":["trace[393349549] 'process raft request' (duration: 131.874518ms)","trace[393349549] 'compare' (duration: 12.848673ms)"],"step_count":2} {"level":"warn","ts":"2021-10-07T14:13:51.491Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.998512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:242"} {"level":"info","ts":"2021-10-07T14:13:51.491Z","caller":"traceutil/trace.go:171","msg":"trace[177464853] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:291; }","duration":"111.281766ms","start":"2021-10-07T14:13:51.380Z","end":"2021-10-07T14:13:51.491Z","steps":["trace[177464853] 'agreement among raft nodes before linearized reading' (duration: 110.938394ms)"],"step_count":1} {"level":"warn","ts":"2021-10-07T14:13:51.492Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.856549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-mount-test\" ","response":"range_response_count:1 size:4329"} {"level":"info","ts":"2021-10-07T14:13:51.492Z","caller":"traceutil/trace.go:171","msg":"trace[1469350639] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-mount-test; range_end:; response_count:1; response_revision:291; }","duration":"109.238775ms","start":"2021-10-07T14:13:51.383Z","end":"2021-10-07T14:13:51.492Z","steps":["trace[1469350639] 'agreement among raft nodes before linearized reading' (duration: 108.778285ms)"],"step_count":1} {"level":"info","ts":"2021-10-07T14:14:09.006Z","caller":"traceutil/trace.go:171","msg":"trace[1223470476] 
linearizableReadLoop","detail":"{readStateIndex:497; appliedIndex:497; }","duration":"158.790589ms","start":"2021-10-07T14:14:08.847Z","end":"2021-10-07T14:14:09.006Z","steps":["trace[1223470476] 'read index received' (duration: 158.784336ms)","trace[1223470476] 'applied index is now lower than readState.Index' (duration: 5.453µs)"],"step_count":2} {"level":"warn","ts":"2021-10-07T14:14:09.013Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"165.769485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2021-10-07T14:14:09.013Z","caller":"traceutil/trace.go:171","msg":"trace[1549244069] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:482; }","duration":"165.847987ms","start":"2021-10-07T14:14:08.847Z","end":"2021-10-07T14:14:09.013Z","steps":["trace[1549244069] 'agreement among raft nodes before linearized reading' (duration: 159.006953ms)"],"step_count":1} * * ==> kernel <== * 14:14:37 up 1 min, 0 users, load average: 0.97, 0.36, 0.13 Linux mount-test 4.19.202 #1 SMP Thu Sep 16 17:44:03 UTC 2021 x86_64 GNU/Linux PRETTY_NAME="Buildroot 2021.02.4" * * ==> kube-apiserver [d3a11092dde0] <== * W1007 14:13:36.446076 1 genericapiserver.go:455] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W1007 14:13:36.450970 1 genericapiserver.go:455] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources. W1007 14:13:36.460878 1 genericapiserver.go:455] Skipping API apps/v1beta2 because it has no resources. W1007 14:13:36.460932 1 genericapiserver.go:455] Skipping API apps/v1beta1 because it has no resources. W1007 14:13:36.465550 1 genericapiserver.go:455] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources. I1007 14:13:36.474005 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I1007 14:13:36.474054 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. W1007 14:13:36.534489 1 genericapiserver.go:455] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources. 
I1007 14:13:39.223604 1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1007 14:13:39.223726 1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1007 14:13:39.224035 1 dynamic_serving_content.go:129] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I1007 14:13:39.225075 1 secure_serving.go:266] Serving securely on [::]:8443
I1007 14:13:39.225133 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1007 14:13:39.225278 1 available_controller.go:491] Starting AvailableConditionController
I1007 14:13:39.225393 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1007 14:13:39.225531 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1007 14:13:39.225692 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1007 14:13:39.226653 1 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I1007 14:13:39.227348 1 autoregister_controller.go:141] Starting autoregister controller
I1007 14:13:39.227502 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1007 14:13:39.227839 1 apf_controller.go:299] Starting API Priority and Fairness config controller
I1007 14:13:39.229242 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I1007 14:13:39.229734 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1007 14:13:39.229935 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I1007 14:13:39.231178 1 controller.go:85] Starting OpenAPI controller
I1007 14:13:39.231240 1 naming_controller.go:291] Starting NamingConditionController
I1007 14:13:39.231266 1 establishing_controller.go:76] Starting EstablishingController
I1007 14:13:39.231816 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1007 14:13:39.231872 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1007 14:13:39.231889 1 crd_finalizer.go:266] Starting CRDFinalizer
I1007 14:13:39.233243 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1007 14:13:39.233289 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1007 14:13:39.233334 1 controller.go:83] Starting OpenAPI AggregationController
E1007 14:13:39.242977 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.64.12, ResourceVersion: 0, AdditionalErrorMsg:
I1007 14:13:39.270625 1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1007 14:13:39.276727 1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1007 14:13:39.325998 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1007 14:13:39.326307 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1007 14:13:39.333311 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I1007 14:13:39.333452 1 shared_informer.go:247] Caches are synced for crd-autoregister
I1007 14:13:39.334043 1 apf_controller.go:304] Running API Priority and Fairness config worker
I1007 14:13:39.335051 1 cache.go:39] Caches are synced for autoregister controller
I1007 14:13:39.374763 1 shared_informer.go:247] Caches are synced for node_authorizer
I1007 14:13:39.387652 1 controller.go:611] quota admission added evaluator for: namespaces
I1007 14:13:40.223980 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1007 14:13:40.224142 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1007 14:13:40.238283 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I1007 14:13:40.245344 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I1007 14:13:40.245390 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1007 14:13:41.106254 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1007 14:13:41.204389 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1007 14:13:41.368701 1 controller.go:611] quota admission added evaluator for: serviceaccounts
W1007 14:13:41.380992 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.64.12]
I1007 14:13:41.382896 1 controller.go:611] quota admission added evaluator for: endpoints
I1007 14:13:41.393884 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1007 14:13:43.068469 1 controller.go:611] quota admission added evaluator for: deployments.apps
I1007 14:13:43.128754 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I1007 14:13:43.534278 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I1007 14:13:55.804854 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I1007 14:13:56.112039 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps

*
* ==> kube-controller-manager [f13a6875f58f] <==
*
I1007 14:13:55.414726 1 controllermanager.go:577] Started "namespace"
I1007 14:13:55.414878 1 namespace_controller.go:200] Starting namespace controller
I1007 14:13:55.415024 1 shared_informer.go:240] Waiting for caches to sync for namespace
I1007 14:13:55.550678 1 controllermanager.go:577] Started "pv-protection"
I1007 14:13:55.550900 1 pv_protection_controller.go:83] Starting PV protection controller
I1007 14:13:55.550913 1 shared_informer.go:240] Waiting for caches to sync for PV protection
I1007 14:13:55.559265 1 shared_informer.go:240] Waiting for caches to sync for resource quota
W1007 14:13:55.582812 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="mount-test" does not exist
I1007 14:13:55.604675 1 shared_informer.go:247] Caches are synced for daemon sets
I1007 14:13:55.606147 1 shared_informer.go:247] Caches are synced for TTL
I1007 14:13:55.608090 1 shared_informer.go:247] Caches are synced for deployment
I1007 14:13:55.609954 1 shared_informer.go:247] Caches are synced for cronjob
I1007 14:13:55.610128 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I1007 14:13:55.612948 1 shared_informer.go:247] Caches are synced for persistent volume
I1007 14:13:55.618062 1 shared_informer.go:247] Caches are synced for node
I1007 14:13:55.619530 1 range_allocator.go:172] Starting range CIDR allocator
I1007 14:13:55.619614 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I1007 14:13:55.619648 1 shared_informer.go:247] Caches are synced for cidrallocator
I1007 14:13:55.621504 1 shared_informer.go:247] Caches are synced for GC
I1007 14:13:55.622037 1 shared_informer.go:247] Caches are synced for job
I1007 14:13:55.631365 1 shared_informer.go:247] Caches are synced for PVC protection
I1007 14:13:55.646834 1 range_allocator.go:373] Set node mount-test PodCIDR to [10.244.0.0/24]
I1007 14:13:55.647709 1 shared_informer.go:247] Caches are synced for HPA
I1007 14:13:55.650741 1 shared_informer.go:247] Caches are synced for expand
I1007 14:13:55.651075 1 shared_informer.go:247] Caches are synced for PV protection
I1007 14:13:55.652410 1 shared_informer.go:247] Caches are synced for TTL after finished
I1007 14:13:55.652621 1 shared_informer.go:247] Caches are synced for stateful set
I1007 14:13:55.652884 1 shared_informer.go:247] Caches are synced for ReplicaSet
I1007 14:13:55.652972 1 shared_informer.go:247] Caches are synced for taint
I1007 14:13:55.653411 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
I1007 14:13:55.653798 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I1007 14:13:55.653029 1 shared_informer.go:247] Caches are synced for attach detach
I1007 14:13:55.652999 1 shared_informer.go:247] Caches are synced for ephemeral
I1007 14:13:55.653903 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
W1007 14:13:55.654062 1 node_lifecycle_controller.go:1013] Missing timestamp for Node mount-test. Assuming now as a timestamp.
I1007 14:13:55.657257 1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal.
I1007 14:13:55.654061 1 event.go:291] "Event occurred" object="mount-test" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node mount-test event: Registered Node mount-test in Controller"
I1007 14:13:55.657677 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1007 14:13:55.658077 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I1007 14:13:55.658902 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I1007 14:13:55.660216 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I1007 14:13:55.664312 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1007 14:13:55.665566 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I1007 14:13:55.700490 1 shared_informer.go:247] Caches are synced for crt configmap
I1007 14:13:55.701073 1 shared_informer.go:247] Caches are synced for service account
I1007 14:13:55.715602 1 shared_informer.go:247] Caches are synced for namespace
I1007 14:13:55.807459 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 1"
I1007 14:13:55.837113 1 shared_informer.go:247] Caches are synced for disruption
I1007 14:13:55.837251 1 disruption.go:371] Sending events to api server.
I1007 14:13:55.843435 1 shared_informer.go:247] Caches are synced for ReplicationController
I1007 14:13:55.849299 1 shared_informer.go:247] Caches are synced for endpoint
I1007 14:13:55.864510 1 shared_informer.go:247] Caches are synced for resource quota
I1007 14:13:55.864562 1 shared_informer.go:247] Caches are synced for resource quota
I1007 14:13:55.881556 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I1007 14:13:55.904475 1 shared_informer.go:247] Caches are synced for endpoint_slice
I1007 14:13:56.130420 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-t5qgq"
I1007 14:13:56.266668 1 shared_informer.go:247] Caches are synced for garbage collector
I1007 14:13:56.266791 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1007 14:13:56.267516 1 shared_informer.go:247] Caches are synced for garbage collector
I1007 14:13:56.558038 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-jhxm6"

*
* ==> kube-proxy [7a6d1d20dec6] <==
*
I1007 14:13:57.725980 1 node.go:172] Successfully retrieved node IP: 192.168.64.12
I1007 14:13:57.726867 1 server_others.go:140] Detected node IP 192.168.64.12
W1007 14:13:57.727286 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
W1007 14:13:57.774187 1 server_others.go:197] No iptables support for IPv6: exit status 3
I1007 14:13:57.774446 1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
I1007 14:13:57.774726 1 server_others.go:212] Using iptables Proxier.
I1007 14:13:57.776858 1 server.go:649] Version: v1.22.2
I1007 14:13:57.778644 1 config.go:315] Starting service config controller
I1007 14:13:57.778685 1 shared_informer.go:240] Waiting for caches to sync for service config
I1007 14:13:57.778704 1 config.go:224] Starting endpoint slice config controller
I1007 14:13:57.778708 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
E1007 14:13:57.793451 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"mount-test.16abc5218f2411be", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc04fde096e5fdac9, ext:217077037, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-mount-test", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"mount-test", UID:"mount-test", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "mount-test.16abc5218f2411be" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
I1007 14:13:57.878889 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1007 14:13:57.878965 1 shared_informer.go:247] Caches are synced for service config
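The "Unknown proxy mode" warning above just means the mode field was left empty, so kube-proxy falls back to the iptables proxier (confirmed by the "Using iptables Proxier." line). In a kubeadm-provisioned cluster like this one, that field normally lives in the kube-proxy ConfigMap; one way to inspect it, assuming kubectl is pointed at this cluster:

  kubectl -n kube-system get configmap kube-proxy -o yaml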
W1007 14:13:39.320899 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1007 14:13:39.387423 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I1007 14:13:39.388088 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1007 14:13:39.388198 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1007 14:13:39.388229 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E1007 14:13:39.398035 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1007 14:13:39.398217 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1007 14:13:39.398444 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1007 14:13:39.398522 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E1007 14:13:39.398580 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1007 14:13:39.398679 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1007 14:13:39.408251 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1007 14:13:39.409544 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1007 14:13:39.409687 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1007 14:13:39.410969 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1007 14:13:39.411081 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1007 14:13:39.411225 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1007 14:13:39.411739 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1007 14:13:39.412116 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1007 14:13:39.421181 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1007 14:13:40.240973 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1007 14:13:40.363739 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1007 14:13:40.467234 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1007 14:13:40.483128 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1007 14:13:40.512123 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1007 14:13:40.690407 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1007 14:13:40.802668 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1007 14:13:41.000518 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I1007 14:13:43.388678 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
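
Note: the "forbidden" reflector errors above are RBAC denials emitted while the control plane was still bootstrapping; kube-scheduler began listing resources before its role bindings existed, and the last line shows its informer caches did sync at 14:13:43. On a settled cluster, an impersonation check along these lines (an illustrative command, not part of the captured log) should answer "yes":

    kubectl auth can-i list nodes --as=system:kube-scheduler
    kubectl auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler
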
*
* ==> kubelet <==
*
-- Journal begins at Thu 2021-10-07 14:12:57 UTC, ends at Thu 2021-10-07 14:14:37 UTC. --
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.776861 3972 state_mem.go:36] "Initialized new in-memory state store"
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.777362 3972 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.777408 3972 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.777423 3972 policy_none.go:49] "None policy: Start"
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.783939 3972 memory_manager.go:168] "Starting memorymanager" policy="None"
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.784013 3972 state_mem.go:35] "Initializing new in-memory state store"
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.784211 3972 state_mem.go:75] "Updated machine memory state"
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.792170 3972 manager.go:607] "Failed to retrieve checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.793721 3972 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.887245 3972 topology_manager.go:200] "Topology Admit Handler"
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.887445 3972 topology_manager.go:200] "Topology Admit Handler"
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.887517 3972 topology_manager.go:200] "Topology Admit Handler"
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.887557 3972 topology_manager.go:200] "Topology Admit Handler"
Oct 07 14:13:43 mount-test kubelet[3972]: E1007 14:13:43.910108 3972 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-apiserver-mount-test\" already exists" pod="kube-system/kube-apiserver-mount-test"
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.917159 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5482578a508fc92102e0d0850001ddbf-ca-certs\") pod \"kube-controller-manager-mount-test\" (UID: \"5482578a508fc92102e0d0850001ddbf\") "
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.917609 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/1307dc3d978e1010f241b50bc468969f-etcd-certs\") pod \"etcd-mount-test\" (UID: \"1307dc3d978e1010f241b50bc468969f\") "
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.917950 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/357444121ef7794bc6df813fef265b48-ca-certs\") pod \"kube-apiserver-mount-test\" (UID: \"357444121ef7794bc6df813fef265b48\") "
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.918705 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/357444121ef7794bc6df813fef265b48-usr-share-ca-certificates\") pod \"kube-apiserver-mount-test\" (UID: \"357444121ef7794bc6df813fef265b48\") "
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.919271 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5482578a508fc92102e0d0850001ddbf-flexvolume-dir\") pod \"kube-controller-manager-mount-test\" (UID: \"5482578a508fc92102e0d0850001ddbf\") "
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.919884 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5482578a508fc92102e0d0850001ddbf-k8s-certs\") pod \"kube-controller-manager-mount-test\" (UID: \"5482578a508fc92102e0d0850001ddbf\") "
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.920495 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5482578a508fc92102e0d0850001ddbf-kubeconfig\") pod \"kube-controller-manager-mount-test\" (UID: \"5482578a508fc92102e0d0850001ddbf\") "
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.921055 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5482578a508fc92102e0d0850001ddbf-usr-share-ca-certificates\") pod \"kube-controller-manager-mount-test\" (UID: \"5482578a508fc92102e0d0850001ddbf\") "
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.921463 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47c244cb083d2092622647aeeae0e8d8-kubeconfig\") pod \"kube-scheduler-mount-test\" (UID: \"47c244cb083d2092622647aeeae0e8d8\") "
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.921668 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/1307dc3d978e1010f241b50bc468969f-etcd-data\") pod \"etcd-mount-test\" (UID: \"1307dc3d978e1010f241b50bc468969f\") "
Oct 07 14:13:43 mount-test kubelet[3972]: I1007 14:13:43.922046 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/357444121ef7794bc6df813fef265b48-k8s-certs\") pod \"kube-apiserver-mount-test\" (UID: \"357444121ef7794bc6df813fef265b48\") "
Oct 07 14:13:44 mount-test kubelet[3972]: I1007 14:13:44.466849 3972 apiserver.go:52] "Watching apiserver"
Oct 07 14:13:44 mount-test kubelet[3972]: I1007 14:13:44.938189 3972 reconciler.go:157] "Reconciler: start to sync state"
Oct 07 14:13:45 mount-test kubelet[3972]: E1007 14:13:45.065971 3972 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"etcd-mount-test\" already exists" pod="kube-system/etcd-mount-test"
Oct 07 14:13:45 mount-test kubelet[3972]: E1007 14:13:45.313695 3972 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-mount-test\" already exists" pod="kube-system/kube-controller-manager-mount-test"
Oct 07 14:13:45 mount-test kubelet[3972]: E1007 14:13:45.474276 3972 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-apiserver-mount-test\" already exists" pod="kube-system/kube-apiserver-mount-test"
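
Note: the "Failed creating a mirror pod ... already exists" errors are expected after a kubelet restart: the API-server copies (mirror pods) of the static control-plane pods survive from the previous kubelet process, so the create calls fail harmlessly. One illustrative way to confirm a pod is a static-pod mirror (assuming the standard kubelet config-source annotation, which should print "file"):

    kubectl -n kube-system get pod kube-apiserver-mount-test -o jsonpath='{.metadata.annotations.kubernetes\.io/config\.source}'
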
Oct 07 14:13:45 mount-test kubelet[3972]: I1007 14:13:45.656819 3972 request.go:665] Waited for 1.013529195s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
Oct 07 14:13:45 mount-test kubelet[3972]: E1007 14:13:45.675893 3972 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-scheduler-mount-test\" already exists" pod="kube-system/kube-scheduler-mount-test"
Oct 07 14:13:55 mount-test kubelet[3972]: I1007 14:13:55.675788 3972 topology_manager.go:200] "Topology Admit Handler"
Oct 07 14:13:55 mount-test kubelet[3972]: I1007 14:13:55.692032 3972 kuberuntime_manager.go:1075] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Oct 07 14:13:55 mount-test kubelet[3972]: I1007 14:13:55.692851 3972 docker_service.go:359] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Oct 07 14:13:55 mount-test kubelet[3972]: I1007 14:13:55.693485 3972 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
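
Note: at 14:13:55 the kubelet picks up the node's pod CIDR (10.244.0.0/24) and forwards it to the Docker CRI shim. The value originates from node.spec.podCIDR, assigned by the controller manager; an illustrative read-back:

    kubectl get node mount-test -o jsonpath='{.spec.podCIDR}'
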
Oct 07 14:13:55 mount-test kubelet[3972]: I1007 14:13:55.717922 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0c7d5e2e-18b0-46fe-8a22-99510e83f1ca-tmp\") pod \"storage-provisioner\" (UID: \"0c7d5e2e-18b0-46fe-8a22-99510e83f1ca\") "
Oct 07 14:13:55 mount-test kubelet[3972]: I1007 14:13:55.717996 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j6hd\" (UniqueName: \"kubernetes.io/projected/0c7d5e2e-18b0-46fe-8a22-99510e83f1ca-kube-api-access-6j6hd\") pod \"storage-provisioner\" (UID: \"0c7d5e2e-18b0-46fe-8a22-99510e83f1ca\") "
Oct 07 14:13:55 mount-test kubelet[3972]: E1007 14:13:55.827906 3972 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Oct 07 14:13:55 mount-test kubelet[3972]: E1007 14:13:55.827939 3972 projected.go:199] Error preparing data for projected volume kube-api-access-6j6hd for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
Oct 07 14:13:55 mount-test kubelet[3972]: E1007 14:13:55.828046 3972 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/0c7d5e2e-18b0-46fe-8a22-99510e83f1ca-kube-api-access-6j6hd podName:0c7d5e2e-18b0-46fe-8a22-99510e83f1ca nodeName:}" failed. No retries permitted until 2021-10-07 14:13:56.328021507 +0000 UTC m=+13.336426603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6j6hd" (UniqueName: "kubernetes.io/projected/0c7d5e2e-18b0-46fe-8a22-99510e83f1ca-kube-api-access-6j6hd") pod "storage-provisioner" (UID: "0c7d5e2e-18b0-46fe-8a22-99510e83f1ca") : configmap "kube-root-ca.crt" not found
Oct 07 14:13:56 mount-test kubelet[3972]: I1007 14:13:56.141157 3972 topology_manager.go:200] "Topology Admit Handler"
Oct 07 14:13:56 mount-test kubelet[3972]: I1007 14:13:56.223102 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/720fd3e1-9ab7-4377-b55e-fc4654aed1a3-lib-modules\") pod \"kube-proxy-t5qgq\" (UID: \"720fd3e1-9ab7-4377-b55e-fc4654aed1a3\") "
Oct 07 14:13:56 mount-test kubelet[3972]: I1007 14:13:56.223573 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/720fd3e1-9ab7-4377-b55e-fc4654aed1a3-kube-proxy\") pod \"kube-proxy-t5qgq\" (UID: \"720fd3e1-9ab7-4377-b55e-fc4654aed1a3\") "
Oct 07 14:13:56 mount-test kubelet[3972]: I1007 14:13:56.223930 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/720fd3e1-9ab7-4377-b55e-fc4654aed1a3-xtables-lock\") pod \"kube-proxy-t5qgq\" (UID: \"720fd3e1-9ab7-4377-b55e-fc4654aed1a3\") "
Oct 07 14:13:56 mount-test kubelet[3972]: I1007 14:13:56.224126 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v5hh\" (UniqueName: \"kubernetes.io/projected/720fd3e1-9ab7-4377-b55e-fc4654aed1a3-kube-api-access-4v5hh\") pod \"kube-proxy-t5qgq\" (UID: \"720fd3e1-9ab7-4377-b55e-fc4654aed1a3\") "
Oct 07 14:13:56 mount-test kubelet[3972]: E1007 14:13:56.349907 3972 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Oct 07 14:13:56 mount-test kubelet[3972]: E1007 14:13:56.349988 3972 projected.go:199] Error preparing data for projected volume kube-api-access-4v5hh for pod kube-system/kube-proxy-t5qgq: configmap "kube-root-ca.crt" not found
Oct 07 14:13:56 mount-test kubelet[3972]: E1007 14:13:56.350134 3972 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/720fd3e1-9ab7-4377-b55e-fc4654aed1a3-kube-api-access-4v5hh podName:720fd3e1-9ab7-4377-b55e-fc4654aed1a3 nodeName:}" failed. No retries permitted until 2021-10-07 14:13:56.850083864 +0000 UTC m=+13.858488969 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4v5hh" (UniqueName: "kubernetes.io/projected/720fd3e1-9ab7-4377-b55e-fc4654aed1a3-kube-api-access-4v5hh") pod "kube-proxy-t5qgq" (UID: "720fd3e1-9ab7-4377-b55e-fc4654aed1a3") : configmap "kube-root-ca.crt" not found
Oct 07 14:13:56 mount-test kubelet[3972]: E1007 14:13:56.429115 3972 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Oct 07 14:13:56 mount-test kubelet[3972]: E1007 14:13:56.429520 3972 projected.go:199] Error preparing data for projected volume kube-api-access-6j6hd for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
Oct 07 14:13:56 mount-test kubelet[3972]: E1007 14:13:56.429869 3972 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/0c7d5e2e-18b0-46fe-8a22-99510e83f1ca-kube-api-access-6j6hd podName:0c7d5e2e-18b0-46fe-8a22-99510e83f1ca nodeName:}" failed. No retries permitted until 2021-10-07 14:13:57.429797893 +0000 UTC m=+14.438203036 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6j6hd" (UniqueName: "kubernetes.io/projected/0c7d5e2e-18b0-46fe-8a22-99510e83f1ca-kube-api-access-6j6hd") pod "storage-provisioner" (UID: "0c7d5e2e-18b0-46fe-8a22-99510e83f1ca") : configmap "kube-root-ca.crt" not found
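
Note: the repeated configmap "kube-root-ca.crt" not found failures are a startup race rather than a persistent fault: kube-controller-manager publishes that ConfigMap into every namespace shortly after it starts, and the kubelet retries the projected-volume mount with backoff (durationBeforeRetry 500ms, then 1s, above). Once the cluster settles, an illustrative check should find it:

    kubectl -n kube-system get configmap kube-root-ca.crt
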
Oct 07 14:13:56 mount-test kubelet[3972]: I1007 14:13:56.575433 3972 topology_manager.go:200] "Topology Admit Handler"
Oct 07 14:13:56 mount-test kubelet[3972]: I1007 14:13:56.631427 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ad8ea89-1c47-4328-a107-fd8dd9da0942-config-volume\") pod \"coredns-78fcd69978-jhxm6\" (UID: \"0ad8ea89-1c47-4328-a107-fd8dd9da0942\") "
Oct 07 14:13:56 mount-test kubelet[3972]: I1007 14:13:56.631598 3972 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb5mk\" (UniqueName: \"kubernetes.io/projected/0ad8ea89-1c47-4328-a107-fd8dd9da0942-kube-api-access-wb5mk\") pod \"coredns-78fcd69978-jhxm6\" (UID: \"0ad8ea89-1c47-4328-a107-fd8dd9da0942\") "
Oct 07 14:13:57 mount-test kubelet[3972]: I1007 14:13:57.859294 3972 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-jhxm6 through plugin: invalid network status for"
Oct 07 14:13:58 mount-test kubelet[3972]: I1007 14:13:58.344151 3972 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f023f9ba1c3d555b315e76e359102c0aef67eee3eeb14d6c0dd0a7ef298ed9c2"
Oct 07 14:13:58 mount-test kubelet[3972]: I1007 14:13:58.371113 3972 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-jhxm6 through plugin: invalid network status for"
Oct 07 14:13:58 mount-test kubelet[3972]: I1007 14:13:58.393478 3972 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d3f57cf96620f9ccbc635ca9753da45eacda12d4530b0da679648954d43bbc11"
Oct 07 14:13:59 mount-test kubelet[3972]: I1007 14:13:59.405823 3972 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-jhxm6 through plugin: invalid network status for"
*
* ==> storage-provisioner [ddc032f25175] <==
*
I1007 14:13:58.646197 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1007 14:13:58.668114 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1007 14:13:58.668190 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1007 14:13:58.679827 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1007 14:13:58.680360 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8d56c2ca-d119-49a8-bb43-78295f5c8e2b", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' mount-test_b3558cb4-72ad-46a1-8e0f-520b98fb55e1 became leader
I1007 14:13:58.680386 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_mount-test_b3558cb4-72ad-46a1-8e0f-520b98fb55e1!
I1007 14:13:58.790126 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_mount-test_b3558cb4-72ad-46a1-8e0f-520b98fb55e1!
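
Note: the storage provisioner coordinates through leader election on an Endpoints lock in kube-system (the Kind:"Endpoints" event above). Assuming the usual resource-lock convention, the current holder is recorded in the control-plane.alpha.kubernetes.io/leader annotation on that object; an illustrative inspection:

    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
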