
minikube ssh driver fails with "Failed to restart cri-docker.socket.service: Unit cri-docker.socket.service not found." #18559

Closed
msplival opened this issue Apr 1, 2024 · 16 comments
Labels
co/generic-driver, co/runtime/docker, lifecycle/rotten, priority/awaiting-more-evidence, triage/duplicate

Comments

msplival commented Apr 1, 2024

What Happened?

I wanted to set up minikube on a pre-installed KVM machine. I installed Ubuntu 22.04 in it, Docker (not from snap, but from the Docker repositories), and all the requirements mentioned in the SSH driver manual page.
Here is how I started it (note that without specifying the SSH key, minikube fails mid-process; I am not sure why):

minikube start --driver=ssh --ssh-ip-address=192.168.122.163 --ssh-user=mario --ssh-key=~/.ssh/id_rsa

However, the start process fails with:

Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo service cri-docker.socket restart: Process exited with status 5

And, indeed, there is no unit named cri-docker.socket.service. But minikube is using the 'service' command instead of systemctl. The cri-docker service itself is installed:

root@minikube:~# systemctl status cri-docker.service
● cri-docker.service - CRI Interface for Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/cri-docker.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/cri-docker.service.d
             └─10-cni.conf
     Active: active (running) since Mon 2024-04-01 19:13:33 UTC; 4min 50s ago
TriggeredBy: ● cri-docker.socket
       Docs: https://docs.mirantis.com
   Main PID: 5762 (cri-dockerd)
      Tasks: 8
     Memory: 8.9M
        CPU: 85ms
     CGroup: /system.slice/cri-docker.service
             └─5762 /usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.k8s.io/pause:3.9 --network-plugin=cni --hairpin-mode=hairpin-veth

The command 'service cri-docker.socket restart' runs the init.d service wrapper, which checks whether systemd is installed (it is), appends .service to the unit name (cri-docker.socket in this case), and then runs 'systemctl restart cri-docker.socket.service', a unit which does not exist.
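
This is easy to demonstrate directly on the target host. A sketch of the two invocations (output abbreviated; the second command is what I would expect minikube to run):

# Through the init.d wrapper, ".service" gets appended, so restarting a
# socket unit fails:
sudo service cri-docker.socket restart
# Failed to restart cri-docker.socket.service: Unit cri-docker.socket.service not found.

# Calling systemctl with the real unit name works:
sudo systemctl restart cri-docker.socket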

Attach the log file

I cannot get a log file via minikube, but here is an excerpt (the end) of the minikube log when started with --alsologtostderr:

I0401 21:11:41.283081  453000 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0401 21:11:41.283155  453000 ssh_runner.go:195] Run: sudo service crio status
I0401 21:11:41.294571  453000 command_runner.go:130] ! Unit crio.service could not be found.
I0401 21:11:41.295920  453000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0401 21:11:41.309007  453000 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0401 21:11:41.309786  453000 ssh_runner.go:195] Run: which cri-dockerd
I0401 21:11:41.312344  453000 command_runner.go:130] > /usr/bin/cri-dockerd
I0401 21:11:41.312409  453000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0401 21:11:41.319295  453000 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0401 21:11:41.334627  453000 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
I0401 21:11:41.334706  453000 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0401 21:11:41.352218  453000 ssh_runner.go:195] Run: sudo service docker restart
I0401 21:11:42.229016  453000 openrc.go:158] restart output: 
I0401 21:11:42.229106  453000 ssh_runner.go:195] Run: sudo service cri-docker.socket restart
I0401 21:11:42.245170  453000 command_runner.go:130] ! Failed to restart cri-docker.socket.service: Unit cri-docker.socket.service not found.
I0401 21:11:42.247766  453000 out.go:177] 

W0401 21:11:42.249380  453000 out.go:239] ❌  Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo service cri-docker.socket restart: Process exited with status 5
stdout:

stderr:
Failed to restart cri-docker.socket.service: Unit cri-docker.socket.service not found.

Operating System

Ubuntu

Driver

SSH

msplival (Author) commented Apr 1, 2024

Eh, I just realized this is very similar to this issue: #15413

spowelljr (Member) commented Apr 1, 2024

msplival (Author) commented Apr 1, 2024

Hi @spowelljr, thank you for your response.

The PR you linked seems to address a different issue. The problem here is that minikube thinks the target is running OpenRC rather than systemd, so it invokes 'service this-n-that restart' instead of 'systemctl restart this-n-that'.

Or am I wrong?
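
Supporting that reading, the restart calls in the --alsologtostderr output are attributed to minikube's OpenRC code path (a sketch; /tmp/output.txt is the file where I captured the log):

# Both service restarts in the log come from openrc.go:
grep 'openrc.go' /tmp/output.txt
# I0401 21:11:41.269242  453000 openrc.go:158] restart output:
# I0401 21:11:42.229016  453000 openrc.go:158] restart output: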

afbjorklund (Collaborator) commented Apr 2, 2024

> And, indeed, there is no service unit file for cri-docker.socket.

You seem to be missing some unit files from the cri-dockerd installation; the upstream files are here:

https://github.com/Mirantis/cri-dockerd/tree/master/packaging/systemd

But minikube still seems to be calling service rather than systemctl.
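
One quick way to see what the installation actually provides on the guest (a sketch; the package name cri-dockerd is an assumption based on the upstream .deb):

# List any cri-docker units systemd knows about:
systemctl list-unit-files 'cri-docker*'

# And the files shipped by the Debian package:
dpkg -L cri-dockerd | grep systemd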

afbjorklund added the co/generic-driver and co/runtime/docker labels on Apr 2, 2024
afbjorklund (Collaborator) commented Apr 2, 2024
The check for systemd is quite simple; it just runs systemctl --version:

// usesSystemd reports whether systemctl is available on the target host.
func usesSystemd(r Runner) bool {
        _, err := r.RunCmd(exec.Command("systemctl", "--version"))
        return err == nil
}

The debug log should have the full output showing why that command is failing.
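
The same probe can also be replicated by hand over plain SSH (a sketch; user, IP, and key are taken from the report above):

# minikube treats a zero exit status here as "the guest uses systemd":
ssh -i ~/.ssh/id_rsa mario@192.168.122.163 -- systemctl --version
echo $?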

msplival (Author) commented Apr 2, 2024

Yes, the check is simple, and it seems to pass on my setup - I see the command being run, but I don't see any errors from it.

Here is the log of the whole run, taken from the output of the minikube command with --alsologtostderr:

mario@BUNTOR ~> cat /tmp/output.txt 
mario@BUNTOR ~> minikube start --driver=ssh --ssh-ip-address=192.168.122.163 --ssh-user=mario --ssh-key=~/.ssh/id_rsa  --alsologtostderr -v=8
I0401 21:11:30.178394  453000 out.go:296] Setting OutFile to fd 1 ...
I0401 21:11:30.178605  453000 out.go:348] isatty.IsTerminal(1) = true
I0401 21:11:30.178612  453000 out.go:309] Setting ErrFile to fd 2...
I0401 21:11:30.178627  453000 out.go:348] isatty.IsTerminal(2) = true
I0401 21:11:30.178847  453000 root.go:338] Updating PATH: /home/mario/.minikube/bin
W0401 21:11:30.178991  453000 root.go:314] Error reading config file at /home/mario/.minikube/config/config.json: open /home/mario/.minikube/config/config.json: no such file or directory
I0401 21:11:30.179391  453000 out.go:303] Setting JSON to false
I0401 21:11:30.180673  453000 start.go:128] hostinfo: {"hostname":"buntor","uptime":95761,"bootTime":1711902929,"procs":539,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"bookworm/sid","kernelVersion":"6.5.0-26-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"586f99fb-c25b-4969-a53a-99c34e7c8baf"}
I0401 21:11:30.180715  453000 start.go:138] virtualization: kvm host
I0401 21:11:30.183587  453000 out.go:177] 😄  minikube v1.32.0 on Debian bookworm/sid
😄  minikube v1.32.0 on Debian bookworm/sid
W0401 21:11:30.185655  453000 preload.go:295] Failed to list preload files: open /home/mario/.minikube/cache/preloaded-tarball: no such file or directory
I0401 21:11:30.185721  453000 notify.go:220] Checking for updates...
I0401 21:11:30.185795  453000 driver.go:378] Setting default libvirt URI to qemu:///system
I0401 21:11:30.187303  453000 out.go:177] ✨  Using the ssh driver based on user configuration
✨  Using the ssh driver based on user configuration
I0401 21:11:30.188704  453000 start.go:298] selected driver: ssh
I0401 21:11:30.188723  453000 start.go:902] validating driver "ssh" against <nil>
I0401 21:11:30.188731  453000 start.go:913] status for ssh: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0401 21:11:30.188986  453000 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
I0401 21:11:30.189490  453000 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32036MB, container=0MB
I0401 21:11:30.189608  453000 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
I0401 21:11:30.189635  453000 cni.go:84] Creating CNI manager for ""
I0401 21:11:30.189645  453000 cni.go:158] "ssh" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0401 21:11:30.189660  453000 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0401 21:11:30.189679  453000 start_flags.go:323] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:ssh HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress:192.168.122.163 SSHUser:mario SSHKey:~/.ssh/id_rsa SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/mario:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0401 21:11:30.191251  453000 out.go:177] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0401 21:11:30.192544  453000 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I0401 21:11:30.192754  453000 cache.go:107] acquiring lock: {Name:mkb16b2a4f57eff285eba66f9d9f0f0da3a3a128 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 21:11:30.192763  453000 cache.go:107] acquiring lock: {Name:mkdd62752945bd6a6f4a5ff162012ff7aa4d0825 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 21:11:30.192808  453000 cache.go:107] acquiring lock: {Name:mk46782363fc83682f8c6ecdbfb960d99ebedd1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 21:11:30.192829  453000 cache.go:107] acquiring lock: {Name:mk56ac72c5069617aab98d8f4973466c35e9958d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 21:11:30.192835  453000 cache.go:107] acquiring lock: {Name:mkd867a40f52ce0b343a7aa150f325ba3b864ac9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 21:11:30.192863  453000 profile.go:148] Saving config to /home/mario/.minikube/profiles/minikube/config.json ...
I0401 21:11:30.192879  453000 cache.go:107] acquiring lock: {Name:mk476f40b161011695aa1d2f7c84f8b936737a07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 21:11:30.192908  453000 lock.go:35] WriteFile acquiring /home/mario/.minikube/profiles/minikube/config.json: {Name:mk7fc10aa42c3e6d93591b6a8a558dbc720a1b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 21:11:30.192905  453000 cache.go:107] acquiring lock: {Name:mk61eec7d884b0762a2e4bdad89b332a1a6cd3f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 21:11:30.192932  453000 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
I0401 21:11:30.192954  453000 image.go:134] retrieving image: registry.k8s.io/pause:3.9
I0401 21:11:30.192964  453000 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
I0401 21:11:30.192996  453000 start.go:365] acquiring machines lock for minikube: {Name:mke4ca674cee3c777264000e382a701880d331f2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0401 21:11:30.193005  453000 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0401 21:11:30.192935  453000 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
I0401 21:11:30.192956  453000 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
I0401 21:11:30.193158  453000 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0401 21:11:30.193169  453000 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0401 21:11:30.193155  453000 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0401 21:11:30.192938  453000 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
I0401 21:11:30.193190  453000 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0401 21:11:30.193018  453000 start.go:369] acquired machines lock for "minikube" in 11.298µs
I0401 21:11:30.193236  453000 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0401 21:11:30.193220  453000 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:ssh HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress:192.168.122.163 SSHUser:mario SSHKey:~/.ssh/id_rsa SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/mario:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0401 21:11:30.193274  453000 start.go:125] createHost starting for "" (driver="ssh")
I0401 21:11:30.193158  453000 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0401 21:11:30.192854  453000 cache.go:107] acquiring lock: {Name:mkb3079065e598c5f73b70b9f793d0d4b3336bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 21:11:30.193307  453000 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0401 21:11:30.193309  453000 ssh_runner.go:195] Run: systemctl --version
I0401 21:11:30.193379  453000 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
I0401 21:11:30.193420  453000 retry.go:31] will retry after 155.698216ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: IP address is not set
I0401 21:11:30.193520  453000 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0401 21:11:30.349709  453000 retry.go:31] will retry after 395.16608ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: IP address is not set
I0401 21:11:30.746207  453000 retry.go:31] will retry after 843.139745ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: IP address is not set
I0401 21:11:30.792948  453000 cache.go:162] opening:  /home/mario/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
I0401 21:11:30.804892  453000 cache.go:162] opening:  /home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
I0401 21:11:30.805483  453000 cache.go:162] opening:  /home/mario/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
I0401 21:11:30.810131  453000 cache.go:162] opening:  /home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
I0401 21:11:30.825536  453000 cache.go:162] opening:  /home/mario/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
I0401 21:11:30.837072  453000 cache.go:162] opening:  /home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
I0401 21:11:30.856332  453000 cache.go:162] opening:  /home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
I0401 21:11:30.979207  453000 cache.go:157] /home/mario/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
I0401 21:11:30.979278  453000 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/mario/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 786.388156ms
I0401 21:11:30.979336  453000 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/mario/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
I0401 21:11:31.542824  453000 cache.go:162] opening:  /home/mario/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I0401 21:11:31.590053  453000 retry.go:31] will retry after 536.937137ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: IP address is not set
I0401 21:11:32.127694  453000 start.go:159] libmachine.API.Create for "minikube" (driver="ssh")
I0401 21:11:32.127717  453000 client.go:168] LocalClient.Create starting
I0401 21:11:32.127810  453000 main.go:141] libmachine: Creating CA: /home/mario/.minikube/certs/ca.pem
I0401 21:11:32.262553  453000 main.go:141] libmachine: Creating client certificate: /home/mario/.minikube/certs/cert.pem
I0401 21:11:32.571606  453000 main.go:141] libmachine: Importing SSH key...
I0401 21:11:32.571779  453000 ssh_runner.go:195] Run: groups mario
I0401 21:11:32.571791  453000 sshutil.go:53] new ssh client: &{IP:192.168.122.163 Port:22 SSHKeyPath:/home/mario/.minikube/machines/minikube/id_rsa Username:mario}
I0401 21:11:32.655085  453000 cache.go:157] /home/mario/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0401 21:11:32.655108  453000 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/mario/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.462360057s
I0401 21:11:32.655119  453000 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/mario/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0401 21:11:32.959900  453000 command_runner.go:130] > mario : mario adm cdrom sudo dip plugdev lxd docker
I0401 21:11:32.959950  453000 main.go:141] libmachine: IP: 192.168.122.163
I0401 21:11:32.960218  453000 machine.go:88] provisioning docker machine ...
I0401 21:11:32.960232  453000 main.go:141] libmachine: Waiting for SSH to be available...
I0401 21:11:32.960238  453000 main.go:141] libmachine: Getting to WaitForSSH function...
I0401 21:11:32.960269  453000 main.go:141] libmachine: Using SSH client type: native
I0401 21:11:32.960541  453000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} mario [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.122.163 22 <nil> <nil>}
I0401 21:11:32.960551  453000 main.go:141] libmachine: About to run SSH command:
exit 0
I0401 21:11:33.114279  453000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0401 21:11:33.114303  453000 main.go:141] libmachine: Detecting the provisioner...
I0401 21:11:33.114341  453000 main.go:141] libmachine: Using SSH client type: native
I0401 21:11:33.114712  453000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} mario [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.122.163 22 <nil> <nil>}
I0401 21:11:33.114724  453000 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0401 21:11:33.266853  453000 main.go:141] libmachine: SSH cmd err, output: <nil>: PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

I0401 21:11:33.266887  453000 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0401 21:11:33.266903  453000 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0401 21:11:33.266911  453000 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0401 21:11:33.266934  453000 main.go:141] libmachine: found compatible host: ubuntu
I0401 21:11:33.266941  453000 ubuntu.go:169] provisioning hostname "minikube"
I0401 21:11:33.266976  453000 main.go:141] libmachine: Using SSH client type: native
I0401 21:11:33.267259  453000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} mario [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.122.163 22 <nil> <nil>}
I0401 21:11:33.267268  453000 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0401 21:11:33.440256  453000 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0401 21:11:33.440362  453000 main.go:141] libmachine: Using SSH client type: native
I0401 21:11:33.440920  453000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} mario [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.122.163 22 <nil> <nil>}
I0401 21:11:33.440959  453000 main.go:141] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else 
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
                        fi
                fi
I0401 21:11:33.603368  453000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0401 21:11:33.603389  453000 ubuntu.go:175] set auth options {CertDir:/home/mario/.minikube CaCertPath:/home/mario/.minikube/certs/ca.pem CaPrivateKeyPath:/home/mario/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/mario/.minikube/machines/server.pem ServerKeyPath:/home/mario/.minikube/machines/server-key.pem ClientKeyPath:/home/mario/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/mario/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/mario/.minikube}
I0401 21:11:33.603410  453000 ubuntu.go:177] setting up certificates
I0401 21:11:33.603416  453000 provision.go:83] configureAuth start
I0401 21:11:33.603423  453000 provision.go:138] copyHostCerts
I0401 21:11:33.603438  453000 vm_assets.go:163] NewFileAsset: /home/mario/.minikube/certs/ca.pem -> /home/mario/.minikube/ca.pem
I0401 21:11:33.603473  453000 exec_runner.go:151] cp: /home/mario/.minikube/certs/ca.pem --> /home/mario/.minikube/ca.pem (1074 bytes)
I0401 21:11:33.603541  453000 vm_assets.go:163] NewFileAsset: /home/mario/.minikube/certs/cert.pem -> /home/mario/.minikube/cert.pem
I0401 21:11:33.603568  453000 exec_runner.go:151] cp: /home/mario/.minikube/certs/cert.pem --> /home/mario/.minikube/cert.pem (1119 bytes)
I0401 21:11:33.603611  453000 vm_assets.go:163] NewFileAsset: /home/mario/.minikube/certs/key.pem -> /home/mario/.minikube/key.pem
I0401 21:11:33.603630  453000 exec_runner.go:151] cp: /home/mario/.minikube/certs/key.pem --> /home/mario/.minikube/key.pem (1679 bytes)
I0401 21:11:33.603675  453000 provision.go:112] generating server cert: /home/mario/.minikube/machines/server.pem ca-key=/home/mario/.minikube/certs/ca.pem private-key=/home/mario/.minikube/certs/ca-key.pem org=mario.minikube san=[192.168.122.163 192.168.122.163 localhost 127.0.0.1 minikube minikube]
I0401 21:11:33.720467  453000 provision.go:172] copyRemoteCerts
I0401 21:11:33.720515  453000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0401 21:11:33.720528  453000 sshutil.go:53] new ssh client: &{IP:192.168.122.163 Port:22 SSHKeyPath:/home/mario/.minikube/machines/minikube/id_rsa Username:mario}
I0401 21:11:33.744153  453000 cache.go:157] /home/mario/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 exists
I0401 21:11:33.744177  453000 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/home/mario/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.551345524s
I0401 21:11:33.744192  453000 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /home/mario/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
I0401 21:11:33.848646  453000 vm_assets.go:163] NewFileAsset: /home/mario/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0401 21:11:33.848703  453000 ssh_runner.go:362] scp /home/mario/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I0401 21:11:33.875642  453000 vm_assets.go:163] NewFileAsset: /home/mario/.minikube/machines/server.pem -> /etc/docker/server.pem
I0401 21:11:33.875699  453000 ssh_runner.go:362] scp /home/mario/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0401 21:11:33.901347  453000 vm_assets.go:163] NewFileAsset: /home/mario/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0401 21:11:33.901441  453000 ssh_runner.go:362] scp /home/mario/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0401 21:11:33.927474  453000 provision.go:86] duration metric: configureAuth took 324.048019ms
I0401 21:11:33.927493  453000 ubuntu.go:193] setting minikube options for container-runtime
I0401 21:11:33.927617  453000 config.go:182] Loaded profile config "minikube": Driver=ssh, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I0401 21:11:33.927666  453000 main.go:141] libmachine: Using SSH client type: native
I0401 21:11:33.927948  453000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} mario [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.122.163 22 <nil> <nil>}
I0401 21:11:33.927958  453000 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0401 21:11:34.091595  453000 main.go:141] libmachine: SSH cmd err, output: <nil>: ext4

I0401 21:11:34.091616  453000 ubuntu.go:71] root file system type: ext4
I0401 21:11:34.091779  453000 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0401 21:11:34.091849  453000 main.go:141] libmachine: Using SSH client type: native
I0401 21:11:34.092359  453000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} mario [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.122.163 22 <nil> <nil>}
I0401 21:11:34.092424  453000 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=ssh --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0401 21:11:34.272042  453000 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=ssh --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0401 21:11:34.272128  453000 main.go:141] libmachine: Using SSH client type: native
I0401 21:11:34.272451  453000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} mario [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.122.163 22 <nil> <nil>}
I0401 21:11:34.272469  453000 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0401 21:11:34.436966  453000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0401 21:11:34.436986  453000 machine.go:91] provisioned docker machine in 1.476757899s
I0401 21:11:34.436999  453000 client.go:171] LocalClient.Create took 2.309274389s
I0401 21:11:34.437016  453000 start.go:167] duration metric: libmachine.API.Create for "minikube" took 2.30932408s
I0401 21:11:34.437063  453000 ssh_runner.go:195] Run: nproc
I0401 21:11:34.437080  453000 sshutil.go:53] new ssh client: &{IP:192.168.122.163 Port:22 SSHKeyPath:/home/mario/.minikube/machines/minikube/id_rsa Username:mario}
I0401 21:11:34.559066  453000 command_runner.go:130] > 2
I0401 21:11:34.559116  453000 ssh_runner.go:195] Run: free -m
I0401 21:11:34.562374  453000 command_runner.go:130] >                total        used        free      shared  buff/cache   available
I0401 21:11:34.562391  453000 command_runner.go:130] > Mem:            7937         280        7117           1         539        7411
I0401 21:11:34.562398  453000 command_runner.go:130] > Swap:           4095           0        4095
I0401 21:11:34.562438  453000 ssh_runner.go:195] Run: df -m
I0401 21:11:34.565278  453000 command_runner.go:130] > Filesystem     1M-blocks  Used Available Use% Mounted on
I0401 21:11:34.565300  453000 command_runner.go:130] > tmpfs                794     2       793   1% /run
I0401 21:11:34.565316  453000 command_runner.go:130] > /dev/vda2          60165  8066     49012  15% /
I0401 21:11:34.565321  453000 command_runner.go:130] > tmpfs               3969     0      3969   0% /dev/shm
I0401 21:11:34.565331  453000 command_runner.go:130] > tmpfs                  5     0         5   0% /run/lock
I0401 21:11:34.565340  453000 command_runner.go:130] > tmpfs                794     1       794   1% /run/user/1000
I0401 21:11:34.567740  453000 out.go:177] 🔗  Running remotely (CPUs=2, Memory=7937MB, Disk=60165MB) ...
🔗  Running remotely (CPUs=2, Memory=7937MB, Disk=60165MB) ...
I0401 21:11:34.569607  453000 start.go:300] post-start starting for "minikube" (driver="ssh")
I0401 21:11:34.569630  453000 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0401 21:11:34.569686  453000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0401 21:11:34.569707  453000 sshutil.go:53] new ssh client: &{IP:192.168.122.163 Port:22 SSHKeyPath:/home/mario/.minikube/machines/minikube/id_rsa Username:mario}
I0401 21:11:34.688288  453000 ssh_runner.go:195] Run: cat /etc/os-release
I0401 21:11:34.691086  453000 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
I0401 21:11:34.691097  453000 command_runner.go:130] > NAME="Ubuntu"
I0401 21:11:34.691103  453000 command_runner.go:130] > VERSION_ID="22.04"
I0401 21:11:34.691112  453000 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
I0401 21:11:34.691120  453000 command_runner.go:130] > VERSION_CODENAME=jammy
I0401 21:11:34.691126  453000 command_runner.go:130] > ID=ubuntu
I0401 21:11:34.691133  453000 command_runner.go:130] > ID_LIKE=debian
I0401 21:11:34.691140  453000 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
I0401 21:11:34.691150  453000 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
I0401 21:11:34.691161  453000 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
I0401 21:11:34.691173  453000 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
I0401 21:11:34.691181  453000 command_runner.go:130] > UBUNTU_CODENAME=jammy
I0401 21:11:34.691234  453000 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0401 21:11:34.691268  453000 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0401 21:11:34.691285  453000 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0401 21:11:34.691294  453000 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0401 21:11:34.691307  453000 filesync.go:126] Scanning /home/mario/.minikube/addons for local assets ...
I0401 21:11:34.691370  453000 filesync.go:126] Scanning /home/mario/.minikube/files for local assets ...
I0401 21:11:34.691400  453000 start.go:303] post-start completed in 121.780847ms
I0401 21:11:34.691686  453000 profile.go:148] Saving config to /home/mario/.minikube/profiles/minikube/config.json ...
I0401 21:11:34.691813  453000 start.go:128] duration metric: createHost completed in 4.498532893s
I0401 21:11:34.691854  453000 main.go:141] libmachine: Using SSH client type: native
I0401 21:11:34.692189  453000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} mario [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.122.163 22 <nil> <nil>}
I0401 21:11:34.692204  453000 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0401 21:11:34.850828  453000 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711998694.857943792

I0401 21:11:34.850843  453000 fix.go:206] guest clock: 1711998694.857943792
I0401 21:11:34.850849  453000 fix.go:219] Guest: 2024-04-01 21:11:34.857943792 +0200 CEST Remote: 2024-04-01 21:11:34.691820483 +0200 CEST m=+4.549415301 (delta=166.123309ms)
I0401 21:11:34.850863  453000 fix.go:190] guest clock delta is within tolerance: 166.123309ms
I0401 21:11:34.850867  453000 start.go:83] releasing machines lock for "minikube", held for 4.65765472s
I0401 21:11:34.851252  453000 ssh_runner.go:195] Run: cat /version.json
I0401 21:11:34.851262  453000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0401 21:11:34.851273  453000 sshutil.go:53] new ssh client: &{IP:192.168.122.163 Port:22 SSHKeyPath:/home/mario/.minikube/machines/minikube/id_rsa Username:mario}
I0401 21:11:34.851290  453000 sshutil.go:53] new ssh client: &{IP:192.168.122.163 Port:22 SSHKeyPath:/home/mario/.minikube/machines/minikube/id_rsa Username:mario}
I0401 21:11:34.982705  453000 command_runner.go:130] ! cat: /version.json: No such file or directory
W0401 21:11:34.982732  453000 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
stdout:

stderr:
cat: /version.json: No such file or directory
I0401 21:11:35.137215  453000 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0401 21:11:35.593343  453000 cache.go:157] /home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 exists
I0401 21:11:35.593374  453000 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.3" -> "/home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3" took 5.400613752s
I0401 21:11:35.593395  453000 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.3 -> /home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 succeeded
I0401 21:11:35.837938  453000 cache.go:157] /home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 exists
I0401 21:11:35.837956  453000 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.3" -> "/home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3" took 5.645152446s
I0401 21:11:35.837976  453000 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.3 -> /home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 succeeded
I0401 21:11:36.056709  453000 cache.go:157] /home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 exists
I0401 21:11:36.056831  453000 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.3" -> "/home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3" took 5.863915173s
I0401 21:11:36.057035  453000 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.3 -> /home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 succeeded
I0401 21:11:36.243738  453000 cache.go:157] /home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 exists
I0401 21:11:36.243800  453000 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.3" -> "/home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3" took 6.050980347s
I0401 21:11:36.243855  453000 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.3 -> /home/mario/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 succeeded
I0401 21:11:41.048731  453000 cache.go:157] /home/mario/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 exists
I0401 21:11:41.048791  453000 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/home/mario/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0" took 10.856001763s
I0401 21:11:41.048846  453000 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /home/mario/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 succeeded
I0401 21:11:41.048911  453000 cache.go:87] Successfully saved all images to host disk.
I0401 21:11:41.049116  453000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0401 21:11:41.064845  453000 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0401 21:11:41.066183  453000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0401 21:11:41.066405  453000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0401 21:11:41.102722  453000 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0401 21:11:41.102786  453000 start.go:472] detecting cgroup driver to use...
I0401 21:11:41.103033  453000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0401 21:11:41.118979  453000 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0401 21:11:41.119079  453000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0401 21:11:41.127762  453000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0401 21:11:41.136161  453000 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0401 21:11:41.136238  453000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0401 21:11:41.144605  453000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0401 21:11:41.153590  453000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0401 21:11:41.164316  453000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0401 21:11:41.173481  453000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0401 21:11:41.181543  453000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0401 21:11:41.189920  453000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0401 21:11:41.196745  453000 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0401 21:11:41.196826  453000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0401 21:11:41.204168  453000 ssh_runner.go:195] Run: sudo service containerd restart
I0401 21:11:41.269242  453000 openrc.go:158] restart output: 
I0401 21:11:41.269274  453000 start.go:472] detecting cgroup driver to use...
I0401 21:11:41.269349  453000 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0401 21:11:41.277489  453000 command_runner.go:130] > # /lib/systemd/system/docker.service
I0401 21:11:41.277881  453000 command_runner.go:130] > [Unit]
I0401 21:11:41.277891  453000 command_runner.go:130] > Description=Docker Application Container Engine
I0401 21:11:41.277903  453000 command_runner.go:130] > Documentation=https://docs.docker.com
I0401 21:11:41.277913  453000 command_runner.go:130] > BindsTo=containerd.service
I0401 21:11:41.277923  453000 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
I0401 21:11:41.277932  453000 command_runner.go:130] > Wants=network-online.target
I0401 21:11:41.277941  453000 command_runner.go:130] > Requires=docker.socket
I0401 21:11:41.277949  453000 command_runner.go:130] > StartLimitBurst=3
I0401 21:11:41.277957  453000 command_runner.go:130] > StartLimitIntervalSec=60
I0401 21:11:41.277964  453000 command_runner.go:130] > [Service]
I0401 21:11:41.277971  453000 command_runner.go:130] > Type=notify
I0401 21:11:41.277978  453000 command_runner.go:130] > Restart=on-failure
I0401 21:11:41.277991  453000 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0401 21:11:41.278005  453000 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0401 21:11:41.278026  453000 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0401 21:11:41.278039  453000 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0401 21:11:41.278051  453000 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0401 21:11:41.278063  453000 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0401 21:11:41.278076  453000 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0401 21:11:41.278091  453000 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0401 21:11:41.278103  453000 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0401 21:11:41.278111  453000 command_runner.go:130] > ExecStart=
I0401 21:11:41.278136  453000 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=ssh --insecure-registry 10.96.0.0/12 
I0401 21:11:41.278147  453000 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0401 21:11:41.278160  453000 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0401 21:11:41.278171  453000 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0401 21:11:41.278179  453000 command_runner.go:130] > LimitNOFILE=infinity
I0401 21:11:41.278187  453000 command_runner.go:130] > LimitNPROC=infinity
I0401 21:11:41.278195  453000 command_runner.go:130] > LimitCORE=infinity
I0401 21:11:41.278205  453000 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0401 21:11:41.278215  453000 command_runner.go:130] > # Only systemd 226 and above support this version.
I0401 21:11:41.278223  453000 command_runner.go:130] > TasksMax=infinity
I0401 21:11:41.278235  453000 command_runner.go:130] > TimeoutStartSec=0
I0401 21:11:41.278247  453000 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0401 21:11:41.278255  453000 command_runner.go:130] > Delegate=yes
I0401 21:11:41.278266  453000 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0401 21:11:41.278274  453000 command_runner.go:130] > KillMode=process
I0401 21:11:41.278281  453000 command_runner.go:130] > [Install]
I0401 21:11:41.278292  453000 command_runner.go:130] > WantedBy=multi-user.target
I0401 21:11:41.283081  453000 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0401 21:11:41.283155  453000 ssh_runner.go:195] Run: sudo service crio status
I0401 21:11:41.294571  453000 command_runner.go:130] ! Unit crio.service could not be found.
I0401 21:11:41.295920  453000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0401 21:11:41.309007  453000 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0401 21:11:41.309786  453000 ssh_runner.go:195] Run: which cri-dockerd
I0401 21:11:41.312344  453000 command_runner.go:130] > /usr/bin/cri-dockerd
I0401 21:11:41.312409  453000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0401 21:11:41.319295  453000 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0401 21:11:41.334627  453000 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
I0401 21:11:41.334706  453000 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0401 21:11:41.352218  453000 ssh_runner.go:195] Run: sudo service docker restart
I0401 21:11:42.229016  453000 openrc.go:158] restart output: 
I0401 21:11:42.229106  453000 ssh_runner.go:195] Run: sudo service cri-docker.socket restart
I0401 21:11:42.245170  453000 command_runner.go:130] ! Failed to restart cri-docker.socket.service: Unit cri-docker.socket.service not found.
I0401 21:11:42.247766  453000 out.go:177] 

W0401 21:11:42.249380  453000 out.go:239] ❌  Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo service cri-docker.socket restart: Process exited with status 5
stdout:

stderr:
Failed to restart cri-docker.socket.service: Unit cri-docker.socket.service not found.

❌  Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo service cri-docker.socket restart: Process exited with status 5
stdout:

stderr:
Failed to restart cri-docker.socket.service: Unit cri-docker.socket.service not found.

W0401 21:11:42.249407  453000 out.go:239] 

W0401 21:11:42.250113  453000 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                              │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                              │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0401 21:11:42.251864  453000 out.go:177] 

mario@BUNTOR ~> 

afbjorklund (Collaborator) commented Apr 2, 2024
I seem to be wrong about the logging part. Might have to run minikube ssh -- systemctl --version

msplival (Author) commented Apr 2, 2024

Re: the incomplete cri-dockerd installation: I installed the latest .deb from https://github.com/Mirantis/cri-dockerd/releases

The installation did install those two unit files:

/.
/lib
/lib/systemd
/lib/systemd/system
/lib/systemd/system/cri-docker.service
/lib/systemd/system/cri-docker.socket
/usr
/usr/bin
/usr/bin/cri-dockerd
/usr/share
/usr/share/doc
/usr/share/doc/cri-dockerd
/usr/share/doc/cri-dockerd/changelog.Debian.gz
mario@minikube:~$ cat /lib/systemd/system/cri-docker.service /lib/systemd/system/cri-docker.socket
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
mario@minikube:~$ systemctl status cri-docker.service cri-docker.socket 
● cri-docker.service - CRI Interface for Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/cri-docker.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/cri-docker.service.d
             └─10-cni.conf
     Active: active (running) since Mon 2024-04-01 19:13:33 UTC; 11h ago
TriggeredBy: ● cri-docker.socket
       Docs: https://docs.mirantis.com
   Main PID: 5762 (cri-dockerd)
      Tasks: 8
     Memory: 9.5M
        CPU: 9.963s
     CGroup: /system.slice/cri-docker.service
             └─5762 /usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.k8s.io/pause:3.9 --network-plugin=cni --hairpin-mode=hairpin-veth

Apr 01 19:13:33 minikube cri-dockerd[5762]: time="2024-04-01T19:13:33Z" level=info msg="Hairpin mode is set to hairpin-veth"
Apr 01 19:13:33 minikube cri-dockerd[5762]: time="2024-04-01T19:13:33Z" level=info msg="The binary conntrack is not installed, this can cause failures in network connection cleanup."
Apr 01 19:13:33 minikube cri-dockerd[5762]: time="2024-04-01T19:13:33Z" level=info msg="The binary conntrack is not installed, this can cause failures in network connection cleanup."
Apr 01 19:13:33 minikube cri-dockerd[5762]: time="2024-04-01T19:13:33Z" level=info msg="Loaded network plugin cni"
Apr 01 19:13:33 minikube cri-dockerd[5762]: time="2024-04-01T19:13:33Z" level=info msg="Docker cri networking managed by network plugin cni"
Apr 01 19:13:33 minikube cri-dockerd[5762]: time="2024-04-01T19:13:33Z" level=info msg="Setting cgroupDriver cgroupfs"
Apr 01 19:13:33 minikube cri-dockerd[5762]: time="2024-04-01T19:13:33Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Apr 01 19:13:33 minikube cri-dockerd[5762]: time="2024-04-01T19:13:33Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Apr 01 19:13:33 minikube cri-dockerd[5762]: time="2024-04-01T19:13:33Z" level=info msg="Start cri-dockerd grpc backend"
Apr 01 19:13:33 minikube systemd[1]: Started CRI Interface for Docker Application Container Engine.

● cri-docker.socket - CRI Docker Socket for the API
     Loaded: loaded (/lib/systemd/system/cri-docker.socket; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2024-04-01 19:13:33 UTC; 11h ago
   Triggers: ● cri-docker.service
     Listen: /run/cri-dockerd.sock (Stream)
      Tasks: 0 (limit: 9389)
     Memory: 0B
        CPU: 421us
     CGroup: /system.slice/cri-docker.socket

Apr 01 19:13:33 minikube systemd[1]: Starting CRI Docker Socket for the API...
Apr 01 19:13:33 minikube systemd[1]: Listening on CRI Docker Socket for the API.
mario@minikube:~$ 
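For completeness: given the units above, restarting the socket unit via systemctl directly (bypassing the service wrapper) should succeed. A quick check:

sudo systemctl restart cri-docker.socket
systemctl is-active cri-docker.socket cri-docker.service
# expected: active, twice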

@msplival
Author

msplival commented Apr 2, 2024

I seem to be wrong about the logging part. Might have to run minikube ssh -- systemctl --version

mario@BUNTOR ~> minikube ssh -- systemctl --version
systemd 249 (249.11-0ubuntu3.12)
+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
mario@BUNTOR ~> 

@afbjorklund
Collaborator

It is theoretically possible to always run the service, but it seemed like a good idea to have it socket-activated.
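For anyone unfamiliar with socket activation, a rough way to observe it on this setup (a sketch; assumes curl is installed and uses the socket path from the status output above — note the socket unit is PartOf=cri-docker.service, so stopping the service also stops the socket):

sudo systemctl stop cri-docker.service    # also stops the socket via PartOf
sudo systemctl start cri-docker.socket    # socket listens again, service still down
sudo curl -s --max-time 2 --unix-socket /run/cri-dockerd.sock http://localhost/ || true
systemctl is-active cri-docker.service    # expected: active, started on demand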

@afbjorklund
Collaborator

afbjorklund commented Apr 2, 2024

I think it is the same bug as in the other issue, with ssh being called before the IP is known:

The generic/ssh driver isn't getting much love these days.

i.e., normally the kvm driver would be the default here
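For reference, the kvm driver route would be something along these lines (a sketch; assumes libvirt/qemu are already set up on the host):

minikube start --driver=kvm2
minikube config set driver kvm2   # optionally make it the default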

@afbjorklund afbjorklund added triage/duplicate Indicates an issue is a duplicate of other open issue. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Apr 2, 2024
@msplival
Author

msplival commented Apr 2, 2024

This part is also weird. Here is an excerpt from /var/log/auth.log, where all sudo commands on the minikube VM are logged:

Apr  1 18:17:55 minikube sudo:    mario : PWD=/home/mario ; USER=root ; COMMAND=/usr/bin/mkdir -p /etc/systemd/system/cri-docker.service.d
Apr  1 18:17:55 minikube sudo:    mario : PWD=/home/mario ; USER=root ; COMMAND=/usr/bin/test -d /etc/systemd/system/cri-docker.service.d
Apr  1 18:17:55 minikube sudo:    mario : PWD=/home/mario ; USER=root ; COMMAND=/usr/bin/scp -t /etc/systemd/system/cri-docker.service.d
Apr  1 18:17:56 minikube sudo:    mario : PWD=/home/mario ; USER=root ; COMMAND=/usr/sbin/service cri-docker.socket restart
Apr  1 18:18:24 minikube sudo:    mario : PWD=/home/mario ; USER=root ; COMMAND=/usr/bin/journalctl -u docker -u cri-docker -n 60
Apr  1 18:20:19 minikube sudo:    mario : PWD=/home/mario ; USER=root ; COMMAND=/usr/bin/env PATH=/var/lib/minikube/binaries/v1.28.3:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
Apr  1 18:21:21 minikube sudo:    mario : TTY=pts/0 ; PWD=/etc/systemd/system/docker.service.d ; USER=root ; COMMAND=/usr/bin/systemctl start cri-docker
Apr  1 18:21:28 minikube sudo:    mario : TTY=pts/0 ; PWD=/etc/systemd/system/docker.service.d ; USER=root ; COMMAND=/usr/bin/systemctl start cri-docker
Apr  1 19:11:41 minikube sudo:    mario : PWD=/home/mario ; USER=root ; COMMAND=/usr/bin/mkdir -p /etc/systemd/system/cri-docker.service.d
Apr  1 19:11:41 minikube sudo:    mario : PWD=/home/mario ; USER=root ; COMMAND=/usr/bin/test -d /etc/systemd/system/cri-docker.service.d
Apr  1 19:11:41 minikube sudo:    mario : PWD=/home/mario ; USER=root ; COMMAND=/usr/bin/scp -t /etc/systemd/system/cri-docker.service.d
Apr  1 19:11:42 minikube sudo:    mario : PWD=/home/mario ; USER=root ; COMMAND=/usr/sbin/service cri-docker.socket restart
Apr  1 19:12:30 minikube sudo:    mario : TTY=pts/0 ; PWD=/etc/systemd/system/docker.service.d ; USER=root ; COMMAND=/usr/sbin/service cri-docker.socket restart
Apr  1 19:12:41 minikube sudo:    mario : TTY=pts/0 ; PWD=/etc/systemd/system/docker.service.d ; USER=root ; COMMAND=/usr/sbin/service cri-docker.socket restart
Apr  1 19:12:48 minikube sudo:    mario : TTY=pts/0 ; PWD=/etc/systemd/system/docker.service.d ; USER=root ; COMMAND=/usr/sbin/service cri-docker.socket status

So, systemctl is being called to start cri-docker (check the lines with timestamps Apr 1 18:21:21 and Apr 1 18:21:28; those have TTY=pts/0, i.e. they were run from an interactive shell, while minikube's own invocations, logged without a TTY, go through the service wrapper).

I'll try to build minikube locally and see if I can learn more (it's been a while since I wrote any Go; might as well try to get some fu back :D ).

I wanted to test/use the ssh driver because of the ZFS volume issue I created a week or two ago.
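For anyone else digging in, a reasonable starting point (a sketch; exact paths and make targets may differ between minikube releases):

git clone https://github.com/kubernetes/minikube && cd minikube
grep -rn "cri-docker.socket" pkg/ cmd/    # find where the restart command is issued
make    # builds ./out/minikube (target name assumed)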

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 1, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 31, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Aug 30, 2024