
sudo service cri-docker.socket start: Process exited with status 5 #15413

Open
hualongfeng opened this issue Nov 28, 2022 · 26 comments
Labels
co/generic-driver co/runtime/docker Issues specific to a docker runtime kind/bug Categorizes issue or PR as related to a bug. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@hualongfeng

What Happened?

 minikube start --driver=ssh --ssh-ip-address=10.239.241.111 --ssh-user=ssp -v=4 --alsologtostderr --ssh-key='/home/fhl/.ssh/id_rsa'

I1128 23:37:37.108706 3909210 ssh_runner.go:195] Run: sudo service cri-docker.socket status
I1128 23:37:37.147498 3909210 ssh_runner.go:195] Run: sudo service cri-docker.socket start
I1128 23:37:37.963215 3909210 out.go:177]

W1128 23:37:37.963625 3909210 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo service cri-docker.socket start: Process exited with status 5
stdout:

stderr:
Failed to start cri-docker.socket.service: Unit cri-docker.socket.service not found.


W1128 23:37:37.963651 3909210 out.go:239] *
W1128 23:37:37.964530 3909210 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1128 23:37:37.964931 3909210 out.go:177]

I installed cri-dockerd on the target machine (a physical machine), and:

ssp@ceph-server5:~$ sudo service cri-docker status
● cri-docker.service - CRI Interface for Docker Application Container Engine
     Loaded: loaded (/etc/systemd/system/cri-docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-11-28 11:56:08 UTC; 3h 47min ago
TriggeredBy: ● cri-docker.socket
       Docs: https://docs.mirantis.com
   Main PID: 28534 (cri-dockerd)
      Tasks: 35
     Memory: 18.9M
        CPU: 8.843s
     CGroup: /system.slice/cri-docker.service
             └─28534 /usr/local/bin/cri-dockerd --container-runtime-endpoint fd://

Nov 28 11:56:08 ceph-server5 cri-dockerd[28534]: time="2022-11-28T11:56:08Z" level=info msg="The binary conntrack is not installed, this can cause failures in network connection cleanup."
Nov 28 11:56:08 ceph-server5 cri-dockerd[28534]: time="2022-11-28T11:56:08Z" level=info msg="The binary conntrack is not installed, this can cause failures in network connection cleanup."
Nov 28 11:56:08 ceph-server5 cri-dockerd[28534]: time="2022-11-28T11:56:08Z" level=info msg="Loaded network plugin cni"
Nov 28 11:56:08 ceph-server5 cri-dockerd[28534]: time="2022-11-28T11:56:08Z" level=info msg="Docker cri networking managed by network plugin cni"
Nov 28 11:56:08 ceph-server5 cri-dockerd[28534]: time="2022-11-28T11:56:08Z" level=info msg="Docker Info: &{ID:E7DJ:WRPA:4GTU:ZTHK:6PBZ:SFWI:2GHL:4NTI:PVQ2:EKY6:SXFZ:GD2K Containers:0 ContainersRunning:0 Conta>
Nov 28 11:56:08 ceph-server5 cri-dockerd[28534]: time="2022-11-28T11:56:08Z" level=info msg="Setting cgroupDriver systemd"
Nov 28 11:56:08 ceph-server5 cri-dockerd[28534]: time="2022-11-28T11:56:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Nov 28 11:56:08 ceph-server5 cri-dockerd[28534]: time="2022-11-28T11:56:08Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Nov 28 11:56:08 ceph-server5 cri-dockerd[28534]: time="2022-11-28T11:56:08Z" level=info msg="Start cri-dockerd grpc backend"
Nov 28 11:56:08 ceph-server5 systemd[1]: Started CRI Interface for Docker Application Container Engine.

But

ssp@ceph-server5:~$ sudo service cri-docker.socket status
Unit cri-docker.socket.service could not be found.

Attach the log file

I1128 23:37:33.971076 3909210 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1128 23:37:33.971098 3909210 ssh_runner.go:195] Run: curl -x http://child-prc.intel.com:913 -sS -m 2 https://registry.k8s.io/
I1128 23:37:33.971120 3909210 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1128 23:37:33.971132 3909210 sshutil.go:53] new ssh client: &{IP:10.239.241.111 Port:22 SSHKeyPath:/home/vagrant/.minikube/machines/minikube/id_rsa Username:ssp}
I1128 23:37:33.971134 3909210 sshutil.go:53] new ssh client: &{IP:10.239.241.111 Port:22 SSHKeyPath:/home/vagrant/.minikube/machines/minikube/id_rsa Username:ssp}
I1128 23:37:34.246717 3909210 ssh_runner.go:195] Run: sudo service containerd status
I1128 23:37:35.056487 3909210 ssh_runner.go:235] Completed: curl -x http://child-prc.intel.com:913 -sS -m 2 https://registry.k8s.io/: (1.085344076s)
I1128 23:37:35.057026 3909210 ssh_runner.go:195] Run: sudo service containerd stop
W1128 23:37:35.888338 3909210 cruntime.go:284] disable failed: sudo service containerd stop: Process exited with status 1
stdout:

stderr:
Job for containerd.service canceled.
I1128 23:37:35.888442 3909210 ssh_runner.go:195] Run: sudo service containerd status
W1128 23:37:35.938559 3909210 docker.go:136] disableOthers: containerd is still active
I1128 23:37:35.938784 3909210 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1128 23:37:35.986154 3909210 ssh_runner.go:195] Run: sudo service docker restart
I1128 23:37:37.108576 3909210 ssh_runner.go:235] Completed: sudo service docker restart: (1.122366052s)
I1128 23:37:37.108630 3909210 openrc.go:158] restart output:
I1128 23:37:37.108706 3909210 ssh_runner.go:195] Run: sudo service cri-docker.socket status
I1128 23:37:37.147498 3909210 ssh_runner.go:195] Run: sudo service cri-docker.socket start
I1128 23:37:37.963215 3909210 out.go:177]

W1128 23:37:37.963625 3909210 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo service cri-docker.socket start: Process exited with status 5
stdout:

stderr:
Failed to start cri-docker.socket.service: Unit cri-docker.socket.service not found.


W1128 23:37:37.963651 3909210 out.go:239] *
W1128 23:37:37.964530 3909210 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯

Operating System

Ubuntu

Driver

SSH

@afbjorklund
Collaborator

afbjorklund commented Nov 28, 2022

Did you install cri-dockerd?

The requirements for "ssh" are the same as for "none".

@hualongfeng
Author

Did you install cri-dockerd?

The requirements for "ssh" are the same as for "none".

Yes: `service cri-docker status` succeeds, but `service cri-docker.socket start` fails.

@hualongfeng
Author

`service cri-docker status` succeeds, but `service cri-docker.socket start` errors

And the equivalent systemctl command succeeds:

root@minikube:~/cri-dockerd# systemctl status cri-docker.socket
● cri-docker.socket - CRI Docker Socket for the API
     Loaded: loaded (/etc/systemd/system/cri-docker.socket; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-11-28 11:51:08 UTC; 4h 23min ago
   Triggers: ● cri-docker.service
     Listen: /run/cri-dockerd.sock (Stream)
      Tasks: 0 (limit: 230378)
     Memory: 0B
        CPU: 2ms
     CGroup: /system.slice/cri-docker.socket

Nov 28 11:51:08 ceph-server5 systemd[1]: Starting CRI Docker Socket for the API...
Nov 28 11:51:08 ceph-server5 systemd[1]: Listening on CRI Docker Socket for the API.

@afbjorklund
Collaborator

afbjorklund commented Nov 28, 2022

The "service" command doesn't know about systemd units, so it tries to append .service

Unit cri-docker.socket.service not found.
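The mismatch can be reproduced without touching systemd at all. This is only a sketch of the suffix logic the service(8) shim effectively applies here (the exact behavior varies by distribution):

```shell
# Hypothetical sketch: the shim forwards to systemctl but tacks
# ".service" onto the unit name, so a socket unit can never match.
unit="cri-docker.socket"
echo "service runs:   systemctl status ${unit}.service"   # unit does not exist
echo "what is needed: systemctl status ${unit}"
```

In other words, socket units have to be addressed with systemctl directly, using their .socket suffix.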

@hualongfeng
Author

The "service" command doesn't know about systemd units, so it tries to append .service

Unit cri-docker.socket.service not found.

root@minikube:~/cri-dockerd# cp /etc/systemd/system/cri-docker.socket /etc/systemd/system/cri-docker.socket.service
root@minikube:~/cri-dockerd# systemctl enable --now cri-docker.socket
root@minikube:~/cri-dockerd# service cri-docker.socket status
○ cri-docker.socket.service - CRI Docker Socket for the API
     Loaded: bad-setting (Reason: Unit cri-docker.socket.service has a bad unit file setting.)
     Active: inactive (dead)

Nov 28 15:52:06 minikube systemd[1]: /etc/systemd/system/cri-docker.socket.service:5: Unknown section 'Socket'. Ignoring.
Nov 28 15:52:06 minikube systemd[1]: cri-docker.socket.service: Service has no ExecStart=, ExecStop=, or SuccessAction=. Refusing.
Nov 28 15:52:08 minikube systemd[1]: /etc/systemd/system/cri-docker.socket.service:5: Unknown section 'Socket'. Ignoring.
Nov 28 15:52:08 minikube systemd[1]: cri-docker.socket.service: Service has no ExecStart=, ExecStop=, or SuccessAction=. Refusing.
Nov 28 15:52:15 minikube systemd[1]: /etc/systemd/system/cri-docker.socket.service:5: Unknown section 'Socket'. Ignoring.
Nov 28 15:52:15 minikube systemd[1]: cri-docker.socket.service: Service has no ExecStart=, ExecStop=, or SuccessAction=. Refusing.
Nov 28 15:54:36 minikube systemd[1]: /etc/systemd/system/cri-docker.socket.service:5: Unknown section 'Socket'. Ignoring.
Nov 28 15:54:36 minikube systemd[1]: cri-docker.socket.service: Service has no ExecStart=, ExecStop=, or SuccessAction=. Refusing.
Nov 28 16:23:28 minikube systemd[1]: /etc/systemd/system/cri-docker.socket.service:5: Unknown section 'Socket'. Ignoring.
Nov 28 16:23:28 minikube systemd[1]: cri-docker.socket.service: Service has no ExecStart=, ExecStop=, or SuccessAction=. Refusing.
root@minikube:~/cri-dockerd# service cri-docker.socket start
Failed to start cri-docker.socket.service: Unit cri-docker.socket.service has a bad unit file setting.
See system logs and 'systemctl status cri-docker.socket.service' for details.
root@minikube:~/cri-dockerd# systemctl daemon-reload
root@minikube:~/cri-dockerd# systemctl enable --now cri-docker.socket
root@minikube:~/cri-dockerd# service cri-docker.socket status
○ cri-docker.socket.service - CRI Docker Socket for the API
     Loaded: bad-setting (Reason: Unit cri-docker.socket.service has a bad unit file setting.)
     Active: inactive (dead)

Nov 28 15:52:15 minikube systemd[1]: /etc/systemd/system/cri-docker.socket.service:5: Unknown section 'Socket'. Ignoring.
Nov 28 15:52:15 minikube systemd[1]: cri-docker.socket.service: Service has no ExecStart=, ExecStop=, or SuccessAction=. Refusing.
Nov 28 15:54:36 minikube systemd[1]: /etc/systemd/system/cri-docker.socket.service:5: Unknown section 'Socket'. Ignoring.
Nov 28 15:54:36 minikube systemd[1]: cri-docker.socket.service: Service has no ExecStart=, ExecStop=, or SuccessAction=. Refusing.
Nov 28 16:23:28 minikube systemd[1]: /etc/systemd/system/cri-docker.socket.service:5: Unknown section 'Socket'. Ignoring.
Nov 28 16:23:28 minikube systemd[1]: cri-docker.socket.service: Service has no ExecStart=, ExecStop=, or SuccessAction=. Refusing.
Nov 28 16:23:37 minikube systemd[1]: /etc/systemd/system/cri-docker.socket.service:5: Unknown section 'Socket'. Ignoring.
Nov 28 16:23:37 minikube systemd[1]: cri-docker.socket.service: Service has no ExecStart=, ExecStop=, or SuccessAction=. Refusing.
Nov 28 16:24:15 minikube systemd[1]: /etc/systemd/system/cri-docker.socket.service:5: Unknown section 'Socket'. Ignoring.
Nov 28 16:24:15 minikube systemd[1]: cri-docker.socket.service: Service has no ExecStart=, ExecStop=, or SuccessAction=. Refusing.
root@minikube:~/cri-dockerd# service cri-docker.socket start
Failed to start cri-docker.socket.service: Unit cri-docker.socket.service has a bad unit file setting.
See system logs and 'systemctl status cri-docker.socket.service' for details.
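The copy trick above cannot work: systemd determines a unit's type from its filename suffix, so a file named *.service is parsed as a service unit, the [Socket] section is ignored ("Unknown section 'Socket'"), and the now-required ExecStart= is missing. For reference, a stock cri-docker.socket unit looks roughly like this (a sketch based on upstream cri-dockerd packaging; options may differ by version, so compare with your installed copy):

```ini
# /etc/systemd/system/cri-docker.socket (sketch; verify against your installed file)
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
```

Renaming it does not change how it must be addressed: the unit has to be managed as cri-docker.socket via systemctl.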

@afbjorklund
Collaborator

afbjorklund commented Nov 28, 2022

For some reason systemd is not detected, so minikube is trying to use the OpenRC code path. It should be using systemctl.

i.e. service cri-docker.socket status should be systemctl status cri-docker.socket

It is perfectly reasonable to expect cri-dockerd to run without systemd, but that has not been implemented...

There should be some clues in the log as to why running systemctl --version failed.

@hualongfeng
Author

For some reason systemd is not detected, so minikube is trying to use the OpenRC code path. It should be using systemctl.

i.e. service cri-docker.socket status should be systemctl status cri-docker.socket

It is perfectly reasonable to expect cri-dockerd to run without systemd, but that has not been implemented...

There should be some clues in the log as to why running systemctl --version failed.

root@minikube:~/cri-dockerd# systemctl --version
systemd 249 (249.11-0ubuntu3)
+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS -OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP -LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified

@hualongfeng
Author

Is there any way to solve this issue?

@afbjorklund
Collaborator

The log file is needed to troubleshoot.

@afbjorklund afbjorklund added kind/bug Categorizes issue or PR as related to a bug. co/runtime/docker Issues specific to a docker runtime priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Nov 28, 2022
@hualongfeng
Author

logs.txt

@afbjorklund
Collaborator

afbjorklund commented Nov 28, 2022

Yeah, that looks like a bug.

I1129 00:33:23.366905 3924022 ssh_runner.go:195] Run: systemctl --version
I1129 00:33:23.366965 3924022 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: IP address is not set
I1129 00:33:23.643348 3924022 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: IP address is not set
I1129 00:33:24.184521 3924022 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: IP address is not set

Probably sysinit is called "too early", before the driver knows the IP address (it seems to be set correctly in the config):

... SSHIPAddress:10.239.241.111 SSHUser:ssp SSHKey:/home/vagrant/.ssh/id_rsa SSHPort:22 ...

@hualongfeng
Author

Mirantis/cri-dockerd#133 (comment)
I asked the cri-dockerd project about the command service cri-docker.socket status:

service cri-docker.socket start is meaningless, as the service command is hardcoded to use a .service unit suffix.

So why does minikube use the command service cri-docker.socket status rather than systemctl status cri-docker.socket or service cri-docker status?

@logopk

logopk commented Feb 5, 2023

Let me add a Me2!

@afbjorklund
Collaborator

This is probably a bug in the ssh driver: it is not detecting systemd on the remote system properly.

The number of users who actually want to use OpenRC is probably vanishingly small.

The simplest fix is just removing the .socket unit.

Then minikube can start the service directly.
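As a stopgap on the SSH target, the units can be managed with systemctl directly before running minikube start. This is a dry-run sketch (it only prints the commands; the unit names assume the upstream cri-dockerd packaging):

```shell
# Dry-run sketch: prints the systemctl commands the workaround would use.
# Replace "echo $@" in run() with "$@" to actually execute them.
run() { echo "$@"; }
run sudo systemctl daemon-reload
run sudo systemctl enable --now cri-docker.service
run sudo systemctl enable --now cri-docker.socket
```

This sidesteps the service(8) shim entirely, so the .socket suffix is handled correctly.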

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 6, 2023
@logopk

logopk commented May 6, 2023

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 6, 2023
@f0rkz

f0rkz commented Jun 29, 2023

2023 - this is still an issue. 8|

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 23, 2024
@logopk

logopk commented Jan 23, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 23, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 22, 2024
@logopk

logopk commented May 22, 2024

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 20, 2024
@logopk

logopk commented Aug 20, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 18, 2024
@logopk

logopk commented Nov 18, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 18, 2024