"Kubernetes is starting…" state never ends #2990

Closed
plambert opened this issue Jun 11, 2018 · 149 comments
@plambert

  • [x] I have tried with the latest version of my channel (Stable or Edge)
  • [x] I have uploaded Diagnostics
  • Diagnostics ID: 804B8977-06D2-4E2A-BB3E-10FBBA99D1F9/20180611-105749

Expected behavior

Within a few minutes of starting Docker for Mac, Kubernetes should be available.

Actual behavior

After several hours, it is still 'starting…'

Information

  • macOS Version: 10.13.5

Diagnostic logs

(Uploaded as 804B8977-06D2-4E2A-BB3E-10FBBA99D1F9/20180611-105749)

Steps to reproduce the behavior

  1. I've quit Docker, rebooted, and started it again, with the same result
@yuzuco15

yuzuco15 commented Jun 12, 2018

I have the same problem after updating to version 18.05.0-ce-mac67 (25042).

  • I can use kubectl commands even though Docker for Mac is showing Kubernetes is starting....
  • I cannot stop Kubernetes from Preferences... -> Kubernetes.
  • The external IP is not attached to the LB (it only shows <pending>) even though I have waited for 20 min.
    • The previous version (18.05.0-ce-mac66) attached localhost as the external IP to the LB.
    • This version only shows <pending>, and a cURL command like curl http://localhost/some-url returns curl: (7) Failed to connect to localhost port 80: Connection refused, even though this worked on the previous version.
$ kgs   # kgs: presumably an alias for 'kubectl get services'
NAME                                            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
crazy-greyhound-nginx-ingress-controller        LoadBalancer   10.96.142.184    <pending>     80:31157/TCP,443:30351/TCP   27m
crazy-greyhound-nginx-ingress-default-backend   ClusterIP      10.103.196.198   <none>        80/TCP                       27m
...
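
As a stopgap while the external IP stays <pending>, port-forwarding to the Service should still reach the controller (a sketch using the service name from the output above; kubectl port-forward to a Service needs kubectl 1.10+):

$ kubectl port-forward svc/crazy-greyhound-nginx-ingress-controller 8080:80
$ curl http://localhost:8080/some-url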

Diagnostic logs: 2E0B1819-92A4-408C-9548-B52A29AAF164/20180612-214950

@norbertmocsnik

Experiencing the same after upgrading to macOS 10.13.5. Also see #2985

@tuzla0autopilot4

tuzla0autopilot4 commented Jun 13, 2018

I had exactly the same scenario and issue; even rebooting didn't help. Simply doing the "Reset Kubernetes cluster" on the Reset tab resolved the issue for me. (I didn't have to go so far as to reset to factory defaults.)

@norbertmocsnik

I didn't realize Reset was a tab! I thought it was a button that would reset everything at once (not just Kubernetes). Resetting the Kubernetes cluster has indeed helped, although I do hope that this is just a one-time thing. I was happy to move from minikube to Docker for Mac primarily because minikube releases broke the Kubernetes cluster often. Hopefully this will not be the case with Docker for Mac in the future.

@yuzuco15

"Reset Kubernetes cluster" also worked for my case! Thank you so much 😆

@tjamet

tjamet commented Jun 19, 2018

Same problem here: Kubernetes is stuck in the starting state.
[screenshot: Docker for Mac stuck on "Kubernetes is starting…"]

diagnostic ID 8A8F9D98-E405-4AB4-ABF8-B52AB26DBD61/20180619-104118

Running on OSX version 10.13.4 (17E202)

Docker for Mac: version: 18.05.0-ce-mac67 (1fa4e2acfc1a52f79623add2390604515d32297e)
macOS: version 10.13.4 (build: 17E202)
logs: /tmp/8A8F9D98-E405-4AB4-ABF8-B52AB26DBD61/20180619-104118.tar.gz
[OK]     vpnkit
[OK]     virtualization hypervisor
[OK]     vmnetd
[OK]     dns
[OK]     driver.amd64-linux
[OK]     virtualization VT-X
[OK]     app
[OK]     moby
[OK]     system
[OK]     moby-syslog
[OK]     kubernetes
[OK]     files
[OK]     env
[OK]     virtualization kern.hv_support
[OK]     osxfs
[OK]     moby-console
[OK]     logs
[OK]     docker-cli
[OK]     disk

@djs55
Contributor

djs55 commented Jun 19, 2018

@plambert your logs have:

2018-06-11 10:57:46.097206-0700  localhost com.docker.driver.amd64-linux[10740]: Node is not ready: PIDPressure/False kubelet has sufficient PID available
2018-06-11 10:57:47.102135-0700  localhost com.docker.driver.amd64-linux[10740]: Node is not ready: PIDPressure/False kubelet has sufficient PID available
2018-06-11 10:57:48.097550-0700  localhost com.docker.driver.amd64-linux[10740]: Node is not ready: PIDPressure/False kubelet has sufficient PID available
2018-06-11 10:57:49.097725-0700  localhost com.docker.driver.amd64-linux[10740]: Node is not ready: PIDPressure/False kubelet has sufficient PID available
2018-06-11 10:57:50.102071-0700  localhost com.docker.driver.amd64-linux[10740]: Node is not ready: PIDPressure/False kubelet has sufficient PID available

and yet

$ /usr/local/bin/kubectl.docker  --context docker-for-desktop get nodes
NAME                 STATUS    ROLES     AGE       VERSION
docker-for-desktop   Ready     master    7d        v1.10.3

and

$ /usr/local/bin/kubectl.docker  --context docker-for-desktop describe nodes
Name:               docker-for-desktop
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=docker-for-desktop
                    node-role.kubernetes.io/master=
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Mon, 04 Jun 2018 09:36:58 -0700
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Mon, 11 Jun 2018 10:58:31 -0700   Mon, 04 Jun 2018 09:36:44 -0700   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Mon, 11 Jun 2018 10:58:31 -0700   Mon, 04 Jun 2018 09:36:44 -0700   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 11 Jun 2018 10:58:31 -0700   Mon, 04 Jun 2018 09:36:44 -0700   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Mon, 11 Jun 2018 10:58:31 -0700   Mon, 04 Jun 2018 09:36:44 -0700   KubeletReady                 kubelet is posting ready status
  PIDPressure      False   Mon, 11 Jun 2018 10:58:31 -0700   Thu, 07 Jun 2018 22:30:59 -0700   KubeletHasSufficientPID      kubelet has sufficient PID available

I notice that PIDPressure has Status False and yet is mentioned in the main Docker log. I'll check the code which waits for the node ready state to see if it is confused.
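
For anyone comparing what the app sees against the API, the Ready condition can be queried on its own (a sketch using kubectl's jsonpath output):

$ /usr/local/bin/kubectl.docker --context docker-for-desktop get node docker-for-desktop \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
True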

@djs55
Contributor

djs55 commented Jun 20, 2018

I've fixed a bug in the code which failed to notice the node had become Ready. The fix is in the latest development build, which can be downloaded from here: https://download-stage.docker.com/mac/bysha1/98b468326d2c579b87b48c47b2ac7e66c1f0a282/Docker.dmg Note that this build is only suitable for testing, not production. If you get a chance to try it, let me know how it goes. If it still fails, please upload a fresh set of diagnostics.

@tjamet

tjamet commented Jun 20, 2018

Hi @djs55, I kept my previous Docker.qcow2 image and started your build, and "Kubernetes is running" is now back! Thanks!
Hope it works for the others as well :)

@djs55
Contributor

djs55 commented Jun 21, 2018

@tjamet thanks a lot for the speedy confirmation!

@bitmvr

bitmvr commented Jul 11, 2018

For what it is worth, Kubernetes is now running successfully without your patch, @djs55, on version 18.05.0-ce-mac67 (25042).

I have edge installed via Homebrew. Here is what I did step-by-step:

  1. Completely removed docker
  2. Uninstalled it via homebrew
  3. Reinstalled it via homebrew

Not sure why this works or why it doesn't 'require' your patch, but it's something others might try if they need a 'production' build.

@markhilton

@djs55 I still have "Kubernetes is starting..." while running your dmg build.
Diagnostic ID: 29A56A58-476F-4D93-B4E0-B518C213C244/20180726-012938

@arafatmohammed

I had the same issue; it turned out my /etc/hosts was empty. Once I restored its original values, Kubernetes started successfully.
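
For reference, a stock macOS /etc/hosts looks roughly like this; the last entry is the one Docker Desktop injects for Kubernetes (a sketch, comment markers approximate):

127.0.0.1       localhost
255.255.255.255 broadcasthost
::1             localhost
# Added by Docker Desktop
127.0.0.1       kubernetes.docker.internal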

@cicorias

What worked for me was a reset of ALL data, then recreating everything.

@MartinEmrich

I also have this issue. Kubernetes worked fine one time, but after I had to reboot, it got stuck in "Kubernetes is starting".

I tried these steps:

  • Reboot again
  • Reset Kubernetes
  • Reset Disk
  • Reset to factory defaults
  • Uninstall/Reinstall
  • Uninstall/Install edge release
  • Uninstall/Reinstall edge with manually removing stuff smelling like docker/kubernetes from my home directory

But it still won't start.

$ kubectl cluster-info 
Kubernetes master is running at https://localhost:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: EOF
$ kubectl cluster-info dump
Unable to connect to the server: EOF

@avocade

avocade commented Sep 11, 2018

Yep, getting the same thing. Latest release for macOS (stable). Will test edge too.

@avocade

avocade commented Sep 17, 2018

Failing on edge too, tried all means of getting it running (yes I uninstalled homebrew too). Moving back to minikube for now.

@aditzel

aditzel commented Sep 20, 2018

Same as above: stuck in starting after updating to latest macOS release on both stable and edge channels.

@keshavgupt

I had it running fine on my Mac, but after upgrading my OS to "High Sierra 10.13.6" I'm running into the same issue. I have already tried a few different stable and edge versions.

@thgsn

thgsn commented Sep 23, 2018

Same here, tested both stable and edge versions, on macOS High Sierra 10.13.6.

@rdrgmnzs

Same here as well, at first with the latest stable and now with Edge Version 2.0.0.0-beta1-mac75 (27117) and High Sierra 10.13.6.

@davidkarlsen

davidkarlsen commented Sep 27, 2018

Same with macOS 10.14 and 18.06.1-ce-mac73; the problem seems to be with etcd:

{"log":"2018-09-27 09:54:08.188030 I | embed: peerTLS: cert = /run/config/pki/etcd/peer.crt, key = /run/config/pki/etcd/peer.key, ca = , trusted-ca = /run/config/pki/etcd/ca.crt, client-cer
t-auth = true\n","stream":"stderr","time":"2018-09-27T09:54:08.188456442Z"}
{"log":"2018-09-27 09:54:08.188034 W | embed: The scheme of peer url http://localhost:2380 is HTTP while peer key/cert files are presented. Ignored peer key/cert files.\n","stream":"stderr"
,"time":"2018-09-27T09:54:08.188459911Z"}
{"log":"2018-09-27 09:54:08.188038 W | embed: The scheme of peer url http://localhost:2380 is HTTP while client cert auth (--peer-client-cert-auth) is enabled. Ignored client cert auth for 
this url.\n","stream":"stderr","time":"2018-09-27T09:54:08.188463072Z"}
{"log":"2018-09-27 09:54:08.222240 C | etcdmain: listen tcp 172.30.105.230:2380: bind: cannot assign requested address\n","stream":"stderr","time":"2018-09-27T09:54:08.22338831Z"}

which does not match any of the interfaces:

br-80b3dd052912 Link encap:Ethernet  HWaddr 02:42:52:07:B0:BF  
          inet addr:172.18.0.1  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

br-e94472631828 Link encap:Ethernet  HWaddr 02:42:3E:FD:B5:C0  
          inet addr:172.19.0.1  Bcast:172.19.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

docker0   Link encap:Ethernet  HWaddr 02:42:65:D8:4C:63  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:65ff:fed8:4c63/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:928 (928.0 B)

eth0      Link encap:Ethernet  HWaddr 02:50:00:00:00:01  
          inet addr:192.168.65.3  Bcast:192.168.65.255  Mask:255.255.255.0
          inet6 addr: fe80::50:ff:fe00:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:474 errors:0 dropped:0 overruns:0 frame:0
          TX packets:487 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:42732 (41.7 KiB)  TX bytes:41878 (40.8 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:61919 errors:0 dropped:0 overruns:0 frame:0
          TX packets:61919 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:4733663 (4.5 MiB)  TX bytes:4733663 (4.5 MiB)
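
One quick way to spot this kind of mismatch from the host is to list the VM's interfaces through a host-networked container and compare against the address etcd tries to bind (a sketch; assumes the alpine image is available):

$ docker run --rm --net=host alpine ip addr show eth0
# expect the VM address (192.168.65.3 here); etcd's 172.30.105.230 appears on no interface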

@thgsn

thgsn commented Sep 29, 2018

I found a solution that worked for me: don't change the default Docker subnet (192.168.65.0/24).

Version 18.06.1-ce-mac73 (26764)
Version 10.14 (18A391)

@maaizelahi

maaizelahi commented Sep 27, 2020

Try setting swapMiB to 0 in ~/Library/Group Containers/group.com.docker/settings.json; that worked for me.
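
If you'd rather not edit the file by hand, a one-line sketch (quit Docker Desktop first; key spacing in settings.json may vary between versions):

$ sed -i '' -E 's/"swapMiB" *: *[0-9]+/"swapMiB" : 0/' ~/Library/Group\ Containers/group.com.docker/settings.json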

@mrdulin

mrdulin commented Oct 26, 2020

Hit this issue. After checking the "Enable Kubernetes" option, it shows the "Starting…" state but never finishes.

However, it seems that the k8s cluster has been created and is running.

⚡  kubectl cluster-info 
Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Environment information:

  • Docker 2.4.0.0
  • Kubernetes v1.18.8
  • macOS 10.13.6

@samtoi

samtoi commented Oct 28, 2020

worked for me
rm -rf ~/Library/Group\ Containers/group.com.docker/pki/
rm -rf ~/.kube
and restart Docker for Mac: Preferences -> Reset -> Restart

Also, keep checking whether all Docker images have been downloaded; it takes quite a bit of time to download them all.
Use docker image ls until you have all of the below-listed images on your system.
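
A rough way to keep an eye on the download (a sketch; image names and tags vary with the bundled Kubernetes version, and k8s.gcr.io was the registry at the time):

$ docker image ls | grep k8s.gcr.io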

Update: it seems he installed what is in the screenshot below.
[screenshot: list of expected Kubernetes images]

Happy the above worked for me too. Sad that, 2 years later, here we still are :(

Yet another "worked for me too" here, thanks!

Initially I tried to enable Kubernetes from the dashboard - got stuck in the starting phase. Console (app) kept repeating mDNSResponder lines with DNSServiceQueryRecords relating to (com.docker.driv) and process com.docker.driver.amd64-linux saying that it cannot get lease for master node: an error on the server ("") has prevented the request from succeeding (get leases.coordination.k8s.io docker-desktop). /etc/hosts was fine with the kubernetes.docker.internal row injected. Tried to reset Kubernetes Cluster and first only removing the config file from under ~/.kube, no help. Removing both folders as hinted above (+restart docker) did the thing and now it is running.

Docker 2.4.0.0
Kubernetes v1.18.8
MacOS 10.15.7

@jsphstls

jsphstls commented Nov 3, 2020

I did not allocate enough resources. Once I did, this state cleared after restarting docker.

@dehypnosis

Menu > Troubleshoot > Reset to Factory Defaults worked for me.

@rwanjohi

worked for me
rm -rf ~/Library/Group\ Containers/group.com.docker/pki/
rm -rf ~/.kube
and restart Docker for Mac: Preferences -> Reset -> Restart

worked for me. thanks

This worked, for Mac! Thanks

@KeitelDOG

worked for me
rm -rf ~/Library/Group\ Containers/group.com.docker/pki/
rm -rf ~/.kube
and restart Docker for Mac: Preferences -> Reset -> Restart

Docker's bottom notification kept saying "Kubernetes is starting..." with an orange icon for several minutes while it was already working. After 3 to 5 minutes, it showed "running" with a green icon.

@MrBuBBLs

worked for me
rm -rf ~/Library/Group\ Containers/group.com.docker/pki/
rm -rf ~/.kube
and restart Docker for Mac: Preferences -> Reset -> Restart

Docker's bottom notification kept saying "Kubernetes is starting..." with an orange icon for several minutes while it was already working. After 3 to 5 minutes, it showed "running" with a green icon.

Exactly that! May I add that we don't have to delete the whole ~/.kube directory and lose all our K8s contexts.

Removing every entry for the docker-desktop context from the ~/.kube/config file is enough, either by editing it manually or with kubectx's -d option.
After that I ran rm -rf ~/Library/Group\ Containers/group.com.docker/pki/ to remove the Docker/Kubernetes PKI files, before restarting the Docker Desktop app.
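
For those without kubectx, the same context cleanup can be done with plain kubectl (a sketch; entry names assume the default docker-desktop naming):

$ kubectl config delete-context docker-desktop
$ kubectl config delete-cluster docker-desktop
$ kubectl config unset users.docker-desktop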

Also had to wait a few minutes, as stated above, before the Docker Desktop app showed a green light again for K8s, but the context & local cluster were already working anyway.

Environment information:
Docker Desktop 3.1.0
Kubernetes v1.19.3 (cli v1.16.13)
macOS 10.15.7

@SamYuan1990

SamYuan1990 commented Jan 26, 2021

kubectl cluster-info dump
Unable to connect to the server: net/http: TLS handshake timeout

or

Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get nodes)

@sayjeyhi

I have this issue on MacBook Pro M1.

[screenshot]

@KeitelDOG

@sayjeyhi if it still doesn't work, check whether two versions of Kubernetes are installed. This was my case at the beginning: the Docker Desktop app already installs a Kubernetes version, and I installed another one with brew, which linked the binary to the new one and caused problems with the Docker app. I uninstalled the brew version and followed the steps in this issue.

@sayjeyhi

@KeitelDOG Were you using the M1 preview application? Because I did not install k8s with brew.

@KeitelDOG

@sayjeyhi no, I have a regular MacBook Pro. I was just using the Docker Desktop app, which automatically installed Kubernetes too.

@codeclown

codeclown commented Mar 28, 2021

Having this issue right now: after trying to reset the cluster it never started again, just stuck in the Starting state.

$ kubectl get pod -A
NAMESPACE     NAME                                     READY   STATUS             RESTARTS   AGE
kube-system   etcd-docker-desktop                      1/1     Running            0          2m5s
kube-system   kube-apiserver-docker-desktop            1/1     Running            0          2m6s
kube-system   kube-controller-manager-docker-desktop   0/1     CrashLoopBackOff   4          3m
kube-system   kube-scheduler-docker-desktop            1/1     Running            0          118s
$ kubectl -n kube-system logs kube-controller-manager-docker-desktop
Flag --port has been deprecated, see --secure-port instead.
I0328 16:32:17.102609       1 serving.go:331] Generated self-signed cert in-memory
I0328 16:32:18.135023       1 controllermanager.go:175] Version: v1.19.7
I0328 16:32:18.136310       1 dynamic_cafile_content.go:167] Starting request-header::/run/config/pki/front-proxy-ca.crt
I0328 16:32:18.136638       1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
I0328 16:32:18.136670       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/run/config/pki/ca.crt
I0328 16:32:18.137366       1 tlsconfig.go:240] Starting DynamicServingCertificateController
W0328 16:32:18.146213       1 controllermanager.go:628] fetch api resource lists failed, use legacy client builder: Get "https://192.168.65.4:6443/api/v1?timeout=32s": x509: certificate is valid for 10.96.0.1, 0.0.0.0, 192.168.65.3, 127.0.0.1, not 192.168.65.4
F0328 16:32:28.180720       1 controllermanager.go:244] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.65.4:6443/healthz?timeout=32s": x509: certificate is valid for 10.96.0.1, 0.0.0.0, 192.168.65.3, 127.0.0.1, not 192.168.65.4
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000132001, 0xc000378280, 0x167, 0x276)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x6a47fa0, 0xc000000003, 0x0, 0x0, 0xc000a2c070, 0x68bfef4, 0x14, 0xf4, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printf(0x6a47fa0, 0x3, 0x0, 0x0, 0x449dd15, 0x25, 0xc000fa3348, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatalf(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1456
k8s.io/kubernetes/cmd/kube-controller-manager/app.Run.func1(0x4a691a0, 0xc000130018)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:244 +0x54e
k8s.io/kubernetes/cmd/kube-controller-manager/app.Run(0xc00069a458, 0xc0001160c0, 0xc000a60950, 0xc0008bf0d8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:260 +0x9b1
k8s.io/kubernetes/cmd/kube-controller-manager/app.NewControllerManagerCommand.func2(0xc000184dc0, 0xc000a23b80, 0x0, 0x13)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:124 +0x2b7
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000184dc0, 0xc000138010, 0x13, 0x13, 0xc000184dc0, 0xc000138010)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:846 +0x2c2
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000184dc0, 0x16708eabf6ce043d, 0x6a47c00, 0x406525)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887
main.main()
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-controller-manager/controller-manager.go:46 +0xe5

goroutine 18 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x6a47fa0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

goroutine 70 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run(0xc0002aef00, 0x1, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:181 +0x2cd
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.unionCAContent.Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/union_content.go:104 +0xcb

goroutine 104 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x45b5708, 0x4a02c60, 0xc0009e5d40, 0x1, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x45b5708, 0x12a05f200, 0x0, 0xc000375001, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x45b5708, 0x12a05f200)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a

goroutine 63 [select]:
k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000115900)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57

goroutine 105 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.(*Broadcaster).loop(0xc00075b100)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/mux.go:207 +0x66
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.NewBroadcaster
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/mux.go:75 +0xce

goroutine 106 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1(0x4a129a0, 0xc000a8f950, 0xc0004f3200)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:301 +0xaa
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:299 +0x6e

goroutine 107 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1(0x4a129a0, 0xc000a8fb00, 0xc000a8fad0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:301 +0xaa
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:299 +0x6e

goroutine 108 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc0002ae8a0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:198 +0xac
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:58 +0x135

goroutine 109 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0002aea20)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:231 +0x405
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:68 +0x185

goroutine 110 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc0002af020)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:198 +0xac
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:58 +0x135

goroutine 111 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0002af1a0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:231 +0x405
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:68 +0x185

goroutine 114 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000308cc0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:198 +0xac
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:58 +0x135

goroutine 115 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000308e40)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:231 +0x405
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:68 +0x185

goroutine 72 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc00075b0d0, 0xc000000000)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc00075b0c0)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get(0xc0002af020, 0x0, 0x0, 0x3914e00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:145 +0x89
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).processNextWorkItem(0xc0002af620, 0x203000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:190 +0x66
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).runWorker(0xc0002af620)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:185 +0x2b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0004fc040)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004fc040, 0x4a02c60, 0xc000fd8000, 0x1, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004fc040, 0x3b9aca00, 0x0, 0x1, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0004fc040, 0x3b9aca00, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:171 +0x245

goroutine 71 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run(0xc0002af620, 0x1, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:181 +0x2cd
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.unionCAContent.Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/union_content.go:104 +0xcb

goroutine 73 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0000501c0, 0xc0004fc070, 0xc000116c60, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:539 +0x11d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollUntil(0xdf8475800, 0xc0004fc070, 0xc0001160c0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:492 +0xc5
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800, 0xc0004fc070, 0xc0001160c0, 0xb, 0xc000fc5f48)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:511 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:174 +0x2b3

goroutine 117 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run(0xc000fb0e00, 0x1, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:254 +0x245
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*SecureServingInfo).tlsConfig
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:136 +0x5fa

goroutine 74 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0xc0001160c0, 0xc0004fc0b0, 0x4a69160, 0xc000510000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:279 +0xbd
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:278 +0x8c

goroutine 75 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc000116d20, 0xdf8475800, 0x0, 0xc000116cc0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:588 +0x17b
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:571 +0x8c

goroutine 118 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer.func1(0xc0000cf260, 0xc0001160c0, 0x0, 0xc0006bcd20)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:221 +0x65
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:219 +0x88

goroutine 119 [IO wait]:
internal/poll.runtime_pollWait(0x7fa1c26e1df0, 0x72, 0x0)
	/usr/local/go/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000a7a198, 0x72, 0x0, 0x0, 0x44214e0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc000a7a180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:394 +0x1fc
net.(*netFD).accept(0xc000a7a180, 0x203000, 0x203000, 0x45b64c0)
	/usr/local/go/src/net/fd_unix.go:172 +0x45
net.(*TCPListener).accept(0xc0008bac20, 0xc000dbb130, 0x50, 0x50)
	/usr/local/go/src/net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc0008bac20, 0x30, 0x406f500, 0xc000dbb130, 0x0)
	/usr/local/go/src/net/tcpsock.go:261 +0x65
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.tcpKeepAliveListener.Accept(0x4a65220, 0xc0008bac20, 0x6a48d80, 0x0, 0x50, 0x3f4fae0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:261 +0x35
crypto/tls.(*listener).Accept(0xc000dca780, 0x406f500, 0xc000a79110, 0x3b66940, 0x6a0ec70)
	/usr/local/go/src/crypto/tls/tls.go:67 +0x37
net/http.(*Server).Serve(0xc0006bcd20, 0x4a4e9e0, 0xc000dca780, 0x0, 0x0)
	/usr/local/go/src/net/http/server.go:2937 +0x266
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer.func2(0x4a65220, 0xc0008bac20, 0xc0006bcd20, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:236 +0xe9
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:227 +0xc8

goroutine 76 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc00075ad90, 0xc000000000)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc00075ad80)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get(0xc0002ae8a0, 0x0, 0x0, 0x3914e00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:145 +0x89
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).processNextWorkItem(0xc0002aef00, 0x203000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:190 +0x66
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).runWorker(0xc0002aef00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:185 +0x2b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0004fc110)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004fc110, 0x4a02c60, 0xc000a791a0, 0x45b4d01, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004fc110, 0x3b9aca00, 0x0, 0x1, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0004fc110, 0x3b9aca00, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:171 +0x245

goroutine 77 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc000050620, 0xc0004fc130, 0xc000116d80, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:539 +0x11d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollUntil(0xdf8475800, 0xc0004fc130, 0xc0001160c0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:492 +0xc5
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800, 0xc0004fc130, 0xc0001160c0, 0xb, 0xc000fbef48)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:511 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:174 +0x2b3

goroutine 78 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0xc0001160c0, 0xc0004fc1b0, 0x4a69160, 0xc000510200)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:279 +0xbd
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:278 +0x8c

goroutine 79 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc000116e40, 0xdf8475800, 0x0, 0xc000116de0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:588 +0x17b
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:571 +0x8c

goroutine 123 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc00075b890, 0xc000000000)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc00075b880)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get(0xc000308cc0, 0x0, 0x0, 0x3914e00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:145 +0x89
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).processNextWorkItem(0xc000fb0e00, 0x203000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:263 +0x66
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).runWorker(0xc000fb0e00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:258 +0x2b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005325b0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005325b0, 0x4a02c60, 0xc000a79140, 0x45b4d01, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005325b0, 0x3b9aca00, 0x0, 0x1, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0005325b0, 0x3b9aca00, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:247 +0x1b3

goroutine 124 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000532600, 0x4a02c60, 0xc000a79080, 0x1, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000532600, 0xdf8475800, 0x0, 0x1, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000532600, 0xdf8475800, 0xc0001160c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x22b

goroutine 139 [runnable]:
net/http.setRequestCancel.func4(0x0, 0xc00061cab0, 0xc000eed220, 0xc0002bd478, 0xc0000cf980)
	/usr/local/go/src/net/http/client.go:398 +0xe5
created by net/http.setRequestCancel
	/usr/local/go/src/net/http/client.go:397 +0x337

Edit: here's what worked for me, combining information from various comments:

  1. Shut down docker completely from: menu bar icon -> Quit Docker Desktop
  2. rm -rf ~/Library/Group\ Containers/group.com.docker/pki/
  3. sed -i '' 's/"kubernetesEnabled" : true/"kubernetesEnabled" : false/' ~/Library/Group\ Containers/group.com.docker/settings.json <- this edits your docker settings file directly
  4. Restart Docker for Mac, enable Kubernetes again from the settings
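
Combined into a single shell session, the above is roughly (a sketch; the osascript quit line and app name are assumptions about a default install):

$ osascript -e 'quit app "Docker"'
$ rm -rf ~/Library/Group\ Containers/group.com.docker/pki/
$ sed -i '' 's/"kubernetesEnabled" : true/"kubernetesEnabled" : false/' ~/Library/Group\ Containers/group.com.docker/settings.json
$ open -a Docker    # then re-enable Kubernetes from Preferences -> Kubernetes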

@arthurgbranco

@codeclown That worked for me, thank you!

@AntonAleksandrov13

In the Debug mode settings, resetting the cluster and cleaning up the Docker data helped me.

@dECRISES

dECRISES commented Apr 9, 2021

I confirm, the solution is:
$ rm -rf ~/Library/Group\ Containers/group.com.docker/pki/
$ rm -rf ~/.kube
$ # check that resources are sufficient (CPU, memory, swap)

Then restart or apply changes
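
To check those resource settings without opening the UI, something like this works (a sketch; key names assume the settings.json layout of the time):

$ grep -E '"(cpus|memoryMiB|swapMiB)"' ~/Library/Group\ Containers/group.com.docker/settings.json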

This solution worked for me too

@ggamzang

@codeclown Thanks!!! It worked :)

@wkhatch

wkhatch commented Jun 12, 2021

I'd fixed this previously, and upon upgrading Docker Desktop it's back to being broken, with the infinite "starting". Confirmed that @dECRISES's solution worked for me in fixing it up again.

@Tanmak

Tanmak commented Jun 14, 2021

I solved this on my M1 MacBook Air. Commenting here in case it helps anyone.

On Docker Desktop, open Preferences and follow the steps,

  1. Go to "Experimental Features" tab. Select the checkbox "Use the new Virtualisation framework". This will enable Big Sur virtualization.framework
  2. Go to Kubernetes tab and click on Enable Kubernetes. Reset Kubernetes Cluster if required.
  3. Apply and Restart.

After restarting, Kubernetes will be enabled. Open a terminal and run "kubectl version" to confirm.
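
If everything is up, the server should answer alongside the client; the expected shape is roughly (a sketch; exact versions will differ):

$ kubectl version --short
Client Version: v1.21.x
Server Version: v1.21.x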

@captainIT

I have this issue on MacBook Pro M1.

[screenshot]

v1.21.2 on M1 has the same problem.

@captainIT

I have this issue on MacBook Pro M1.
[screenshot]

v1.21.2 on M1 has the same problem.

https://github.com/maguowei/k8s-docker-desktop-for-mac resolved it.

@docker-robott
Collaborator

Issues go stale after 90 days of inactivity.
Mark the issue as fresh with /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30 days of inactivity.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

@docker-robott
Collaborator

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked

@docker docker locked and limited conversation to collaborators Dec 3, 2021