
Support for Windows 10 WSL2 #5392

Closed
feaber opened this issue Sep 18, 2019 · 36 comments · Fixed by #8368
Assignees
Labels
kind/documentation Categorizes issue or PR as related to documentation. kind/feature Categorizes issue or PR as related to a new feature. os/windows priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@feaber

feaber commented Sep 18, 2019

Is there a plan to support WSL2 on Windows 10?

Docker is already experimenting with it:
https://docs.docker.com/docker-for-windows/wsl-tech-preview/

Do you think supporting WSL2 could save us from using virtual machines?

@tstromberg
Contributor

Yes, there is a plan to support WSL2! I'm not sure if folks have experimented with --vm-driver=none and WSL2, but we definitely plan to support it using the Docker deployment model in #4772

Help wanted =)

@tstromberg tstromberg added kind/feature Categorizes issue or PR as related to a new feature. os/windows priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Sep 19, 2019
@tstromberg tstromberg changed the title Windows 10 WSL2 Support for Windows 10 WSL2 Sep 19, 2019
@afbjorklund
Collaborator

Interestingly, using WSL will not save you from running a VM. Similar to Docker-for-Windows, it will just let you run on someone else’s VM...

https://devblogs.microsoft.com/commandline/announcing-wsl-2/

WSL 2 uses the latest and greatest in virtualization technology to run its Linux kernel inside of a lightweight utility virtual machine (VM).

@LouisStAmour

LouisStAmour commented Sep 26, 2019

For folks looking for WSL 2 support via vm-driver=none, using a stable docker-ce build, it's not yet ready, at least with a stock kernel and stock Ubuntu from the store under 1909. I followed the stable instructions at https://docs.docker.com/install/linux/docker-ce/ubuntu/ after installing WSL 2 and Ubuntu, and had a few errors trying to get Docker to run.
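For reference, those instructions boil down to roughly the following (a sketch of the documented steps as of late 2019, not my verbatim shell history):

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli containerd.io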

In particular, it's worth noting the following:

WhitewaterFoundry/Pengwin#485 (comment) for a Debian iptables incompatibility

It's possible the Debian or Pengwin distros work with the suggested fix, but I had no luck switching to legacy iptables in the store-provided Ubuntu distro (commands below for reference).
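The switch itself is the standard update-alternatives toggle (hedged; this is what the linked issue suggests, and it did not help on my setup):

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy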

Switching to virtualized minikube, it looks as if there are some kernel flag incompatibilities in the stock WSL 2 kernel that ships with 1909:

microsoft/WSL#4165 (comment) notes some kernel incompatibilities requiring a new kernel compile

The noted kernel flag is now turned on by default in newer kernel builds: https://github.com/microsoft/WSL2-Linux-Kernel/blob/master/Microsoft/config-wsl#L1448. Running that config file through Docker's ./check-config script, everything under "Generally Necessary" except CONFIG_DEVPTS_MULTIPLE_INSTANCES is enabled in Microsoft's latest Linux kernel builds.
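To reproduce that check, the script takes a kernel config path as its argument (a sketch; the script lives in the moby repo, and this assumes you run it from the kernel source checkout):

curl -fsSL https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh -o check-config.sh
bash check-config.sh Microsoft/config-wsl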

So I got myself a copy of the WSL2 kernel source and followed the compile instructions (inserting the missing "apt" along the way): https://github.com/microsoft/WSL2-Linux-Kernel/blob/master/README-Microsoft.WSL2. make -j6 KCONFIG_CONFIG=Microsoft/config-wsl used my CPU cores nicely (6 of 8, in this case). The build initially failed because I'd checked out the Linux source to a folder in my Windows home folder, and Windows is not case-sensitive. After a cd ~ to the case-sensitive Linux disk and another checkout, the kernel built without issue.
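For anyone following along, the whole build amounts to roughly this (a sketch; the package list is my reading of the README, bc is an extra assumption, and the clone must live on the Linux filesystem, not under /mnt/c):

sudo apt update
sudo apt install -y build-essential flex bison libssl-dev libelf-dev bc
cd ~   # case-sensitive Linux disk, not the Windows home folder
git clone https://github.com/microsoft/WSL2-Linux-Kernel.git
cd WSL2-Linux-Kernel
make -j$(nproc) KCONFIG_CONFIG=Microsoft/config-wsl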

I found slightly more detailed (and scarier) instructions on replacing the WSL 2 kernel image here: https://github.com/kdrag0n/proton_wsl2/blob/v4.19-proton/README.md#installation, but I would also use the earlier thread as a guide: microsoft/WSL#4165 (comment). It appears that WSL since build 18945 supports a config file with a path to a custom kernel, which sounds much safer than trying to override the Windows defaults: https://docs.microsoft.com/en-us/windows/wsl/release-notes#build-18945. I'm running Windows build 18363 (1909, aka 19H2), so it sounds like I'll have to upgrade Windows to 20H1 to try that. If so, the upgraded Windows might just ship with a compatible kernel from day one.
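If I read the release notes right, on build 18945+ the custom kernel path goes into %UserProfile%\.wslconfig on the Windows side, something like the following (hedged; the path is a placeholder, and backslashes must be doubled):

[wsl2]
kernel=C:\\Users\\you\\bzImage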

It's getting late here, so I'll leave replacing the kernel, upgrading Windows, and testing minikube under it for another day, but the thread above shows it working, so I'll assume it's possible if Docker runs virtualized. I'm not sure if/when native Docker will run without issue directly in WSL 2 distros. If I had to guess, the existing Docker for Windows WSL2 preview builds probably run their existing Linux images under WSL2 rather than supporting any specific WSL2 distro; I could maybe explore this further. The lack of systemd under WSL2 is also surprisingly annoying, given how much I hated systemd when I first had to start learning it.

@chrisfowles

chrisfowles commented Oct 8, 2019

Following these instructions:
microsoft/WSL#994 (comment)
I was able to get minikube running on WSL2 with --vm-driver=none.

Having some networking difficulties with that though, so YMMV.

Edit: I also then managed to completely break my WSL install so be warned. ⚠️

Edit: ran afoul of this issue: microsoft/WSL#4364 😿

@LouisStAmour

@chrisfowles It looks like there's a collision on port 53 between WSL and Docker for WSL (or another user program on Windows listening on port 53) that causes this error: microsoft/WSL#4364 (comment)
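A quick way to see what is holding port 53 on the Windows side (hedged; run from an elevated Command Prompt, and 1234 is a placeholder for the PID the first command reports):

netstat -ano | findstr ":53"
tasklist /FI "PID eq 1234"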

Personally, I'm still running Windows preview builds, but I've temporarily stopped experimenting with Docker/K8S inside WSL. I'll wait until I either have an urgent need or things have shipped and stabilized a bit more. Right now I don't see much of a performance penalty to using K8S from Docker for Windows; it would just be nice to standardize on one copy of Linux for all the things, but it sounds like there are still a couple of rough edges being worked out. :)

@gbraad
Contributor

gbraad commented Nov 19, 2019

WSL2 and its network limitations are causing issues for us (CRC/Minishift) in considering it. There is a dedicated switch, created on the fly and usable only by WSL2, which does not let you easily communicate with other segments on the same host. This is unlike WSL1.

We are still experimenting, but so far it has not been an improvement over just running a separate, dedicated VM.

@medyagh
Member

medyagh commented Dec 16, 2019

Not entirely the same, but in the same category: #3248

@cheslijones

cheslijones commented Mar 6, 2020

Wanting to check in on WSL2 support.

Just thought I'd give it a shot but running into:

$ sudo minikube start --vm-driver=none  
😄  minikube v1.7.3 on Ubuntu 18.04
✨  Using the none driver based on user configuration
⌛  Reconfiguring existing host ...
🔄  Starting existing none VM for "minikube" ...
ℹ️   OS release is Ubuntu 18.04.2 LTS

💣  Failed to enable container runtime
❌  Error: [NONE_DOCKER_EXIT_1] enable docker.: sudo systemctl start docker: exit status 1
stdout:

stderr:
System has not been booted with systemd as init system (PID 1). Can't operate.

💡  Suggestion: Either systemctl is not installed, or Docker is broken. Run 'sudo systemctl start docker' and 'journalctl -u docker'
📘  Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/none
⁉️   Related issues:
    ▪ https://github.com/kubernetes/minikube/issues/4498

$ sudo systemctl start docker          
System has not been booted with systemd as init system (PID 1). Can't operate.

Docker is running with sudo service docker start though and docker run hello-world works fine.

$ grep -E --color 'vmx|svm' /proc/cpuinfo returns nothing.

I imagine WSL2 and minikube just aren't there yet, but thought I'd ask.

EDIT: Never mind... I ran it without --vm-driver=none and it automatically picked the docker driver. Runs fine.

EDIT: Never mind... Worked for one day and now it is back to not working:

 $ minikube start   
😄  minikube v1.8.1 on Ubuntu 18.04
✨  Automatically selected the docker driver
🔥  Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=12834MB (51338MB available) ...

💣  Unable to start VM. Please investigate and run 'minikube delete' if possible: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=12834mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.7@sha256:a6f288de0e5863cdeab711fa6bafa38ee7d8d285ca14216ecf84fcfb07c7d176] output: e73d46e76ef388c40b284dbf3c5e67c5c09300db7aba97d36985acbc77c7186a
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
: exit status 125

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

@rofangaci

Have the same issue as above. Still can't use minikube on WSL on win10.

@tstromberg
Contributor

Huzzah - WSL2 support works at head, with some caveats!

I did some experimentation today with minikube on WSL2 (Ubuntu), specifically with --driver=docker. It gets very close. Once I got docker working, minikube gave this error, mentioned earlier in this thread:

cgroups: cannot find cgroup mount destination: unknown.

I found a solution for this issue at microsoft/WSL#4189 (comment)

sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
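For reference, a quick way to confirm the mount took (hedged):

mount -t cgroup | grep systemd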

That got me further, but v1.8.2 then crashed with a bogus "machine not found" error.

I then upgraded to minikube v1.9.0 (master), and it all works just fine:

t@voimala:~/minikube$ ./out/minikube kubectl -- get po -A
    > kubectl.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubectl: 41.99 MiB / 41.99 MiB [----------------] 100.00% 6.36 MiB p/s 7s
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-2r5df           0/1     Running   0          14s
kube-system   coredns-66bff467f8-g2492           1/1     Running   0          14s
kube-system   etcd-minikube                      1/1     Running   0          25s
kube-system   kindnet-njh6d                      1/1     Running   0          14s
kube-system   kube-apiserver-minikube            1/1     Running   0          25s
kube-system   kube-controller-manager-minikube   1/1     Running   0          25s
kube-system   kube-proxy-8vdrw                   1/1     Running   0          14s
kube-system   kube-scheduler-minikube            1/1     Running   0          25s
kube-system   storage-provisioner                1/1     Running   0          29s

We'll definitely need to document this.

@tstromberg tstromberg self-assigned this Mar 24, 2020
@tstromberg tstromberg added kind/documentation Categorizes issue or PR as related to documentation. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Mar 24, 2020
@tstromberg
Contributor

tstromberg commented Mar 24, 2020

Proof:

[screenshot: Annotation 2020-03-23 225806]

@dexhunter

@tstromberg Hi, thanks for the report. Just wondering: which version of minikube should I download to get it working?

@cheslijones

cheslijones commented Mar 31, 2020

I'm curious what others' experience using Docker, kubectl, and minikube in WSL2 has been. I personally had to throw in the towel after a few weeks and go back to my Ubuntu partition, for a couple of reasons:

  1. Would get this error every single day I turned my machine on and occasionally throughout the day:

    docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.

    I'd turn my machine on, start Docker up in WSL2, and run docker run hello-world to see if it felt like working today. Most of the time I'd get this error; sometimes I wouldn't. When I didn't, minikube start would fail, and running docker run hello-world would then produce the error. The fix is to do the following:

    sudo mkdir /sys/fs/cgroup/systemd
    sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
    

    Then it was a 50/50 chance whether I'd have to minikube delete and start a new one.

    Then I could finally start everything and get to work. I had to do it every single day, sometimes multiple times a day. It got old.

  2. Incredibly slow. Using Skaffold to spin up my dev cluster, WSL2 would take upwards of a minute to move updates into images. Ubuntu 19.10 and macOS 10.15 would take ~10 seconds.

If others are not experiencing this and are able to use this stack in WSL2 without issues, I'd like to hear what you are doing that I might be doing wrong. Would love to use WSL2, but in its current state, it just doesn't work for me.

EDIT
Thought I'd spend a few hours with v1.9.0. Still running into pretty much the same issues. Had to do:

sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
minikube delete
minikube start

Probably a dozen times in that time frame.
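One way to take the sting out of repeating that is to make the workaround idempotent, e.g. as a small script you can run blindly (a hedged sketch, not something from this thread):

# re-create the systemd cgroup mount if WSL2 dropped it (workaround from above)
if [ ! -d /sys/fs/cgroup/systemd ]; then
  sudo mkdir -p /sys/fs/cgroup/systemd
fi
# only mount if it is not already mounted
mountpoint -q /sys/fs/cgroup/systemd || sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd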

@tstromberg
Contributor

@eox-dev - thanks for the update. I only had a chance to play around for a couple of minutes. Once a cluster is started, does it work until the host machine reboots? If not, what kind of errors are you seeing?

@cheslijones

cheslijones commented Apr 8, 2020

Sorry just getting back to this.

For a while, about a week and a half, it was until the host machine rebooted. Then it became several times throughout the day.

  • I've just done a clean install of WSL2. Latest stable releases of docker, kubectl, and minikube v1.9.2.
  • Docker is up and running, docker run hello-world pulls fine.
  • Using --driver=docker by default, since that is the only driver that works in WSL2. At least, --driver=none has never once worked for me.

First attempt at minikube start:

$ minikube start
😄  minikube v1.9.2 on Ubuntu 18.04
✨  Automatically selected the docker driver
👍  Starting control plane node m01 in cluster minikube
🚜  Pulling base image ...
🔥  Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=12800MB (51338MB available) ...
🤦  StartHost failed, but will try again: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=12800mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 901a3c242ecc4ab541a993136473e3ff7b899a8b3ce7ff0415942360c91c745a
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
: exit status 125
🤷  docker "minikube" container is missing, will recreate.
🔥  Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=12800MB (51338MB available) ...

❌  [DOCKER_WSL2_MOUNT] Failed to start docker container. "minikube start" may fix it. recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=12800mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 9077af54f007fdbfa75443729d3f2f2cd77953195e21bb866846141bb6292f96
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
: exit status 125
💡  Suggestion: Run: 'sudo mkdir /sys/fs/cgroup/systemd && sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd'
📘  Documentation: https://github.com/microsoft/WSL/issues/4189
⁉️   Related issue: https://github.com/kubernetes/minikube/issues/5392

$ minikube status
E0407 21:22:25.361884   12177 status.go:114] status error: host: state: unknown state
E0407 21:22:25.362337   12177 status.go:117] The "minikube" host does not exist!
m01
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

Check docker run hello-world and now I get:

docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
ERRO[0001] error waiting for container: context canceled 

Fix it with the following:

sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

Docker is back up and running.

Second attempt at minikube start:

$ minikube start
😄  minikube v1.9.2 on Ubuntu 18.04
✨  Using the docker driver based on existing profile
👍  Starting control plane node m01 in cluster minikube
🚜  Pulling base image ...
🤷  docker "minikube" container is missing, will recreate.
🔥  Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=12800MB (51338MB available) ...
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
❗  This container is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🌟  Enabling addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

Success!

skaffold dev --port-forward to spin up the cluster. I just get a bunch of these errors for pretty much every build artifact:

FATA[0025] exiting dev mode because first build failed: build failed: building [postgres]: build artifact: unable to stream build output: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 172.17.208.1:53: read udp 172.17.0.2:53045->172.17.208.1:53: i/o timeout 

I threw in the towel at that point. I found some similar issues that say it could be /etc/resolv.conf, which needs to be updated to nameserver 8.8.8.8 in the minikube VM.
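Since the node is just a Docker container with the docker driver, that fix can presumably be applied with something like this (an untested sketch; "minikube" is the container name from docker ps):

docker exec minikube sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'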

I tried minikube start and minikube stop, then minikube delete and minikube start, to see if the same issues from my first attempt came up. They did not, so maybe that issue has been resolved. It would help if I could actually spin the cluster up to really test it, though.

Also, minikube start takes about five minutes whether it is creating a new VM or not. Kind of time-consuming compared to how fast my Ubuntu partition is.

@svrc

svrc commented Apr 23, 2020

Windows 10 Home insider edition build 19041.208,
Docker Desktop 2.3.0 edge (which has WSL2 integration enabled)
WSL2 distro is Ubuntu 18.04

Looks like everything is coming up except the pod/service network and the external NAT to the API server.

stu@DESKTOP-48R13CC:~$ minikube start --driver=docker --memory=12g --kubernetes-version=1.17.5
😄  minikube v1.9.2 on Ubuntu 18.04
✨  Using the docker driver based on user configuration
👍  Starting control plane node m01 in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.17.5 preload ...
    > preloaded-images-k8s-v2-v1.17.5-docker-overlay2-amd64.tar.lz4: 522.19 MiB
🔥  Creating Kubernetes in docker container with (CPUs=2) (4 available), Memory=12288MB (51400MB available) ...
🐳  Preparing Kubernetes v1.17.5 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
🌟  Enabling addons: default-storageclass, storage-provisioner
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get https://172.17.0.2:8443/apis/storage.k8s.io/v1/storageclasses: dial tcp 172.17.0.2:8443: i/o timeout]

💣  startup failed: Wait failed: wait for healthy API server: apiserver healthz never reported healthy

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

Some minikube logs

container status

CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID
81b2368b449f1       4689081edb103                                                                              3 minutes ago       Running             storage-provisioner       0                   699e43042c886
83e2969661984       kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555   4 minutes ago       Running             kindnet-cni               0                   e3a3c63b6a532
8bc9f4c18468f       70f311871ae12                                                                              4 minutes ago       Running             coredns                   0                   50b16ed89843b
cc7f3f67e9d62       70f311871ae12                                                                              4 minutes ago       Running             coredns                   0                   a7170f5318c1b
631e6fbefcce9       e13db435247de                                                                              4 minutes ago       Running             kube-proxy                0                   c73bd71cda482
a866f9d56cfef       303ce5db0e90d                                                                              4 minutes ago       Running             etcd                      0                   b2d59661b2dd5
7f69a372eea65       f648efaff9663                                                                              4 minutes ago       Running             kube-scheduler            0                   e88ec0e81b05a
515fef8fd5382       fe3d691efbf31                                                                              4 minutes ago       Running             kube-controller-manager   0                   ccf60cf7094ea
ecd94f9e70792       f640481f6db3c                                                                              4 minutes ago       Running             kube-apiserver            0                   988f123284da9

docker

Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.800788000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.800806100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.800824300Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.800880400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.800904300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.800924200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.800942000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.801255200Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.801321300Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.801374700Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.801390800Z" level=info msg="containerd successfully booted in 0.055339s"
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.804669200Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007148f0, READY" module=grpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.808185800Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.808264900Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.808297800Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] }" module=grpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.808312800Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.808371900Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000847460, CONNECTING" module=grpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.812989500Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000847460, READY" module=grpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.814097200Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.814161200Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.814231700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] }" module=grpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.814250500Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.814357700Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007a4590, CONNECTING" module=grpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.814876100Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007a4590, READY" module=grpc
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.817883100Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.832853300Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.832890200Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.832902600Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.832918900Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.832929100Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.832939300Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.833399700Z" level=info msg="Loading containers: start."
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.837261800Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.84-microsoft-standard/modules.dep.bin'\nmodprobe: WARNING: Module bridge not found in directory /lib/modules/4.19.84-microsoft-standard\nmodprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.84-microsoft-standard/modules.dep.bin'\nmodprobe: WARNING: Module br_netfilter not found in directory /lib/modules/4.19.84-microsoft-standard\n, error: exit status 1"
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.839319100Z" level=warning msg="Running modprobe nf_nat failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.84-microsoft-standard/modules.dep.bin'\nmodprobe: WARNING: Module nf_nat not found in directory /lib/modules/4.19.84-microsoft-standard`, error: exit status 1"
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.841134200Z" level=warning msg="Running modprobe xt_conntrack failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.84-microsoft-standard/modules.dep.bin'\nmodprobe: WARNING: Module xt_conntrack not found in directory /lib/modules/4.19.84-microsoft-standard`, error: exit status 1"
Apr 23 17:59:18 minikube dockerd[487]: time="2020-04-23T17:59:18.966796200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 23 17:59:19 minikube dockerd[487]: time="2020-04-23T17:59:19.023295600Z" level=info msg="Loading containers: done."
Apr 23 17:59:19 minikube dockerd[487]: time="2020-04-23T17:59:19.044963300Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Apr 23 17:59:19 minikube dockerd[487]: time="2020-04-23T17:59:19.045086400Z" level=info msg="Daemon has completed initialization"
Apr 23 17:59:19 minikube dockerd[487]: time="2020-04-23T17:59:19.073282000Z" level=info msg="API listen on /var/run/docker.sock"
Apr 23 17:59:19 minikube dockerd[487]: time="2020-04-23T17:59:19.073383500Z" level=info msg="API listen on [::]:2376"
Apr 23 17:59:19 minikube systemd[1]: Started Docker Application Container Engine.
Apr 23 17:59:35 minikube dockerd[487]: time="2020-04-23T17:59:35.853149700Z" level=info msg="shim containerd-shim started" address=/containerd-shim/706a1382c11a3b16b398bbece7e2aed74e00fba38e79170f35ad8f1e988b1321.sock debug=false pid=1655
Apr 23 17:59:35 minikube dockerd[487]: time="2020-04-23T17:59:35.868055700Z" level=info msg="shim containerd-shim started" address=/containerd-shim/271acde0c05840467da53504a5cff2ef861cfa9aa5956a91dc59fef8de3ce6ec.sock debug=false pid=1672
Apr 23 17:59:35 minikube dockerd[487]: time="2020-04-23T17:59:35.895606800Z" level=info msg="shim containerd-shim started" address=/containerd-shim/c714b8ccf949c9aff95db01b56ce778d6b671db1f9c1599dd2ac1ce98b45755e.sock debug=false pid=1700
Apr 23 17:59:35 minikube dockerd[487]: time="2020-04-23T17:59:35.937413600Z" level=info msg="shim containerd-shim started" address=/containerd-shim/751f413427c2d2a27d938715b45346f3e2b20756b3b66df601b1308426f2bcd3.sock debug=false pid=1726
Apr 23 17:59:36 minikube dockerd[487]: time="2020-04-23T17:59:36.181383200Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1324296ff207a981eaaea798544729a998621cead0cbd4db950cca52b3ab3ae7.sock debug=false pid=1844
Apr 23 17:59:36 minikube dockerd[487]: time="2020-04-23T17:59:36.258164300Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1ec1467bc7cb55d0aa3adc419d851f9d27630519a272aea4cd47addb8778b722.sock debug=false pid=1883
Apr 23 17:59:36 minikube dockerd[487]: time="2020-04-23T17:59:36.269464600Z" level=info msg="shim containerd-shim started" address=/containerd-shim/27f18f9f83998633b6e00879870a688fc051ad311497ff0f0c5f1f3a6fb8e775.sock debug=false pid=1889
Apr 23 17:59:36 minikube dockerd[487]: time="2020-04-23T17:59:36.287560800Z" level=info msg="shim containerd-shim started" address=/containerd-shim/c5b6d868661222438bdfeb2485ff44c811e1343eda955ca8f519a4b3cfcf6b06.sock debug=false pid=1905
Apr 23 18:00:03 minikube dockerd[487]: time="2020-04-23T18:00:03.459128400Z" level=info msg="shim containerd-shim started" address=/containerd-shim/9782729114d2164926a8f2ede6e420754f823d03f4d8b3b7dc93ba51080cf47b.sock debug=false pid=2701
Apr 23 18:00:03 minikube dockerd[487]: time="2020-04-23T18:00:03.604152700Z" level=info msg="shim containerd-shim started" address=/containerd-shim/36ded5a9ca183ae3391f136626c958aacd8ba9e0f18e14c1115344424a266b5f.sock debug=false pid=2734
Apr 23 18:00:03 minikube dockerd[487]: time="2020-04-23T18:00:03.632058900Z" level=info msg="shim containerd-shim started" address=/containerd-shim/3a0ae9732354d4e14b5770a937f43f761b68c8479095582aad7e8516fb5ecf2f.sock debug=false pid=2749
Apr 23 18:00:03 minikube dockerd[487]: time="2020-04-23T18:00:03.656004300Z" level=info msg="shim containerd-shim started" address=/containerd-shim/673b68f85536f4c554227614156b6f334224e5479f26667bca3930a6dca36e52.sock debug=false pid=2775
Apr 23 18:00:03 minikube dockerd[487]: time="2020-04-23T18:00:03.976204600Z" level=info msg="shim containerd-shim started" address=/containerd-shim/0dff05fd2c7f8ce3ac3f73f9dfaaaf2a854680eb751cc278fc90d5990d07bb92.sock debug=false pid=2864
Apr 23 18:00:03 minikube dockerd[487]: time="2020-04-23T18:00:03.993892200Z" level=info msg="shim containerd-shim started" address=/containerd-shim/ff750f32681ecb0b2009f160be85a55553a6cd236aba0993f8480c24b08ac762.sock debug=false pid=2880
Apr 23 18:00:04 minikube dockerd[487]: time="2020-04-23T18:00:04.313931700Z" level=info msg="shim containerd-shim started" address=/containerd-shim/d9e48adf3a11cf7021ae105b1a5319a117096e69f1019cd4fa4518d945ff8310.sock debug=false pid=2966
Apr 23 18:00:08 minikube dockerd[487]: time="2020-04-23T18:00:08.251999900Z" level=info msg="shim containerd-shim started" address=/containerd-shim/f82b62866bdcb6c83e264bc17da4daafa34fbb9b9fde87494fe40d5f6af6734d.sock debug=false pid=3093
Apr 23 18:00:20 minikube dockerd[487]: time="2020-04-23T18:00:20.758361800Z" level=info msg="shim containerd-shim started" address=/containerd-shim/b4c4811232f2298036849dc40906fedf3210016d5ad1bbb691b0320236e3f658.sock debug=false pid=3204
Apr 23 18:00:20 minikube dockerd[487]: time="2020-04-23T18:00:20.901276400Z" level=info msg="shim containerd-shim started" address=/containerd-shim/6c1298612e541b4532f199b719cc652d2e88cc8bd41edbf55952904248586306.sock debug=false pid=3235

kubelet

Apr 23 18:00:03 minikube kubelet[2322]: W0423 18:00:03.850148    2322 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-56m65 through plugin: invalid network status for
Apr 23 18:00:04 minikube kubelet[2322]: W0423 18:00:04.090824    2322 pod_container_deletor.go:75] Container "50b16ed89843b7bb49c7cfe6c038c46cdf66048bf493ec81fa6340d427428068" not found in pod's containers
Apr 23 18:00:04 minikube kubelet[2322]: W0423 18:00:04.091791    2322 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-f4j4t through plugin: invalid network status for
Apr 23 18:00:04 minikube kubelet[2322]: W0423 18:00:04.096454    2322 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-56m65 through plugin: invalid network status for
Apr 23 18:00:04 minikube kubelet[2322]: W0423 18:00:04.235300    2322 pod_container_deletor.go:75] Container "a7170f5318c1b266e089ac119bd142ea6ecc8ec09b33cb2af734975c5ee5f9fa" not found in pod's containers
Apr 23 18:00:05 minikube kubelet[2322]: W0423 18:00:05.252068    2322 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-f4j4t through plugin: invalid network status for
Apr 23 18:00:05 minikube kubelet[2322]: W0423 18:00:05.263128    2322 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-56m65 through plugin: invalid network status for
Apr 23 18:00:20 minikube kubelet[2322]: I0423 18:00:20.391524    2322 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/135f631c-a084-4199-88c7-2ad8fb627457-tmp") pod "storage-provisioner" (UID: "135f631c-a084-4199-88c7-2ad8fb627457")
Apr 23 18:00:20 minikube kubelet[2322]: I0423 18:00:20.391604    2322 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-tjckp" (UniqueName: "kubernetes.io/secret/135f631c-a084-4199-88c7-2ad8fb627457-storage-provisioner-token-tjckp") pod "storage-provisioner" (UID: "135f631c-a084-4199-88c7-2ad8fb627457")

kube-apiserver

==> kube-apiserver [ecd94f9e7079] <==
W0423 17:59:40.278674       1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0423 17:59:40.291015       1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0423 17:59:40.313418       1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0423 17:59:40.316950       1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0423 17:59:40.336804       1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0423 17:59:40.368051       1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0423 17:59:40.368132       1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0423 17:59:40.378630       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0423 17:59:40.378698       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0423 17:59:40.380724       1 client.go:361] parsed scheme: "endpoint"
I0423 17:59:40.380800       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0423 17:59:40.391761       1 client.go:361] parsed scheme: "endpoint"
I0423 17:59:40.391831       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0423 17:59:43.086692       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0423 17:59:43.086789       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0423 17:59:43.087022       1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0423 17:59:43.087423       1 secure_serving.go:178] Serving securely on [::]:8443
I0423 17:59:43.087508       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0423 17:59:43.087601       1 controller.go:81] Starting OpenAPI AggregationController
I0423 17:59:43.087571       1 available_controller.go:386] Starting AvailableConditionController
I0423 17:59:43.087711       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0423 17:59:43.087775       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0423 17:59:43.087788       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0423 17:59:43.087815       1 autoregister_controller.go:140] Starting autoregister controller
I0423 17:59:43.087829       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0423 17:59:43.087835       1 crd_finalizer.go:263] Starting CRDFinalizer
I0423 17:59:43.087914       1 controller.go:85] Starting OpenAPI controller
I0423 17:59:43.087939       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0423 17:59:43.087953       1 naming_controller.go:288] Starting NamingConditionController
I0423 17:59:43.087973       1 establishing_controller.go:73] Starting EstablishingController
I0423 17:59:43.087995       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0423 17:59:43.088017       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0423 17:59:43.088900       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0423 17:59:43.088951       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I0423 17:59:43.106433       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0423 17:59:43.106476       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0423 17:59:43.106496       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0423 17:59:43.106504       1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
E0423 17:59:43.138086       1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg:
I0423 17:59:43.232071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0423 17:59:43.232480       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0423 17:59:43.232573       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
I0423 17:59:43.232801       1 cache.go:39] Caches are synced for autoregister controller
I0423 17:59:43.233540       1 shared_informer.go:204] Caches are synced for crd-autoregister
I0423 17:59:44.086893       1 controller.go:107] OpenAPI AggregationController: Processing item
I0423 17:59:44.086963       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0423 17:59:44.086979       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0423 17:59:44.093251       1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0423 17:59:44.100554       1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0423 17:59:44.100634       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0423 17:59:44.672667       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0423 17:59:44.725666       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0423 17:59:44.880868       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
I0423 17:59:44.881617       1 controller.go:606] quota admission added evaluator for: endpoints
I0423 17:59:45.342692       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0423 17:59:46.451876       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0423 17:59:46.471497       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0423 17:59:46.698610       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0423 18:00:02.795373       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0423 18:00:02.899361       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps

describe nodes

Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=93af9c1e43cab9618e301bc9fa720c63d5efa393
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_04_23T11_59_47_0700
                    minikube.k8s.io/version=v1.9.2
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 23 Apr 2020 17:59:43 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Thu, 23 Apr 2020 18:04:07 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 23 Apr 2020 18:00:17 +0000   Thu, 23 Apr 2020 17:59:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 23 Apr 2020 18:00:17 +0000   Thu, 23 Apr 2020 17:59:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 23 Apr 2020 18:00:17 +0000   Thu, 23 Apr 2020 17:59:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 23 Apr 2020 18:00:17 +0000   Thu, 23 Apr 2020 17:59:57 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.17.0.2
  Hostname:    minikube
Capacity:
  cpu:                4
  ephemeral-storage:  263174212Ki
  hugepages-2Mi:      0
  memory:             52633720Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  263174212Ki
  hugepages-2Mi:      0
  memory:             52633720Ki
  pods:               110
System Info:
  Machine ID:                 27056dc5e7eb4e02b9085670b952028e
  System UUID:                27056dc5e7eb4e02b9085670b952028e
  Boot ID:                    a5355932-27fe-44e9-9254-8ae3bc387124
  Kernel Version:             4.19.84-microsoft-standard
  OS Image:                   Ubuntu 19.10
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.2
  Kubelet Version:            v1.17.5
  Kube-Proxy Version:         v1.17.5
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-6955765f44-56m65            100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m9s
  kube-system                 coredns-6955765f44-f4j4t            100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m9s
  kube-system                 etcd-minikube                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
  kube-system                 kindnet-fcwhx                       100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)      4m9s
  kube-system                 kube-apiserver-minikube             250m (6%)     0 (0%)      0 (0%)           0 (0%)         4m24s
  kube-system                 kube-controller-manager-minikube    200m (5%)     0 (0%)      0 (0%)           0 (0%)         4m24s
  kube-system                 kube-proxy-j8mzv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
  kube-system                 kube-scheduler-minikube             100m (2%)     0 (0%)      0 (0%)           0 (0%)         4m24s
  kube-system                 storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (21%)  100m (2%)
  memory             190Mi (0%)  390Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From                  Message
  ----    ------                   ----                   ----                  -------
  Normal  NodeHasSufficientMemory  4m36s (x5 over 4m37s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m36s (x5 over 4m37s)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m36s (x5 over 4m37s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 4m24s                  kubelet, minikube     Starting kubelet.
  Normal  NodeHasSufficientMemory  4m24s                  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m24s                  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m24s                  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeNotReady             4m24s                  kubelet, minikube     Node minikube status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  4m24s                  kubelet, minikube     Updated Node Allocatable limit across pods
  Normal  NodeReady                4m14s                  kubelet, minikube     Node minikube status is now: NodeReady
  Normal  Starting                 4m7s                   kube-proxy, minikube  Starting kube-proxy.

@nnashok

nnashok commented May 4, 2020

I also ran into the same issue running minikube --driver=docker in an Ubuntu shell in WSL:

I0503 16:45:14.426445   12987 api_server.go:192] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0503 16:45:35.428540   12987 api_server.go:202] stopped: https://172.17.0.3:8443/healthz: Get https://172.17.0.3:8443/healthz: dial tcp 172.17.0.3:8443: connect: connection refused

Upon inspecting the minikube docker container, I do see that using the exposed port (127.0.0.1:32834) works from the Ubuntu shell:

$ docker inspect minikube | jq '.[].NetworkSettings.Ports'
{
  "22/tcp": [
    {
      "HostIp": "127.0.0.1",
      "HostPort": "32837"
    }
  ],
  "2376/tcp": [
    {
      "HostIp": "127.0.0.1",
      "HostPort": "32836"
    }
  ],
  "5000/tcp": [
    {
      "HostIp": "127.0.0.1",
      "HostPort": "32835"
    }
  ],
  "8443/tcp": [
    {
      "HostIp": "127.0.0.1",
      "HostPort": "32834"
    }
  ]
}

$ curl -v http://127.0.0.1:32834
* Rebuilt URL to: http://127.0.0.1:32834/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 32834 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:32834
> User-Agent: curl/7.58.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 400 Bad Request
<
Client sent an HTTP request to an HTTPS server.
* Closing connection 0

Comparing this to the URL used by minikube start --driver=docker in a Command prompt on Windows on the same machine, I see it using the correct URL:

I0502 17:52:10.401714   16288 api_server.go:184] Checking apiserver healthz at https://127.0.0.1:32795/healthz ...

and as a result, it works without any issues.
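For what it's worth, a sanity check from the Ubuntu shell against the published port would be (hedged; -k skips certificate verification, so expect either an "ok" from /healthz or an authorization error, rather than a connection refusal):

curl -k https://127.0.0.1:32834/healthz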

@nezorflame
Contributor

nezorflame commented May 17, 2020

Windows 10 Pro: Insider build 19041.264
WSL instance: Ubuntu 20.04
Docker version: 19.03.8 (WSL2 enabled)

docker version
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b7f0
 Built:             Wed Mar 11 01:25:46 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:29:16 2020
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Got the same error as @svrc-pivotal with some additional info:

~ minikube start --disk-size 30g --memory 3072 --cpus 2 --kubernetes-version v1.14.0 --extra-config=kube-proxy.IPTables.SyncPeriod.Duration=5000000000 --extra-config=kube-proxy.IPTables.MinSyncPeriod.Duration=3000000000 --vm-driver=docker

...

callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://172.17.0.2:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 172.17.0.2:8443: i/o timeout]
* Enabled addons: default-storageclass, storage-provisioner
*
X failed to start node: startup failed: Wait failed: node pressure: list nodes: Get "https://172.17.0.2:8443/api/v1/nodes": dial tcp 172.17.0.2:8443: i/o timeout

@ReiJr

ReiJr commented May 24, 2020

I had already been trying to get minikube running on WSL 2 for a few days. Like my colleagues @nezorflame and @svrc-pivotal, I was using the docker driver to start minikube, and the same error was happening to me. So I started looking for driver alternatives, and in the minikube documentation I found: https://minikube.sigs.k8s.io/docs/drivers/podman/

It is an experimental driver. I was hopeless about finding a solution, but I decided to test it, and it works!
[screenshot: minikube]

@afbjorklund
Collaborator

@ReiJr: that is excellent news! It was actually intended for RHEL; great that it works on "Windows".
We aim to make the podman driver feature-compatible with the docker driver, using the same KIC.

Currently we are test-driving a new kicbase image that updates the underlying OS to Ubuntu 20.04
Feel free to try it on WSL, if you want: --base-image afbjorklund/kicbase:v0.0.10-snapshot

Issue #8250

Hopefully 1.11
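Spelled out, the two things to try would be something like this (hedged sketch):

# experimental podman driver
minikube start --driver=podman
# or the docker driver with the snapshot base image
minikube start --driver=docker --base-image afbjorklund/kicbase:v0.0.10-snapshot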

@thehappycoder

Seems that microk8s doesn't have this problem: https://ubuntu.com/blog/kubernetes-on-windows-with-microk8s-and-wsl-2

@antoineco

antoineco commented May 25, 2020

@thehappycoder I've been a happy user of microk8s myself, but it's not the best experience on WSL2. microk8s runs as a Snap, which means it requires systemd, a fairly heavy dependency that isn't suitable for the init-less model of WSL2. It works, but the usability is not the smoothest.

@afbjorklund
Collaborator

Minikube also defaults to systemd, mostly because kubeadm does... There is some work to make it run on distributions without systemd, but those are getting more and more rare now.

The hacks to get systemd running in docker-on-docker (and crio-on-podman) are especially awful, and we regularly run into bugs with systemd that we have to patch downstream.

@antoineco

antoineco commented May 25, 2020

There is some work to make it work on distributions without systemd, but those are getting more and more rare now.

In the case of WSL2 it's not a matter of distro, WSL2 instances simply don't run systemd by design, no matter the distro.

What I'm trying to say is that users shouldn't be expected to hack WSL2 themselves in order to be able to run microk8s or minikube.

I'm not sure how the WSL2 driver for minikube handles this. Docker for Windows does it by running Kubernetes in a separate WSL2 instance and, because all instances share the same network interface under the hood, kubectl remains usable on localhost without touching the user's instance. It's an amazing user experience, really.
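
For illustration: with Docker Desktop's WSL2 backend enabled, listing the instances from the Windows side typically shows the extra distros Docker manages alongside the user's own, e.g.:

wsl -l -v
  NAME                   STATE           VERSION
  Ubuntu                 Running         2
  docker-desktop         Running         2
  docker-desktop-data    Running         2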

@afbjorklund
Collaborator

afbjorklund commented May 25, 2020

In the case of WSL2 it's not a matter of distro: WSL2 instances simply don't run systemd by design, no matter the distro.

The same is true for docker containers, so similar tricks are needed to enable it. We "borrow" those from KIND, but still: eeww

COPY entrypoint /usr/local/bin/entrypoint

# systemd exits on SIGRTMIN+3, not SIGTERM (which re-executes it)
# https://bugzilla.redhat.com/show_bug.cgi?id=1201657
STOPSIGNAL SIGRTMIN+3

ENTRYPOINT [ "/usr/local/bin/entrypoint", "/sbin/init" ]

(the entrypoint has all kinds of hacks)

https://github.com/kubernetes-sigs/kind/blob/master/images/base/files/usr/local/bin/entrypoint

@afbjorklund
Collaborator

I guess I will have to watch the video to see why you would want to run an LXD system container inside a Windows Server VM, rather than just spawning an Ubuntu VM and installing Kubernetes on it...

@antoineco

antoineco commented May 25, 2020

Because Microsoft did a great job of integrating WSL2 into their OS. You can share files seamlessly between Windows and WSL2 instances, access services running inside WSL2 on localhost on the Windows side, and call Windows executables from WSL2 (open foo.html opens your browser in Windows, not Linux; code foo.go opens VSCode in Windows).

It's all about the tight integration and the user experience. If you don't care about that, a VM is perfectly fine, but WSL2 offers much more than that.


But I digress; this thread is about minikube's WSL2 integration. @afbjorklund, when you mentioned

The same is true for docker containers

did you mean the current WSL2 backend is just a wrapper around kind?

@afbjorklund
Collaborator

afbjorklund commented May 25, 2020

It's all about the tight integration and the user experience. If you don't care about that, a VM is perfectly fine, but WSL2 offers much more than that.

Ok, got it. Sounds similar to the Docker Desktop user experience.

did you mean the current WSL2 backend is just a wrapper around kind?

I don't think that minikube has a WSL2 backend (yet?)

This issue is talking about two very different tracks:

  1. Using the "none" driver, which basically runs kubeadm right on the WSL 2 VM.
    Apparently there are some quirks to iron out before docker runs out-of-the-box on this platform.

  2. Using the "docker" driver, which indeed looks a lot like kind (docker-in-docker, etc.).
    It's not technically a wrapper; we just borrowed the same base image for the "kicbase".

Then there are other ways, like running "remotely":

  1. The traditional way: running a VM in either virtualbox or hyper-v.
  2. The new way: setting DOCKER_HOST to point at the Docker VM.
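
As a rough sketch of the second remote option, assuming Docker Desktop's "Expose daemon on tcp://localhost:2375 without TLS" setting is enabled (port and setting name may differ per version):

export DOCKER_HOST=tcp://localhost:2375
docker version                  # should now talk to the Docker Desktop daemon
minikube start --driver=docker  # minikube then targets that daemon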

@antoineco

Sounds like a "proper" WSL2 backend could borrow a lot of great ideas from Docker for Windows then :)

  • minikube start could import the minikube ISO as a WSL2 instance and run it (meaning it could run systemd outside of the user's dev instance 🎉)
  • minikube start creates a mount to minikube's TLS certs inside the user's instance (/dev/sdX -> /mnt/wsl/minikube/tls).
  • minikube start ensures the user's minikube kubeconfig profile is up-to-date.
  • minikube start injects 127.0.0.1 minikube.local into the user's /etc/hosts for TLS (see the sketch after this list).
  • minikube stop just stops the instance.
  • minikube delete reverts all the above.
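
A minimal sketch of what that /etc/hosts step could look like on the user's instance (purely illustrative, not current minikube behaviour):

# Append the entry only if it is not already present
grep -q 'minikube.local' /etc/hosts || echo '127.0.0.1 minikube.local' | sudo tee -a /etc/hosts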

@afbjorklund
Collaborator

@antoineco: we are trying to convert the buildroot iso image to a docker tar image (#6942).

Maybe some of that would be applicable to WSL2 as well, since it's not a traditional hypervisor.
Basically what we are doing there is importing the rootfs, which seems to be the same here?

Finally, we can import the rootfs as a WSL2 custom distro:
wsl --import mk8s C:\wsldistros\mk8s C:\wslsources\focal.tar.gz --version 2

But there's all kinds of fun stuff involved, like minikube-automount using the first available disk.
And we have currently hardcoded that buildroot means hypervisor and ubuntu means container...
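
For reference, one hedged way to produce such a tarball from a kicbase-style image (image name and tag here are assumptions, not taken from this thread):

# Create a stopped container from the image and export its root filesystem
docker create --name kic gcr.io/k8s-minikube/kicbase:v0.0.10
docker export kic -o focal.tar.gz
docker rm kic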

@antoineco

Right, even better if it's already a tgz. I've never looked into the internals, and I believe you when you say it will most likely not work out of the box. Maybe some of the ideas exchanged here will become useful if someone ever gets their hands dirty with a true WSL2 backend.

@cheslijones

I had already been trying to run minikube on WSL 2 for a few days. Like my colleagues @nezorflame and @svrc-pivotal, I was using the docker driver to start minikube, and the same error was happening to me. So I started looking for driver alternatives, and in the minikube documentation I found: https://minikube.sigs.k8s.io/docs/drivers/podman/

It's an experimental driver. I had little hope of finding a solution, but I decided to test it, and it works!

Curious how you are connecting to the cluster from the browser.
I'm running into the same issue as I do with --driver=docker: the browser just spins until I get a "Hmmm... can't reach this page."
Whereas --driver=docker gives 172.17.0.2, which I can't reach from the browser, --driver=podman gives me 10.88.0.2, which results in the same error when trying to access my ingress routes from the browser. minikube dashboard shows my deployments and ingress being deployed, but I can't access them.
So what did you have to do?

@nezorflame
Contributor

nezorflame commented Jun 4, 2020

I've just tried the docker driver again with my setup and the new binary including the fix from #8368 (see #7420 (comment) for the binary link), and minikube start worked!

❯ chmod +x ./minikube-linux-amd64 && sudo rm /usr/local/bin/minikube && mv minikube-linux-amd64 minikube && sudo install minikube /usr/local/bin/

❯ minikube start --vm-driver=docker --disk-size 30g --memory 3072 --cpus 2 --kubernetes-version v1.14.10 --extra-config=kube-proxy.IPTables.SyncPeriod.Duration=5000000000 --extra-config=kube-proxy.IPTables.MinSyncPeriod.Duration=3000000000

😄  minikube v1.11.0 on Ubuntu 20.04
✨  Using the docker driver based on user configuration
🆕  Kubernetes 1.18.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.18.3
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=3072MB) ...
🐳  Preparing Kubernetes v1.14.10 on Docker 19.03.2 ...
    ▪ kube-proxy.IPTables.SyncPeriod.Duration=5000000000
    ▪ kube-proxy.IPTables.MinSyncPeriod.Duration=3000000000
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

minikube dashboard worked and opened a new tab, but the address is not reachable for some reason:

❯ minikube dashboard
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:39315/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...

Not sure, maybe some extra work is required, but for now it seems to be working. Great job!

@PavelSosin-320

I'm trying to play with Windows build 161, WSL 2.0 after the May 2020 upgrades, and CentOS 8. It looks like I hit the same issue. My iptables version is v1.8.4 (nf_tables). All WSL 2 distros share the same Microsoft Linux kernel, 4.19.121-microsoft-standard. The purpose is to use Docker with VSCode instead of Docker Desktop, which introduces too many dependencies on Windows and MobyLinux for integration with Cmd.exe, PowerShell, Windows Defender and VPN.
It's OK for me to use the new Windows Terminal (WT), i.e. a pure Linux CLI; if I need something from Windows I will get it via WSL 2 interop.
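
If the nf_tables backend turns out to be the blocker here, the workaround mentioned earlier in this thread for Debian/Ubuntu-based distros is switching to the legacy backend (this does not translate directly to CentOS 8, which ships nf_tables only):

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy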

@huberv

huberv commented Dec 1, 2020

Hi all,
I just tinkered with this setup as well. I noticed that I can reach the dashboard via curl 127.0.0.1... inside WSL, but not from a browser under Windows (as mentioned in the previous comments). I wondered whether running a browser inside WSL with a Windows-based X server would do the trick... and it did:

wsl minikube dashboard

(I followed this article: https://medium.com/javarevisited/using-wsl-2-with-x-server-linux-on-windows-a372263533c3)

I haven't got the foggiest idea as to why this is the case... any explanation is highly appreciated!

@antoineco

antoineco commented Dec 1, 2020

@huberv if the minikube command you posted is running inside the WSL instance and not in some other VM, all services exposed on the WSL loopback should also be exposed on the Windows network.

I use this all the time with kubectl port-forward (which also exposes things on 127.0.0.1 by default). However, I do occasionally notice failures with that feature. It doesn't seem to happen often, but when it does, a restart with wsl --shutdown solves it.
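
A concrete example, assuming the dashboard addon's usual service name and namespace:

# Forward the dashboard service to the WSL loopback; Windows can reach it too
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8001:80

# ...then from either the WSL or the Windows side:
curl http://127.0.0.1:8001/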
