Creating Cluster failed in my dind container. #890

Closed
zcc35357949 opened this issue Sep 30, 2019 · 22 comments

@zcc35357949

hi, I installed go1.13+ in a dind container and ran kind create cluster, but it always seems to fail.
In the debug logs I find some unexpected errors, such as:

[WARNING CRI]: container runtime is not running: output: NAME:
failed to pull image k8s.gcr.io/kube-apiserver:v1.15.3: output: time="2019-09-30T18:05:07Z" level=fatal msg="failed to connect: failed to connect: context deadline exceeded"...

I don't know how kind validates the container runtime; my dockerd is running normally,
and I can also pull these images manually.

docker info

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 8
Server Version: 18.06.1-ce
Storage Driver: vfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Kernel Version: 4.9.0-0.bpo.6-amd64
Operating System: Debian GNU/Linux 8 (jessie) (containerized)
OSType: linux
Architecture: x86_64
CPUs: 40
Total Memory: 125.8GiB
Name: 4d3d5875d428
ID: RXWO:CLTC:FWY4:PQW7:T4OC:KW2N:DVQW:F24T:H4Z6:OMSH:JZGN:KV63
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

kind.log

zcc35357949 added the kind/support label Sep 30, 2019
@BenTheElder
Member

is /var/lib/docker a volume?

@BenTheElder
Member

kind runs a container runtime inside each node container; we do use docker in docker for CI ourselves, but we don't necessarily recommend doing that if you can avoid it

see #303, the same points apply to a single container as apply to kubernetes pods
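
for reference, a rough sketch of how a dind container could be started so that kind can work inside it, based on the points in this thread and in #303 (the image, volume name, and exact mounts are illustrative assumptions, not a recommended setup):

docker run -d --privileged --name dind \
  -v /lib/modules:/lib/modules:ro \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  -v dind-storage:/var/lib/docker \
  docker:dind

the key points are a privileged container, the inner docker data root on a real volume rather than the outer graph driver, and access to the host's cgroups and kernel modules.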

@zcc35357949
Author

kind runs a container runtime inside each node container; we do use docker in docker for CI ourselves, but we don't necessarily recommend doing that if you can avoid it

see #303, the same points apply to a single container as apply to kubernetes pods

I ran kind create cluster on a physical machine that has docker installed with a different root dir, /data/docker, but I get the same errors. How can I specify the docker root dir when I use kind?

@BenTheElder
Member

you shouldn't need to.

can you give more info about kind create cluster on your physical machine? have you looked at https://kind.sigs.k8s.io/docs/user/known-issues/?

@BenTheElder
Member

note that for docker-in-docker the data root must be a volume regardless. see the details in #303 for other docker in docker requirements.
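
for example, a dind image can declare the data root as a volume directly in its Dockerfile (assuming the inner docker keeps the default data root; adjust the path otherwise):

VOLUME /var/lib/docker

the same effect can be had by passing -v /var/lib/docker at docker run time.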

@zcc35357949
Author

you shouldn't need to.

can you give more info about kind create cluster on your physical machine? have you looked at https://kind.sigs.k8s.io/docs/user/known-issues/?

docker info in my physical machine

Containers: 20
 Running: 8
 Paused: 0
 Stopped: 12
Images: 108
Server Version: 17.05.0-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: false
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Kernel Version: 4.9.0-0.bpo.6-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 40
Total Memory: 125.8GiB
Name: shyp-docker-14
ID: G4X4:RPC3:5VRK:PVK5:D6MW:NBXV:YLTK:4XVK:AU4V:4ZMU:TTZ2:H2W6
Docker Root Dir: /data/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

kind.log

@BenTheElder
Member

Server Version: 17.05.0-ce

that is from 2017-05-04, any chance you can use a newer version of docker? I wouldn't be surprised if we're hitting a bug, we definitely aren't testing that far back.

@aojea
Contributor

aojea commented Oct 1, 2019

@zcc35357949 the kubelet is not running; you have to create the cluster with the --retain flag and check why it is failing

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:

  • The kubelet is not running
  • The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

  • 'systemctl status kubelet'
  • 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:

  • 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
  • 'docker logs CONTAINERID'
    error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

you can use kind export logs and upload a tarball of the folder with the logs if you want us to take a look
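
for example, the sequence would look roughly like this (the output directory name is just a placeholder):

kind create cluster --retain
kind export logs ./kind-logs
tar -czf kind-logs.tar.gz ./kind-logs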

@zcc35357949
Author

@zcc35357949 the kubelet is not running; you have to create the cluster with the --retain flag and check why it is failing

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:

  • The kubelet is not running
  • The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

  • 'systemctl status kubelet'
  • 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:

  • 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
  • 'docker logs CONTAINERID'
    error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

you can use kind export logs and upload a tarball of the folder with the logs if you want us to take a look

I checked the dind Dockerfile I used; it already includes VOLUME /var/lib/docker.
And I have exported the logs.
kind_log.tar.gz

@aojea
Contributor

aojea commented Oct 1, 2019

seems that it is not able to load the component images

	[WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.3: output: time="2019-09-30T18:05:07Z" level=fatal msg="failed to connect: failed to connect: context deadline exceeded"
, error: exit status 1

can you check inside the node if the images are there?

docker exec -it kind-control-plane crictl images
IMAGE                                TAG                 IMAGE ID            SIZE
docker.io/kindest/kindnetd           0.5.0               ef97cccdfdb50       83.6MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180f       40.5MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4ff       258MB
k8s.gcr.io/kube-apiserver            v1.15.3             be321f2ded3f3       249MB
k8s.gcr.io/kube-controller-manager   v1.15.3             ac7d3fe5b34b7       200MB
k8s.gcr.io/kube-proxy                v1.15.3             d428039608992       97.3MB
k8s.gcr.io/kube-scheduler            v1.15.3             a44f53b10fee0       96.5MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca19       746kB

@BenTheElder
Member

what volume driver are you using for docker in docker?

@zcc35357949
Author

seems that it is not able to load the component images

	[WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.3: output: time="2019-09-30T18:05:07Z" level=fatal msg="failed to connect: failed to connect: context deadline exceeded"
, error: exit status 1

can you check inside the node if the images are there?

docker exec -it kind-control-plane crictl images
IMAGE                                TAG                 IMAGE ID            SIZE
docker.io/kindest/kindnetd           0.5.0               ef97cccdfdb50       83.6MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180f       40.5MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4ff       258MB
k8s.gcr.io/kube-apiserver            v1.15.3             be321f2ded3f3       249MB
k8s.gcr.io/kube-controller-manager   v1.15.3             ac7d3fe5b34b7       200MB
k8s.gcr.io/kube-proxy                v1.15.3             d428039608992       97.3MB
k8s.gcr.io/kube-scheduler            v1.15.3             a44f53b10fee0       96.5MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca19       746kB

Running crictl images inside the node:

FATA[0000] listing images failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService

@zcc35357949
Author

what volume driver are you using for docker in docker?

dind container's inspect info:

[
    {
        "Id": "067ee55be645d6486087a036febfad47d41ea75c1fa7cf4e37388e308df68f01",
        "Created": "2019-10-01T15:14:35.092321811Z",
        "Path": "/start.sh",
        "Args": [],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 33676,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2019-10-01T15:14:35.644743729Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:eacca85d947b9f39030d8f2083c41ffd14c8e3ee9026c27c7e713ae80e821d4b",
        "ResolvConfPath": "/data/docker/containers/067ee55be645d6486087a036febfad47d41ea75c1fa7cf4e37388e308df68f01/resolv.conf",
        "HostnamePath": "/data/docker/containers/067ee55be645d6486087a036febfad47d41ea75c1fa7cf4e37388e308df68f01/hostname",
        "HostsPath": "/data/docker/containers/067ee55be645d6486087a036febfad47d41ea75c1fa7cf4e37388e308df68f01/hosts",
        "LogPath": "/data/docker/containers/067ee55be645d6486087a036febfad47d41ea75c1fa7cf4e37388e308df68f01/067ee55be645d6486087a036febfad47d41ea75c1fa7cf4e37388e308df68f01-json.log",
        "Name": "/infallible_albattani",
        "RestartCount": 0,
        "Driver": "overlay2",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": [
            "aea06bfd4494ae11c03e460416dc915cdb3bbc0048f19c30baf76baa5ff7c829"
        ],
        "HostConfig": {
            "Binds": [
                "/lib/modules:/lib/modules",
                "/var/lib/docker:/var/lib/docker",
                "/root/intall.sh:/install.sh"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {
                    "max-size": "50m"
                }
            },
            "NetworkMode": "default",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "label=disable"
            ],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/data/docker/overlay2/96599901d8c541cc2f301c00f58c0cbea64de6676db93dcf276c38f9ba6987e9-init/diff:/data/docker/overlay2/6c4c6b4b4cfc7b43526f7f0bc56a2396bf108cd61f86dcc1526beda918ee221e/diff:/data/docker/overlay2/dc4d9ab79a0b573a2f2efc3d76324d34cfc012e5eb04285ca1df45f133f6f3ef/diff:/data/docker/overlay2/283898b8b658c7e02d1beae84f94b34ec9e230703c4ed18d3e9f29d0114cc820/diff:/data/docker/overlay2/4377ed3bc050086c0f85f0cb656ed8ba8284f0b4e55926e1656f41756cea8822/diff:/data/docker/overlay2/ac42a5b924498732c55fb9b5690a8b7735e3bbc090185048a741fb13fa7dc55b/diff:/data/docker/overlay2/96d7b360e0cf753d480212b4ce9673e8ed12a7ed1d50cc33da1716dfc3a86020/diff:/data/docker/overlay2/980e4cb1341f7b38ed9e7a36391a121367db3154074980905f8e39805eb0e083/diff:/data/docker/overlay2/e256f6202750c81f9c431dbb1faca0074972ebb1750da48f897b8334622143d2/diff",
                "MergedDir": "/data/docker/overlay2/96599901d8c541cc2f301c00f58c0cbea64de6676db93dcf276c38f9ba6987e9/merged",
                "UpperDir": "/data/docker/overlay2/96599901d8c541cc2f301c00f58c0cbea64de6676db93dcf276c38f9ba6987e9/diff",
                "WorkDir": "/data/docker/overlay2/96599901d8c541cc2f301c00f58c0cbea64de6676db93dcf276c38f9ba6987e9/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/lib/modules",
                "Destination": "/lib/modules",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            },
            {
                "Type": "bind",
                "Source": "/var/lib/docker",
                "Destination": "/var/lib/docker",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            },
            {
                "Type": "bind",
                "Source": "/root/intall.sh",
                "Destination": "/install.sh",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "067ee55be645",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "2375/tcp": {},
                "8443/tcp": {}
            },
            "Tty": true,
            "OpenStdin": true,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "MINIKUBE_VERSION=v0.25.0",
                "K8S_VERSION=v1.8.0",
                "KUBECTL_VERSION=v1.9.1",
                "MINIKUBE_WANTUPDATENOTIFICATION=false",
                "MINIKUBE_WANTREPORTERRORPROMPT=false",
                "CHANGE_MINIKUBE_NONE_USER=true"
            ],
            "Cmd": null,
            "ArgsEscaped": true,
            "Image": "unboundedsystems/minikube-dind",
            "Volumes": {
                "/var/lib/docker": {}
            },
            "WorkingDir": "",
            "Entrypoint": [
                "/start.sh"
            ],
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "0709dac0de681d23894e1811c9e48edf9bf85e4f149229220f2c072c8d781828",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "2375/tcp": null,
                "8443/tcp": null
            },
            "SandboxKey": "/data/docker/netns/0709dac0de68",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "d2bbd855ec343a134cf4fee8882b83b83448a6f4fddeac080ac2d08da3caf502",
            "Gateway": "10.10.2.10",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "10.10.2.4",
            "IPPrefixLen": 24,
            "IPv6Gateway": "",
            "MacAddress": "02:42:0a:0a:02:04",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "e3c02c6f63b8026e83ec884d44e98da16c9d3806e408ed688492e654b3050af5",
                    "EndpointID": "d2bbd855ec343a134cf4fee8882b83b83448a6f4fddeac080ac2d08da3caf502",
                    "Gateway": "10.10.2.10",
                    "IPAddress": "10.10.2.4",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:0a:0a:02:04"
                }
            }
        }
    }
]

@BenTheElder
Member

er we need to know how the inner docker is configured

@BenTheElder
Member

you also appear to be missing cgroups

@zcc35357949
Author

er we need to know how the inner docker is configured

ps -ewf

UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 01:51 ?        00:00:00 /bin/bash -e /start.sh
root         8     1  0 01:51 ?        00:00:02 tail -F /var/log/docker.log /var/log/minikube-start.log /var/lib/localkube/localkube.err
root         9     1  0 01:51 ?        00:06:07 dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375
root        40     9  1 01:51 ?        00:10:14 docker-containerd --config /var/run/docker/containerd/containerd.toml
root     17715     0  0 17:29 ?        00:00:00 bash
root     21484 17715  0 17:30 ?        00:00:00 ps -efw

docker-containerd config:

root = "/var/lib/docker/containerd/daemon"
state = "/var/run/docker/containerd/daemon"
disabled_plugins = ["cri"]
oom_score = -500

[grpc]
  address = "/var/run/docker/containerd/docker-containerd.sock"
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[debug]
  address = "/var/run/docker/containerd/docker-containerd-debug.sock"
  uid = 0
  gid = 0
  level = "info"

[metrics]
  address = ""
  grpc_histogram = false

[cgroup]
  path = ""

[plugins]
  [plugins.linux]
    shim = "docker-containerd-shim"
    runtime = "docker-runc"
    runtime_root = "/var/lib/docker/runc"
    no_shim = false
    shim_debug = false

docker info:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 7
Server Version: 18.06.1-ce
Storage Driver: vfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Kernel Version: 4.9.0-0.bpo.6-amd64
Operating System: Debian GNU/Linux 8 (jessie) (containerized)
OSType: linux
Architecture: x86_64
CPUs: 40
Total Memory: 125.8GiB
Name: 3503b5d2bb90
ID: FSJ6:OTT4:4E4M:7WJC:6RVY:BSZM:UJ7F:JGJQ:PVYS:HO4W:53WK:YIQG
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

@BenTheElder
Member

it should probably not be using vfs.

@aojea
Contributor

aojea commented Oct 1, 2019

just curious, what are those minikube references inside that container?

           "MINIKUBE_VERSION=v0.25.0",
            "K8S_VERSION=v1.8.0",
            "KUBECTL_VERSION=v1.9.1",
            "MINIKUBE_WANTUPDATENOTIFICATION=false",
            "MINIKUBE_WANTREPORTERRORPROMPT=false",
            "CHANGE_MINIKUBE_NONE_USER=true"
        ],
        "Cmd": null,
        "ArgsEscaped": true,
        "Image": "unboundedsystems/minikube-dind",

@zcc35357949
Author

just curious, what are those minikube references inside that container?

           "MINIKUBE_VERSION=v0.25.0",
            "K8S_VERSION=v1.8.0",
            "KUBECTL_VERSION=v1.9.1",
            "MINIKUBE_WANTUPDATENOTIFICATION=false",
            "MINIKUBE_WANTREPORTERRORPROMPT=false",
            "CHANGE_MINIKUBE_NONE_USER=true"
        ],
        "Cmd": null,
        "ArgsEscaped": true,
        "Image": "unboundedsystems/minikube-dind",

It is a dind image that has minikube installed, but I had deleted the minikube cluster before using kind. Using a dind image without minikube, I got the same result.

@BenTheElder
Member

For docker in docker you probably need:

  • the cgroup mount from the linked issue
  • overlay instead of vfs

For the host, I would start by upgrading to a newer docker.
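
For the storage driver, a minimal sketch is to point the inner docker at overlay2 via its daemon.json (assuming /etc/docker/daemon.json inside the dind container, and that the backing filesystem supports overlay2):

{
  "storage-driver": "overlay2"
}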

@zcc35357949
Author

For docker in docker you probably need:

  • the cgroup mount from the linked issue
  • overlay instead of vfs

For the host, I would start by upgrading to a newer docker.

I upgraded docker to 18.06 on my physical machine and changed the storage driver to overlay2. Indeed, kind create cluster succeeded.
Testing kind in the dind container also passed.
Thanks a lot. @BenTheElder @aojea

@BenTheElder
Member

Excellent!
This is helpful to know, thank you 😅
