Cluster not starting with DIND setup #625
Comments
Note #303, we run our official CI on a Prow cluster with that image 😅
Nice, I'll try to add the mounts and see if it works! Thanks!
Not working :-( I run it as follows but still get the same error:
Docker is pretty recent (18.09.6); the OS I run the container from is Fedora 30.
@fedepaol does this match your pod config fully? You need to have a volume mounted at the docker graph directory. I see no issues with the following:
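The snippet that followed was not captured in this thread. As a rough sketch of the same idea (a docker run equivalent rather than a pod spec; /var/lib/docker is assumed to be the docker-graph directory mentioned later, and /tmp/docker-graph is a hypothetical host path):

# hypothetical sketch: DIND-enabled bootstrap image with a host directory
# mounted at the Docker graph directory so the nested daemon has somewhere to write
mkdir -p /tmp/docker-graph
docker run --privileged --rm -it \
  -e DOCKER_IN_DOCKER_ENABLED='true' \
  -v /tmp/docker-graph:/var/lib/docker \
  -v "$(pwd)":/workspace \
  --entrypoint /usr/local/bin/runner.sh \
  gcr.io/k8s-testimages/bootstrap:latest bash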
Similarly, no issues with the following, to sort of simulate an empty dir:
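That snippet is also missing; a sketch of the same idea using an anonymous Docker volume at /var/lib/docker to approximate a Kubernetes emptyDir (the path is an assumption, everything else comes from the repro command below):

# an anonymous volume gives the nested Docker daemon its own writable graph dir,
# roughly like an emptyDir in a pod spec
docker run --privileged --rm -it \
  -e DOCKER_IN_DOCKER_ENABLED='true' \
  -v /var/lib/docker \
  -v "$(pwd)":/workspace \
  --entrypoint /usr/local/bin/runner.sh \
  gcr.io/k8s-testimages/bootstrap:latest bash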
Oh, I was missing the docker-graph dir. Works now!
Glad it works!! Interesting that it may have worked with kind 0.2 without that ... I would not have expected that to work reliably 😅
What happened:
A kind 0.3.0 cluster does not start on Prow with the k8s test images and Docker-in-Docker enabled.
Kind 0.2.1 works fine.
We are running our CI in a Prow cluster, using gcr.io/k8s-testimages/bootstrap:latest as the base image with DIND enabled. The cluster does not start.
What you expected to happen:
The cluster to be up & running.
How to reproduce it (as minimally and precisely as possible):
docker run --privileged --rm -it -e DOCKER_IN_DOCKER_ENABLED='true' -v $(pwd):/workspace --entrypoint /usr/local/bin/runner.sh gcr.io/k8s-testimages/bootstrap:latest bash -c "wget https://dl.google.com/go/go1.12.6.linux-amd64.tar.gz && tar -C /usr/local -xzf go1.12.6.linux-amd64.tar.gz && GO111MODULE='on' /usr/local/go/bin/go get sigs.k8s.io/kind@v0.3.0 && ~/go/bin/kind create cluster --name=fede"
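For reference, per the resolution in the comments above, the same reproduction appears to succeed once a volume is mounted at the docker-graph directory. Assuming that directory is /var/lib/docker, a sketch of the adjusted command would be:

# hypothetical fix: add an anonymous volume at /var/lib/docker so the nested
# Docker daemon gets its own graph directory
docker run --privileged --rm -it \
  -e DOCKER_IN_DOCKER_ENABLED='true' \
  -v /var/lib/docker \
  -v "$(pwd)":/workspace \
  --entrypoint /usr/local/bin/runner.sh \
  gcr.io/k8s-testimages/bootstrap:latest \
  bash -c "wget https://dl.google.com/go/go1.12.6.linux-amd64.tar.gz \
    && tar -C /usr/local -xzf go1.12.6.linux-amd64.tar.gz \
    && GO111MODULE='on' /usr/local/go/bin/go get sigs.k8s.io/kind@v0.3.0 \
    && ~/go/bin/kind create cluster --name=fede"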
Anything else we need to know?:
The control plane node starts correctly. If I bash into it and look at the kubelet logs, I can see a bunch of errors, and also a bunch of failures while hitting the API (which makes sense since the apiserver is still down).
Environment:
- kind version: (use kind version):
- Kubernetes version: (use kubectl version):
- Docker version: (use docker info):
- OS (e.g. from /etc/os-release):