"failed to init node with kubeadm" when using new base image #884
Comments
to be clear this is not a kind base image, this is the image you are running kind inside of (in a GKE pod?) |
can you share the podspec / image? is there a reason you are using vfs instead of overlay on an emptyDir or similar? |
Yes, this is the image in GKE; we are not using a special kind base image. The pod spec is in the issue (collapsed). I'm not exactly sure why vfs is used. I can try not using vfs. |
These are both normal on any cluster using kubeadm; these files simply haven't been written yet and kubelet etc. are crashlooping. It's part of the design of kubeadm.
This looks like the actual issue and is likely due to switching to vfs (which Docker does not recommend for production use and which behaves quite differently from the other drivers). Looking at your podspec, the Docker data root is inside the (pod) container filesystem, which is probably going to be wildly slow, especially when combined with vfs; overlay(2) is generally the recommended docker/containerd storage driver. See #303 -- the note about volumeMounts:
volumeMounts:
- name: docker-root
  mountPath: /var/lib/docker
volumes:
- name: docker-root
  emptyDir: {} |
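For context, a minimal sketch of how a pod spec could mount an emptyDir at Docker's data root, per the note above. The pod name, container name, and image below are placeholders, not the reporter's actual spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: docker-in-docker   # hypothetical name
spec:
  containers:
  - name: runner
    image: example.com/dind-image:latest   # placeholder image
    securityContext:
      privileged: true   # docker-in-docker typically needs privileged mode
    volumeMounts:
    - name: docker-root
      mountPath: /var/lib/docker   # keep Docker's data root off the container's overlay fs
  volumes:
  - name: docker-root
    emptyDir: {}
```

Backing /var/lib/docker with an emptyDir avoids nesting the inner storage driver inside the pod container's own overlay layer.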
Ok, so I think this is because in the new image we actually never mount /var/lib/docker to an emptyDir. I will try this on our existing tests and see if it impacts performance. Thanks yet again for your help! |
ACK, we probably had overlay on vfs on overlay(2) 😅 (or roughly: overlay on overlay, which is not going to work).
The directory / volume itself shouldn't affect the image size materially.
If it is declared as a
Thanks for the detailed issue! |
Hello, I'm not sure if the errors are related, but I'm facing the same error message trying to create a cluster on my laptop. |
@cippaciong please file a new support issue to track, it's unlikely to be related to what we discussed here. Have you also checked https://kind.sigs.k8s.io/docs/user/known-issues/ ? |
@BenTheElder Thanks, I checked the known issues and moved from |
What happened:
kind create cluster fails with Error: failed to create cluster: failed to init node with kubeadm: exit status 1
What you expected to happen:
kind create cluster does not fail
How to reproduce it (as minimally and precisely as possible):
Applying this pod to a kubernetes cluster should do it:
Anything else we need to know?:
We are running this in prow on GKE. The only variable here compared to our other setups (we run everything in kind currently with no issues) is the image we are using. In the previous image we start docker with service start docker, then run kind create cluster. In the new image we run daemon -U -- dockerd -s=vfs
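Since the `-s=vfs` flag is what selects the storage driver, one way to confirm which driver the inner daemon actually ended up with is to inspect `docker info`. The sketch below works from a saved copy of that output so it is self-contained; the file contents are a fabricated sample for illustration, and on a live host you would pipe `docker info` directly:

```shell
# Fabricated sample of `docker info` output (an assumption for illustration).
cat > /tmp/docker-info.txt <<'EOF'
Server Version: 19.03.2
Storage Driver: vfs
EOF

# Extract the storage driver line; kind generally expects overlay2, not vfs.
driver=$(awk -F': ' '/^Storage Driver/ {print $2}' /tmp/docker-info.txt)
echo "driver=$driver"
if [ "$driver" = "vfs" ]; then
  echo "warning: vfs is slow and behaves differently; prefer overlay2"
fi
```

On a real host the same check is `docker info --format '{{.Driver}}'`.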
Environment:
- kind version (use kind version): 0.5.1
- Kubernetes version (use kubectl version): We are running prow on GKE 1.13, starting kind 1.15
- Docker version (use docker info): 19.03.2
- OS (e.g. from /etc/os-release): cOS
Dump of logs: kind.tar.gz
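As a side note, a log dump like kind.tar.gz is typically produced by exporting the cluster logs and archiving the directory. Since kind export logs needs a live cluster, the sketch below fakes the exported directory with one sample file (paths and file names are assumptions) so only the archiving step is shown:

```shell
# On a real host, `kind export logs /tmp/kind-logs` would populate this directory.
# Here we create one placeholder log file so the archiving step is self-contained.
mkdir -p /tmp/kind-logs
echo "sample kubelet log line" > /tmp/kind-logs/kubelet.log

# Archive the exported logs for attaching to an issue, then list the contents.
tar -czf /tmp/kind.tar.gz -C /tmp kind-logs
tar -tzf /tmp/kind.tar.gz
```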
Interesting logs:
Any help debugging this would be appreciated.
We can docker run hello-world from within the pod.