Manjaro Linux: failed to create cluster: failed to init node with kubeadm #1999
Comments
iptables-nft? I don't have easy access to a Manjaro environment, can you run kind export logs and share the output? |
Hi there, no, there is no firewall running on Manjaro by default. I did look through those common issues, as I remember hitting weird Docker issues on Fedora, but none of them seemed obvious to me. Not a problem, I've zipped up the logs produced here: Thanks! |
I had a quick scan of the logs and found these from kubelet:
|
Taking a look at these now. Those logs are both normal:
|
This log towards the end of the kubelet logs is a problem:
This looks like the cgroups manager in kubelet is unhappy checking that the necessary cgroups are present. For Kubernetes v1.19 you must be running with cgroups v1, and certain cgroups should be mounted; on most systems they are by default. |
Thanks for the detailed explanations of the logs, I learnt something new 😄 Also thanks for taking the time to read through the log files. I did some searching on the cgroups issue, and it seems Manjaro supports both v1 and v2 of cgroups; you can switch back to v1 by setting a kernel boot parameter. Do you have any other ideas here, or is this an upstream Manjaro issue? |
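(For reference, a sketch of that cgroup toggle on a GRUB-based install; the parameter is the standard systemd switch, not something confirmed in this thread:)

```bash
# Boot back into the legacy cgroup v1 hierarchy on a systemd host.
# 1. Add the parameter to the kernel command line in /etc/default/grub:
#      GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"
# 2. Regenerate the GRUB config and reboot:
sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo reboot
```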
If I were maintaining Manjaro I probably wouldn't be thrilled about a report from kind given the hacks involved, but you could perhaps try. I suspect something in your environment is making kubelet unhappy with the cgroups hierarchy; under normal circumstances kind only adds more cgroups, it doesn't mount additional subsystems etc., which we probably shouldn't do anyhow given the shared kernel. It's difficult to tell from the logs / panic what exactly is wrong though. I haven't seen anyone report an issue like this on other distros like Debian, Fedora, or Ubuntu, except when Fedora first enabled cgroups v2 by default and most container tools couldn't support it yet (docker, runc, kubernetes...). In the future cgroups v2 should work in kubernetes/kind as well, FWIW. |
We have cgroups v2 CI now with Kubernetes 1.20, and kind v0.10.0 ships 1.20.2 by default along with some improved cgroups handling, but if a required cgroup is not mounted we can't do much about that. |
Sadly still no luck with 1.20. Is there any way to list what cgroups are enabled and see which ones are missing? |
Same problem here. Environment:
kind version (kind version):
kubectl version (kubectl version):
docker version (docker info):
OS Release (cat /etc/*release):
kind logs: |
ls /sys/fs/cgroup will show cgroups, it's a vfs. Is Manjaro shipping systemd? What version? This sort of thing is NOT a normal problem. Please try with the latest version as well, v0.9.0 is out of date. |
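(A couple of standard ways to inspect the host's cgroup setup; these are stock Linux interfaces, nothing kind-specific:)

```bash
# List the cgroup hierarchy; on cgroup v1 each controller
# (cpu, memory, pids, ...) appears as its own directory.
ls /sys/fs/cgroup

# On cgroup v1, list every controller the kernel knows about
# and whether it is enabled (the last column).
cat /proc/cgroups

# Show how the cgroup filesystems are actually mounted.
mount | grep cgroup
```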
Hi @BenTheElder |
Those look fine; can you share the logs as in #1999 (comment)? I'd also highly recommend #1999 (comment) for more direct debugging. This still seems to be an issue between Kubernetes and Manjaro's cgroups, which is not something the kind project can fix directly, but we can attempt to help. |
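(One way to do that more direct debugging, assuming the default single-node container name kind-control-plane; kind nodes run systemd, so journalctl works inside them:)

```bash
# Open a shell inside the kind node container.
docker exec -it kind-control-plane bash

# Inside the node: read the kubelet service logs directly.
journalctl -u kubelet --no-pager | tail -n 50

# And check whether any units failed during kubeadm init.
systemctl --failed
```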
Here are the logs: I've noticed an interesting line in the kubelet.log file:
I've tried searching for a solution, but almost all of the issues with the same error message refer to the btrfs filesystem (mine's f2fs, btw). |
Ah, so in your case you probably need to manually mount /dev/mapper:

```bash
cat << EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- extraMounts:
  - hostPath: /dev/mapper
    containerPath: /dev/mapper
EOF
```

ref: #1945 (comment)

That issue is unrelated to the issue in the original post here. |
https://docs.docker.com/storage/storagedriver/select-storage-driver/#supported-backing-filesystems did/does not list f2fs, so we haven't done anything for it. |
Well, Docker has been running without any issues on f2fs with |
Er sorry, it's not so much the storage driver that's relevant, it's the filesystem backing that driver (typically at /var/lib/docker). Does it work if you manually specify mounting /dev/mapper as in #1999 (comment)? If so, we can autodetect running on f2fs and do this OOTB. |
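(A quick way to check both pieces on the host; docker info and stat are standard tools, and the path assumes Docker's default data root:)

```bash
# Which storage driver is Docker using?
docker info --format '{{.Driver}}'

# Which filesystem backs Docker's data root (default /var/lib/docker)?
# Prints e.g. "btrfs" or "f2fs"; ext4 shows up as "ext2/ext3".
sudo stat -f -c %T /var/lib/docker
```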
Sadly, the solution above doesn't work. Still, I'll try to check it on a VM in case that solution works with supported filesystems. |
I had a configuration in my Docker daemon.json file with the storage driver set to overlay2, but I'm using the btrfs filesystem. I didn't notice that before, so I changed it to btrfs and it worked.
daemon.json file:
docker version:
Manjaro Release:
|
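(For anyone making the same change, a minimal sketch of the daemon.json setting involved; storage-driver is Docker's standard option, and note this heredoc overwrites any existing config:)

```bash
# Point Docker at the storage driver matching your filesystem,
# then restart the daemon so it takes effect.
# WARNING: this replaces /etc/docker/daemon.json wholesale; merge by
# hand if you have other settings in there.
sudo tee /etc/docker/daemon.json << 'EOF'
{
  "storage-driver": "btrfs"
}
EOF
sudo systemctl restart docker
```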
Yeah, can confirm the same: everything works well with the latest version of kind on btrfs filesystem. |
The btrfs thing is #1945 (comment). Docker doesn't list overlay2 on btrfs as supported, but apparently it works fine. We're currently auto-detecting the storage driver to decide if we should mount /dev/mapper. The OP's original issue remains unsolved, but I don't think that one is a kind bug: kind will create cgroups, but it will not and should not alter the host's core controller mounts etc. |
@papanito this is a different issue. The podman backend is much more experimental and does not autodetect storage drivers at all. #2113 (comment) |
This thread has veered off into different problems:
|
Only for reference, because I'm not sure if it's documented in the comments yet: I'm not sure whether I got the same issue, but after updating to Manjaro 21.0.7 & Docker 20.10.7 & kindest/node:v1.18.8 |
If you're on Arch/Manjaro there's a good chance https://kind.sigs.k8s.io/docs/user/known-issues/#failure-to-create-cluster-with-cgroups-v2 applies; Kubernetes only supports cgroups v2 from 1.19 forward, so 1.18 requires the host to use cgroups v1, not v2. |
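(A one-line check of which cgroup version the host is running; this is a standard stat trick, not taken from the linked page:)

```bash
# Prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on cgroup v1.
stat -f -c %T /sys/fs/cgroup/
```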
Is there no Chinese version?
What happened:
I tried 'kind create cluster' on Manjaro Linux, but the control plane failed to start.
What you expected to happen:
A kind k8s cluster to be created successfully
How to reproduce it (as minimally and precisely as possible):
Run 'kind create cluster' on Manjaro 20.2
Anything else we need to know?:
Environment:
kind version: (use kind version): kind v0.9.0 go1.15.6 linux/amd64
Kubernetes version: (use kubectl version): Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-18T12:09:25Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Docker version: (use docker info):
OS Release: (use cat /etc/os-release): Manjaro 20.2