
Failure to create a 3-node cluster (or 3 single-node clusters) with Podman on Fedora 37 #3050

Closed
pslobo opened this issue Jan 2, 2023 · 3 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments


pslobo commented Jan 2, 2023

What happened:

Seeing an issue similar to the one reported in #2689, with minor nuances: it only happens when I try to create a third cluster (or a cluster with 3 nodes).
I can create 2 single-node clusters or a single cluster with 2 nodes, but if I try to create either a third single-node cluster or a single cluster with 3 nodes, things break.

When creating a third single-node cluster, I'm immediately presented with:

kind create cluster
enabling experimental podman provider
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.25.3) 🖼
 ✗ Preparing nodes 📦  
ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"

When creating a single cluster with 3 nodes, I get the following:

kind create cluster --config=build-config.yaml
enabling experimental podman provider
Creating cluster "build" ...
 ✓ Ensuring node image (kindest/node:v1.25.3) 🖼
 ✓ Preparing nodes 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✗ Joining worker nodes 🚜 
ERROR: failed to create cluster: failed to join node with kubeadm: command "podman exec --privileged build-worker2 kubeadm join --config /kind/kubeadm.conf --skip-phases=preflight --v=6" failed with error: exit status 1
Command Output: I0102 14:53:55.884615     138 join.go:416] [preflight] found NodeName empty; using OS hostname as NodeName
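For failures like the two above, it helps to keep the failed node containers around and collect their logs before tearing things down. A sketch using kind's `--retain` flag and `kind export logs` (the cluster name `build` and node name `build-worker2` match this report; adjust them to your own config). This needs a live kind/podman environment, so treat it as a recipe rather than something to copy verbatim:

```shell
# Keep the node containers around after a failed create so they can be inspected.
kind create cluster --config=build-config.yaml --retain

# Dump logs from every node (kubelet, containerd, journal) into ./kind-logs.
kind export logs --name build ./kind-logs

# Or inspect the failing worker directly with podman.
podman logs build-worker2
podman exec build-worker2 journalctl -xe
```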

Environment:

  • kind version: kind v0.17.0 go1.19.2 linux/amd64
  • OS: Fedora 37 with kernel 6.0.15-300.fc37.x86_64
  • Kubernetes version: Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0"
  • Podman info:
host:
  arch: amd64
  buildahVersion: 1.28.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.5-1.fc37.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.5, commit: '
  cpuUtilization:
    idlePercent: 90.02
    systemPercent: 2.27
    userPercent: 7.71
  cpus: 12
  distribution:
    distribution: fedora
    variant: workstation
    version: "37"
  eventLogger: journald
  hostname: void
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.0.15-300.fc37.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 5229686784
  memTotal: 33433710592
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.7.2-2.fc37.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.7.2
      commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-8.fc37.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 8565813248
  swapTotal: 8589930496
  uptime: 162h 13m 13.00s (Approximately 6.75 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/pedro/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/pedro/.local/share/containers/storage
  graphRootAllocated: 510389125120
  graphRootUsed: 37953458176
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user/1000/containers
  volumePath: /home/pedro/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 1668178887
  BuiltTime: Fri Nov 11 15:01:27 2022
  GitCommit: ""
  GoVersion: go1.19.2
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1

Log files

@pslobo pslobo added the kind/bug Categorizes issue or PR as related to a bug. label Jan 2, 2023

aojea commented Jan 2, 2023

If the problem only appears when you add more clusters, it is usually related to exhaustion of system resources, most commonly the inotify limits #2972 (comment)
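For reference, the inotify limits mentioned above can be inspected without root, and raised via sysctl. The values 524288/512 are the ones kind's known-issues page suggests; treat them as a starting point rather than a requirement:

```shell
# Inspect the current inotify limits (Fedora defaults are often too low
# to run several kind nodes at once).
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances

# Suggested persistent bump: write this (as root) to a file such as
# /etc/sysctl.d/99-kind-inotify.conf, then run `sudo sysctl --system`.
cat <<'EOF'
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
EOF
```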


pslobo commented Jan 2, 2023

Hi @aojea, confirmed. I was pretty sure I had gone through the usual suspects and looked at the known issues, but missed that one. Bumped those limits and it's working as expected. Sorry for the false alarm.

@pslobo pslobo closed this as completed Jan 2, 2023

aojea commented Jan 2, 2023

No worries, this is not very obvious; you can see that you are not the only one ;)

Thanks
