
Control Plane Fails to Start on Fedora 35 with rootless podman #2524

Closed
rverenich opened this issue Nov 3, 2021 · 17 comments
Labels
area/provider/podman: Issues or PRs related to podman
area/rootless: Issues or PRs related to rootless containers
kind/bug: Categorizes issue or PR as related to a bug.

Comments


rverenich commented Nov 3, 2021

What happened:

When using the podman provider, the kind control plane fails to come up on Fedora 35.

What you expected to happen:

The control plane should start using rootless Podman.

How to reproduce it (as minimally and precisely as possible):

  1. Install Fedora 35
  2. Install KinD

Anything else we need to know?:

The host was configured according to https://kind.sigs.k8s.io/docs/user/known-issues/#fedora and https://kind.sigs.k8s.io/docs/user/rootless/.

I tried both drivers in /etc/containers/storage.conf:

# Default Storage Driver, Must be set for proper operation.
# driver = "overlay"
driver = "btrfs"

Environment:

  • kind version: (use kind version):

kind v0.11.1 go1.16.8 linux/amd64
kind v0.12.0-alpha go1.16.8 linux/amd64

  • Kubernetes version: (use kubectl version):

N/A

  • Podman version (use podman info, since the podman provider is in use):
$ podman info
host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.30-2.fc35.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.30, commit: '
  cpus: 8
  distribution:
    distribution: fedora
    variant: workstation
    version: "35"
  eventLogger: journald
  hostname: rvx1
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.14.14-300.fc35.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 18337759232
  memTotal: 33361170432
  ociRuntime:
    name: crun
    package: crun-1.2-1.fc35.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.2
      commit: 4f6c8e0583c679bfee6a899c05ac6b916022561b
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-2.fc35.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 1h 7m 48.3s (Approximately 0.04 days)
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/rvx1/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: btrfs
  graphOptions: {}
  graphRoot: /home/rvx1/.local/share/containers/storage
  graphStatus:
    Build Version: 'Btrfs v5.14.1 '
    Library Version: "102"
  imageStore:
    number: 1
  runRoot: /run/user/1000/containers
  volumePath: /home/rvx1/.local/share/containers/storage/volumes
version:
  APIVersion: 3.4.1
  Built: 1634740316
  BuiltTime: Wed Oct 20 17:31:56 2021
  GitCommit: ""
  GoVersion: go1.16.8
  OsArch: linux/amd64
  Version: 3.4.1
  • OS (e.g. from /etc/os-release):
$ cat /etc/os-release
NAME="Fedora Linux"
VERSION="35 (Workstation Edition)"
ID=fedora
VERSION_ID=35
VERSION_CODENAME=""
PLATFORM_ID="platform:f35"
PRETTY_NAME="Fedora Linux 35 (Workstation Edition)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:35"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f35/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=35
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=35
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Workstation Edition"
VARIANT_ID=workstation
rverenich added the kind/bug label on Nov 3, 2021
@rverenich (Author)

Logs from control-plane node:

-- Journal begins at Wed 2021-11-03 23:20:47 UTC, ends at Wed 2021-11-03 23:21:44 UTC. --
Nov 03 23:20:48 rootlesspodman-control-plane systemd[1]: Condition check resulted in kubelet: The Kubernetes Node Agent being skipped.
░░ Subject: A start job for unit kubelet.service has finished successfully
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit kubelet.service has finished successfully.
░░ 
░░ The job identifier is 51.
Nov 03 23:20:51 rootlesspodman-control-plane systemd[1]: Starting kubelet: The Kubernetes Node Agent...
░░ Subject: A start job for unit kubelet.service has begun execution
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit kubelet.service has begun execution.
░░ 
░░ The job identifier is 57.
Nov 03 23:20:51 rootlesspodman-control-plane systemd[1]: Started kubelet: The Kubernetes Node Agent.
░░ Subject: A start job for unit kubelet.service has finished successfully
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit kubelet.service has finished successfully.
░░ 
░░ The job identifier is 57.
Nov 03 23:20:51 rootlesspodman-control-plane kubelet[161]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 03 23:20:51 rootlesspodman-control-plane kubelet[161]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 03 23:20:51 rootlesspodman-control-plane kubelet[161]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 03 23:20:51 rootlesspodman-control-plane kubelet[161]: Flag --cgroup-root has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 03 23:20:51 rootlesspodman-control-plane kubelet[161]: I1103 23:20:51.852802     161 server.go:199] "Warning: For remote container runtime, --pod-infra-container-image is ignored in kubelet, which should be set in that remote runtime instead"
Nov 03 23:20:51 rootlesspodman-control-plane kubelet[161]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 03 23:20:51 rootlesspodman-control-plane kubelet[161]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 03 23:20:51 rootlesspodman-control-plane kubelet[161]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 03 23:20:51 rootlesspodman-control-plane kubelet[161]: Flag --cgroup-root has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 03 23:20:52 rootlesspodman-control-plane kubelet[161]: I1103 23:20:52.127422     161 server.go:440] "Kubelet version" kubeletVersion="v1.22.2"
Nov 03 23:20:52 rootlesspodman-control-plane kubelet[161]: I1103 23:20:52.127641     161 server.go:868] "Client rotation is on, will bootstrap in background"
Nov 03 23:20:52 rootlesspodman-control-plane kubelet[161]: W1103 23:20:52.129812     161 manager.go:159] Cannot detect current cgroup on cgroup v2
Nov 03 23:20:52 rootlesspodman-control-plane kubelet[161]: I1103 23:20:52.129818     161 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 03 23:20:52 rootlesspodman-control-plane kubelet[161]: E1103 23:20:52.137843     161 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://rootlesspodman-control-plane:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 03 23:20:54 rootlesspodman-control-plane kubelet[161]: E1103 23:20:54.277699     161 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://rootlesspodman-control-plane:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: W1103 23:20:57.134139     161 fs.go:214] stat failed on /dev/nvme0n1p6 with error: no such file or directory
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.151138     161 container_manager_linux.go:280] "Container manager verified user specified cgroup-root exists" cgroupRoot=[kubelet]
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.151175     161 container_manager_linux.go:285] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/kubelet CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.151188     161 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.151194     161 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.151241     161 state_mem.go:36] "Initialized new in-memory state store"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.351827     161 server.go:793] "Failed to ApplyOOMScoreAdj" err="write /proc/self/oom_score_adj: permission denied"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.352158     161 kubelet.go:418] "Attempting to sync node with API server"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.352216     161 kubelet.go:279] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.352278     161 kubelet.go:290] "Adding apiserver pod source"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.352362     161 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: E1103 23:20:57.354082     161 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://rootlesspodman-control-plane:6443/api/v1/nodes?fieldSelector=metadata.name%3Drootlesspodman-control-plane&limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: E1103 23:20:57.354247     161 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://rootlesspodman-control-plane:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.357184     161 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="v1.5.7" apiVersion="v1alpha2"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: W1103 23:20:57.357662     161 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.358406     161 server.go:1213] "Started kubelet"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.358511     161 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: E1103 23:20:57.359705     161 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rootlesspodman-control-plane.16b42ca649e68854", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"rootlesspodman-control-plane", UID:"rootlesspodman-control-plane", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"rootlesspodman-control-plane"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc058e436555c4e54, ext:5529112642, loc:(*time.Location)(0x55a44f36c680)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc058e436555c4e54, ext:5529112642, loc:(*time.Location)(0x55a44f36c680)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://rootlesspodman-control-plane:6443/api/v1/namespaces/default/events": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused'(may retry after sleeping)
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.360239     161 server.go:409] "Adding debug handlers to kubelet server"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.361190     161 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.361373     161 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.361484     161 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: E1103 23:20:57.362190     161 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://rootlesspodman-control-plane:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/rootlesspodman-control-plane?timeout=10s": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: E1103 23:20:57.363135     161 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://rootlesspodman-control-plane:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: E1103 23:20:57.363461     161 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.377773     161 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.385030     161 cpu_manager.go:209] "Starting CPU manager" policy="none"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.385052     161 cpu_manager.go:210] "Reconciling" reconcilePeriod="10s"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.385065     161 state_mem.go:36] "Initialized new in-memory state store"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.388619     161 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.388637     161 status_manager.go:158] "Starting to sync pod status with apiserver"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.388647     161 kubelet.go:1967] "Starting kubelet main sync loop"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: E1103 23:20:57.388675     161 kubelet.go:1991] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: E1103 23:20:57.389103     161 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://rootlesspodman-control-plane:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.391472     161 policy_none.go:49] "None policy: Start"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.391713     161 memory_manager.go:168] "Starting memorymanager" policy="None"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: I1103 23:20:57.391733     161 state_mem.go:35] "Initializing new in-memory state store"
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: W1103 23:20:57.403563     161 fs.go:588] stat failed on /dev/nvme0n1p6 with error: no such file or directory
Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: E1103 23:20:57.403587     161 kubelet.go:1423] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir \"/var/lib/kubelet\": could not find device with major: 0, minor: 41 in cached partitions map"
Nov 03 23:20:57 rootlesspodman-control-plane systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE

rverenich changed the title from "Control Plane Fails to Start on Fedora 35" to "Control Plane Fails to Start on Fedora 35 with rootless podman" on Nov 3, 2021
BenTheElder added the area/rootless and area/provider/podman labels on Nov 4, 2021

aojea commented Nov 4, 2021

Nov 03 23:20:57 rootlesspodman-control-plane kubelet[161]: E1103 23:20:57.403587 161 kubelet.go:1423] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 41 in cached partitions map"

:/

@AkihiroSuda (Member)

PR: #2525


aojea commented Nov 4, 2021

@BenTheElder was right, we do have some logic for this:

// mountDevMapper checks if the podman storage driver is Btrfs or ZFS
func mountDevMapper() bool {
	storage := ""
	cmd := exec.Command("podman", "info", "-f",
		`{{ index .Store.GraphStatus "Backing Filesystem"}}`)
	lines, err := exec.OutputLines(cmd)
	if err != nil {
		return false
	}
	if len(lines) > 0 {
		storage = strings.ToLower(strings.TrimSpace(lines[0]))
	}
	return storage == "btrfs" || storage == "zfs"
}

but it seems the podman output has changed :/

graphDriverName: btrfs
graphOptions: {}
graphRoot: /home/rvx1/.local/share/containers/storage
graphStatus:
  Build Version: 'Btrfs v5.14.1 '
  Library Version: "102"
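The mismatch can be illustrated without a podman install: when driver = "btrfs" is set, graphStatus has no "Backing Filesystem" key at all, so the template `{{ index .Store.GraphStatus "Backing Filesystem"}}` renders an empty string and mountDevMapper returns false. A minimal sketch (Python for illustration only; kind's actual code is the Go above), using the two graphStatus shapes reported in this thread:

```python
def backing_filesystem(graph_status: dict) -> str:
    # Mirrors the Go template `{{ index .Store.GraphStatus "Backing Filesystem"}}`:
    # a missing key renders as the empty string.
    return graph_status.get("Backing Filesystem", "").strip().lower()

# graphStatus as reported with the btrfs graph driver (this report):
btrfs_driver_status = {"Build Version": "Btrfs v5.14.1 ", "Library Version": "102"}
# graphStatus with the overlay driver on a btrfs filesystem (shown later in the thread):
overlay_driver_status = {"Backing Filesystem": "btrfs", "Supports d_type": "true"}

print(backing_filesystem(btrfs_driver_status))    # "" -> btrfs goes undetected
print(backing_filesystem(overlay_driver_status))  # "btrfs" -> detected
```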


aojea commented Nov 4, 2021

but it seems the podman output has changed :/

It didn't change in podman; we are just not using the same logic we use for docker. I have a PR to fix that: #2527

@rverenich any chance you can try the PR I linked and confirm whether it works? I don't have a system handy for testing the patch.

@rverenich (Author)

/etc/containers/storage.conf
driver = "btrfs"

  graphStatus:
    Build Version: 'Btrfs v5.14.1 '
    Library Version: "102"

/etc/containers/storage.conf
driver = "overlay"

  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"


aojea commented Nov 4, 2021

I mean: if you can check out #2527, build it, and try with that patch

@rverenich (Author)

Of course.

kind version 0.12.0-alpha+09b2bcb0f7ebcb

Output from inside the control-plane node:
root@rootlesspodman-control-plane:/# journalctl -xeu kubelet

driver = "overlay"

Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: Flag --cgroup-root has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.359674     910 server.go:199] "Warning: For remote container runtime, --pod-infra-container-image is ignored in kubelet, which should be set in that remote runtime instead"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: Flag --cgroup-root has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.388210     910 server.go:440] "Kubelet version" kubeletVersion="v1.22.2"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.388731     910 server.go:868] "Client rotation is on, will bootstrap in background"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.394041     910 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.394732     910 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: W1104 22:14:02.394733     910 manager.go:159] Cannot detect current cgroup on cgroup v2
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: W1104 22:14:02.395002     910 fs.go:214] stat failed on /dev/nvme0n1p6 with error: no such file or directory
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.410935     910 container_manager_linux.go:280] "Container manager verified user specified cgroup-root exists" cgroupRoot=[kubelet]
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.410970     910 container_manager_linux.go:285] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/kubelet CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.410980     910 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.410985     910 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.411000     910 state_mem.go:36] "Initialized new in-memory state store"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.611783     910 server.go:793] "Failed to ApplyOOMScoreAdj" err="write /proc/self/oom_score_adj: permission denied"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.612130     910 kubelet.go:418] "Attempting to sync node with API server"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.612153     910 kubelet.go:279] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.612184     910 kubelet.go:290] "Adding apiserver pod source"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.612204     910 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: E1104 22:14:02.614973     910 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://rootlesspodman-control-plane:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: E1104 22:14:02.615513     910 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://rootlesspodman-control-plane:6443/api/v1/nodes?fieldSelector=metadata.name%3Drootlesspodman-control-plane&limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.616502     910 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="v1.5.7-13-g9d0acfe46" apiVersion="v1alpha2"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.616978     910 server.go:1213] "Started kubelet"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.617041     910 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: E1104 22:14:02.617633     910 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rootlesspodman-control-plane.16b4779419f904b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"rootlesspodman-control-plane", UID:"rootlesspodman-control-plane", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"rootlesspodman-control-plane"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc05934aaa4c5e0b5, ext:278706883, loc:(*time.Location)(0x5634f38ef760)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc05934aaa4c5e0b5, ext:278706883, loc:(*time.Location)(0x5634f38ef760)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://rootlesspodman-control-plane:6443/api/v1/namespaces/default/events": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused'(may retry after sleeping)
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.618448     910 server.go:409] "Adding debug handlers to kubelet server"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.618569     910 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.618642     910 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.618788     910 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: E1104 22:14:02.619562     910 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: E1104 22:14:02.619572     910 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://rootlesspodman-control-plane:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/rootlesspodman-control-plane?timeout=10s": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: E1104 22:14:02.619719     910 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://rootlesspodman-control-plane:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.629250     910 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.634809     910 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.634822     910 status_manager.go:158] "Starting to sync pod status with apiserver"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.634831     910 kubelet.go:1967] "Starting kubelet main sync loop"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: E1104 22:14:02.634862     910 kubelet.go:1991] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: E1104 22:14:02.635302     910 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://rootlesspodman-control-plane:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.640839     910 cpu_manager.go:209] "Starting CPU manager" policy="none"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.640849     910 cpu_manager.go:210] "Reconciling" reconcilePeriod="10s"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.640859     910 state_mem.go:36] "Initialized new in-memory state store"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.640947     910 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.640956     910 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.640959     910 policy_none.go:49] "None policy: Start"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.641101     910 memory_manager.go:168] "Starting memorymanager" policy="None"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.641115     910 state_mem.go:35] "Initializing new in-memory state store"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: I1104 22:14:02.641175     910 state_mem.go:75] "Updated machine memory state"
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: W1104 22:14:02.641190     910 fs.go:588] stat failed on /dev/nvme0n1p6 with error: no such file or directory
Nov 04 22:14:02 rootlesspodman-control-plane kubelet[910]: E1104 22:14:02.641198     910 kubelet.go:1423] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir \"/var/lib/kubelet\": could not find device with major: 0, minor: 41 in cached partitions map"
Nov 04 22:14:02 rootlesspodman-control-plane systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE

driver = "btrfs"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: Flag --cgroup-root has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.365399    1811 server.go:199] "Warning: For remote container runtime, --pod-infra-container-image is ignored in kubelet, which should be set in that remote runtime instead"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: Flag --cgroup-root has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.383590    1811 server.go:440] "Kubelet version" kubeletVersion="v1.22.2"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.383742    1811 server.go:868] "Client rotation is on, will bootstrap in background"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.384972    1811 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.385862    1811 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: W1104 22:08:44.385885    1811 manager.go:159] Cannot detect current cgroup on cgroup v2
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: W1104 22:08:44.386261    1811 fs.go:214] stat failed on /dev/nvme0n1p6 with error: no such file or directory
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.400515    1811 container_manager_linux.go:280] "Container manager verified user specified cgroup-root exists" cgroupRoot=[kubelet]
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.400544    1811 container_manager_linux.go:285] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/kubelet CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.400558    1811 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.400563    1811 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.400574    1811 state_mem.go:36] "Initialized new in-memory state store"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.601449    1811 server.go:793] "Failed to ApplyOOMScoreAdj" err="write /proc/self/oom_score_adj: permission denied"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.601874    1811 kubelet.go:418] "Attempting to sync node with API server"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.601924    1811 kubelet.go:279] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.601973    1811 kubelet.go:290] "Adding apiserver pod source"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.602009    1811 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: E1104 22:08:44.605609    1811 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://rootlesspodman-control-plane:6443/api/v1/nodes?fieldSelector=metadata.name%3Drootlesspodman-control-plane&limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: E1104 22:08:44.605630    1811 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://rootlesspodman-control-plane:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.608293    1811 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="v1.5.7-13-g9d0acfe46" apiVersion="v1alpha2"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.609060    1811 server.go:1213] "Started kubelet"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.610600    1811 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: E1104 22:08:44.611378    1811 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rootlesspodman-control-plane.16b4774a0f39337e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"rootlesspodman-control-plane", UID:"rootlesspodman-control-plane", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"rootlesspodman-control-plane"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc059345b244cfb7e, ext:262388998, loc:(*time.Location)(0x55f33fa09760)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc059345b244cfb7e, ext:262388998, loc:(*time.Location)(0x55f33fa09760)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://rootlesspodman-control-plane:6443/api/v1/namespaces/default/events": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused'(may retry after sleeping)
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.613904    1811 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.614202    1811 server.go:409] "Adding debug handlers to kubelet server"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.614893    1811 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.615389    1811 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: E1104 22:08:44.617670    1811 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: E1104 22:08:44.618798    1811 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://rootlesspodman-control-plane:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/rootlesspodman-control-plane?timeout=10s": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: E1104 22:08:44.619042    1811 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://rootlesspodman-control-plane:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.628018    1811 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.634639    1811 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.634660    1811 status_manager.go:158] "Starting to sync pod status with apiserver"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.634669    1811 kubelet.go:1967] "Starting kubelet main sync loop"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: E1104 22:08:44.634701    1811 kubelet.go:1991] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: E1104 22:08:44.635137    1811 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://rootlesspodman-control-plane:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.640033    1811 cpu_manager.go:209] "Starting CPU manager" policy="none"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.640045    1811 cpu_manager.go:210] "Reconciling" reconcilePeriod="10s"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.640053    1811 state_mem.go:36] "Initialized new in-memory state store"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.640133    1811 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.640141    1811 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.640144    1811 policy_none.go:49] "None policy: Start"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.640312    1811 memory_manager.go:168] "Starting memorymanager" policy="None"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.640326    1811 state_mem.go:35] "Initializing new in-memory state store"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: I1104 22:08:44.640415    1811 state_mem.go:75] "Updated machine memory state"
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: W1104 22:08:44.640433    1811 fs.go:588] stat failed on /dev/nvme0n1p6 with error: no such file or directory
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: E1104 22:08:44.640446    1811 kubelet.go:1423] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir \"/var/lib/kubelet\": could not find device with major: 0, minor: 41 in cached partitions map"
Nov 04 22:08:44 rootlesspodman-control-plane systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE

@aojea (Contributor) commented Nov 4, 2021

it seems the errors are the same

Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: W1104 22:08:44.640433 1811 fs.go:588] stat failed on /dev/nvme0n1p6 with error: no such file or directory
Nov 04 22:08:44 rootlesspodman-control-plane kubelet[1811]: E1104 22:08:44.640446 1811 kubelet.go:1423] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 41 in cached partitions map"

@mheon does this ring a bell? Any suggestions?

@BenTheElder (Member)

@aojea the better way to test this is a kind config that unconditionally extraMounts /dev/mapper, to see if that helps; that completely cuts out the detection logic (which we can then spend time getting correct if the end result turns out to be helpful).

@aojea (Contributor) commented Nov 4, 2021

diff --git a/pkg/cluster/internal/providers/podman/util.go b/pkg/cluster/internal/providers/podman/util.go
index 393f6ada..606f2ccb 100644
--- a/pkg/cluster/internal/providers/podman/util.go
+++ b/pkg/cluster/internal/providers/podman/util.go
@@ -119,6 +119,7 @@ func deleteVolumes(names []string) error {
 
 // mountDevMapper checks if the podman storage driver is Btrfs or ZFS
 func mountDevMapper() bool {
+       return true
        cmd := exec.Command("podman", "info", "--format", "json")
        out, err := exec.Output(cmd)
        if err != nil {

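For context, the detection being short-circuited above parses `podman info --format json` and mounts /dev/mapper only when the storage driver is Btrfs or ZFS. A minimal sketch of that check, decoding a sample of podman's JSON (the struct fields are assumptions based on the shape of `podman info` output, not copied from kind's source):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// podmanInfo mirrors only the field of `podman info --format json`
// that the check needs (field names assumed from podman's output).
type podmanInfo struct {
	Store struct {
		GraphDriverName string `json:"graphDriverName"`
	} `json:"store"`
}

// needsDevMapper reports whether the reported storage driver is one
// (btrfs or zfs) for which kind would mount /dev/mapper into the node.
func needsDevMapper(raw []byte) (bool, error) {
	var info podmanInfo
	if err := json.Unmarshal(raw, &info); err != nil {
		return false, err
	}
	d := info.Store.GraphDriverName
	return d == "btrfs" || d == "zfs", nil
}

func main() {
	// Sample trimmed to the relevant field; on a real host this would
	// come from running `podman info --format json`.
	sample := []byte(`{"store":{"graphDriverName":"btrfs"}}`)
	ok, err := needsDevMapper(sample)
	fmt.Println(ok, err)
}
```

If the driver string reported by podman on Fedora 35 doesn't match these values (or the user overrides `driver` in storage.conf after pulling images), the check and the actual on-disk layout can disagree, which is why forcing the mount is a useful test.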
@BenTheElder (Member) commented Nov 4, 2021

No code changes necessary:
config.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- extraMounts:
  - hostPath: /dev/mapper
    containerPath: /dev/mapper

kind create cluster --config=config.yaml

or in a shell:

cat << EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- extraMounts:
  - hostPath: /dev/mapper
    containerPath: /dev/mapper
EOF

#1999 (comment)

@rverenich (Author) commented Nov 5, 2021

With the config.yaml provided:

driver = "overlay"
$ kind create cluster -v10 --config=config.yaml --name rootlesspodman
enabling experimental podman provider
Cgroup controller detection is not implemented for Podman. If you see cgroup-related errors, you might need to set systemd property "Delegate=yes", see https://kind.sigs.k8s.io/docs/user/rootless/
Creating cluster "rootlesspodman" ...
DEBUG: podman/images.go:58] Image: kindest/node@sha256:69860bda5563ac81e3c0057d654b5253219618a22ec3a346306239bba8cfa1a6 present locally
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼 
 ✗ Preparing nodes 📦  
ERROR: failed to create cluster: podman run error: command "podman run --hostname rootlesspodman-control-plane --name rootlesspodman-control-plane --label io.x-k8s.kind.role=control-plane --privileged --tmpfs /tmp --tmpfs /run --volume cf1eba3ddec9acdbd7acb9f7ecd5df03e4414945143cc30686b1518a1ef55c15:/var:suid,exec,dev --volume /lib/modules:/lib/modules:ro --detach --tty --net kind --label io.x-k8s.kind.cluster=rootlesspodman -e container=podman --volume /dev/mapper:/dev/mapper --volume=/dev/mapper:/dev/mapper --publish=127.0.0.1:45271:6443/tcp -e KUBECONFIG=/etc/kubernetes/admin.conf kindest/node@sha256:69860bda5563ac81e3c0057d654b5253219618a22ec3a346306239bba8cfa1a6" failed with error: exit status 125
Command Output: Error: /dev/mapper: duplicate mount destination
driver = "btrfs"
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: Flag --cgroup-root has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: I1105 08:13:43.829315     901 server.go:199] "Warning: For remote container runtime, --pod-infra-container-image is ignored in kubelet, which should be set in that remote runtime instead"
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: Flag --cgroup-root has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: I1105 08:13:43.846622     901 server.go:440] "Kubelet version" kubeletVersion="v1.22.2"
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: I1105 08:13:43.847035     901 server.go:868] "Client rotation is on, will bootstrap in background"
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: I1105 08:13:43.851621     901 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: W1105 08:13:43.853008     901 manager.go:159] Cannot detect current cgroup on cgroup v2
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: I1105 08:13:43.853117     901 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: W1105 08:13:43.853619     901 fs.go:214] stat failed on /dev/nvme0n1p6 with error: no such file or directory
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: I1105 08:13:43.883745     901 container_manager_linux.go:280] "Container manager verified user specified cgroup-root exists" cgroupRoot=[kubelet]
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: I1105 08:13:43.883828     901 container_manager_linux.go:285] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/kubelet CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: I1105 08:13:43.883868     901 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: I1105 08:13:43.883885     901 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
Nov 05 08:13:43 rootlesspodman-control-plane kubelet[901]: I1105 08:13:43.883929     901 state_mem.go:36] "Initialized new in-memory state store"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.084983     901 server.go:793] "Failed to ApplyOOMScoreAdj" err="write /proc/self/oom_score_adj: permission denied"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.085315     901 kubelet.go:418] "Attempting to sync node with API server"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.085347     901 kubelet.go:279] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.085391     901 kubelet.go:290] "Adding apiserver pod source"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.085451     901 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: E1105 08:13:44.088304     901 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://rootlesspodman-control-plane:6443/api/v1/nodes?fieldSelector=metadata.name%3Drootlesspodman-control-plane&limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: E1105 08:13:44.088362     901 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://rootlesspodman-control-plane:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.090429     901 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="v1.5.7-13-g9d0acfe46" apiVersion="v1alpha2"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.090633     901 server.go:1213] "Started kubelet"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.090704     901 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: E1105 08:13:44.091063     901 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rootlesspodman-control-plane.16b4984db0ee009f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"rootlesspodman-control-plane", UID:"rootlesspodman-control-plane", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"rootlesspodman-control-plane"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc05957ce0566d09f, ext:278229230, loc:(*time.Location)(0x560075415760)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc05957ce0566d09f, ext:278229230, loc:(*time.Location)(0x560075415760)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://rootlesspodman-control-plane:6443/api/v1/namespaces/default/events": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused'(may retry after sleeping)
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.091268     901 server.go:409] "Adding debug handlers to kubelet server"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.091436     901 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.091519     901 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.091547     901 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: E1105 08:13:44.091782     901 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://rootlesspodman-control-plane:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/rootlesspodman-control-plane?timeout=10s": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: E1105 08:13:44.091910     901 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://rootlesspodman-control-plane:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: E1105 08:13:44.092125     901 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.103948     901 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.109405     901 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.109425     901 status_manager.go:158] "Starting to sync pod status with apiserver"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.109440     901 kubelet.go:1967] "Starting kubelet main sync loop"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: E1105 08:13:44.109478     901 kubelet.go:1991] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: E1105 08:13:44.110076     901 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://rootlesspodman-control-plane:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp [fc00:f853:ccd:e793::2]:6443: connect: connection refused
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.115219     901 cpu_manager.go:209] "Starting CPU manager" policy="none"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.115230     901 cpu_manager.go:210] "Reconciling" reconcilePeriod="10s"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.115239     901 state_mem.go:36] "Initialized new in-memory state store"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.115331     901 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.115340     901 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.115344     901 policy_none.go:49] "None policy: Start"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.115538     901 memory_manager.go:168] "Starting memorymanager" policy="None"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.115551     901 state_mem.go:35] "Initializing new in-memory state store"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: I1105 08:13:44.115632     901 state_mem.go:75] "Updated machine memory state"
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: W1105 08:13:44.115649     901 fs.go:588] stat failed on /dev/nvme0n1p6 with error: no such file or directory
Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: E1105 08:13:44.115663     901 kubelet.go:1423] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir \"/var/lib/kubelet\": could not find device with major: 0, minor: 41 in cached partitions map"

> `err="failed to get rootfs info: failed to get device for dir \"/var/lib/kubelet\": could not find device with major: 0, minor: 41 in cached partitions map"`

@rverenich
Author

rverenich commented Nov 5, 2021

OK, then:

Nov 05 08:13:44 rootlesspodman-control-plane kubelet[901]: W1105 08:13:44.115649     901 fs.go:588] stat failed on /dev/nvme0n1p6 with error: no such file or directory

With `driver = "btrfs"` still set, I explicitly defined the device in `config.yaml`:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- extraMounts:
  - hostPath: /dev/nvme0n1p6
    containerPath: /dev/nvme0n1p6
```

and the cluster finally started.
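For anyone hitting the same error, here is one way to find which device to mount. This is an illustrative sketch, not an official kind helper; it assumes GNU coreutils `df` (as shipped on Fedora), and the rootless storage path below is an assumption — adjust it if your `storage.conf` points elsewhere:

```shell
# Sketch: find the block device backing podman's rootless storage
# directory, to use as hostPath/containerPath in the kind config.
dir="${STORAGE_DIR:-$HOME/.local/share/containers}"
[ -e "$dir" ] || dir=/          # fall back to the root filesystem
dev="$(df --output=source "$dir" | tail -n 1)"
echo "device backing $dir: $dev"
```

On a setup like the one above, this should print something like `/dev/nvme0n1p6`, which is the path to list under `extraMounts`.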

@aojea
Contributor

aojea commented Nov 5, 2021

OK, then this seems to be a dup of #2411 (comment).

is there any way to automate this?

@BenTheElder
Member

> is there any way to automate this?

#2411 (comment) — currently we don't know of one (otherwise we'd have implemented it).

Podman/Docker don't expose this information. We'd have to do something gross, slow, and API-breaking, like running a container that mounts `/` and poking around to see what's there before we run the cluster.

I recommend using a filesystem with better support from Kubernetes; Kubernetes has also failed to work properly on these filesystems in the past and has no CI covering them. CI runs on ext4 + overlayfs (and maybe some fuse-overlayfs).
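To illustrate why this is hard to automate: the check that fails is, roughly, cadvisor resolving the filesystem under `/var/lib/kubelet` to a `major:minor` device number and looking it up in its cached partitions map. A simplified sketch of that lookup (assumes Linux and GNU coreutils `df`; this is not cadvisor's actual code):

```shell
# Rough sketch of the failing lookup: map a directory to the
# major:minor of its backing filesystem. Column 3 of
# /proc/self/mountinfo holds the major:minor for each mount.
dir="${1:-/}"                                  # /var/lib/kubelet on a real node
mp="$(df --output=target "$dir" | tail -n 1)"  # mount point containing $dir
devnum="$(awk -v mp="$mp" '$5 == mp {print $3; exit}' /proc/self/mountinfo)"
echo "$dir is on mount $mp, device major:minor $devnum"
```

Inside a rootless node container on btrfs, the `major:minor` reported here has no matching device node visible to kubelet, which is why mounting `/dev/nvme0n1p6` into the container made the lookup succeed.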

@BenTheElder
Member

This does seem to be a dupe: #2524 (comment)
