setting pids.max failed with cgroups v2 #3395

Closed
Muyan0828 opened this issue Sep 15, 2022 · 6 comments
Labels
component/util Utility functions shared between CephFS and RBD

Comments

@Muyan0828

Describe the bug

Setting pids.max fails with cgroups v2.

Environment details

  • Image/version of Ceph CSI driver : quay.io/cephcsi/cephcsi:v3.7.0
  • Helm chart version :
  • Kernel version : 5.15.0-47-generic
  • Mounter used for mounting PVC (for CephFS it's fuse or kernel; for RBD it's krbd or rbd-nbd) :
  • Kubernetes cluster version : v1.21.14
  • Ceph cluster version : v14.2.11

Steps to reproduce

Steps to reproduce the behavior:

  1. Deploy cephcsi on Ubuntu 22.04, which uses cgroups v2 by default

Actual results

The CSI plugin pod fails to set pids.max.

Expected behavior

The CSI plugin pod sets pids.max successfully.

Logs

E0915 01:33:41.157796 1 cephcsi.go:208] Failed to get the PID limit, can not reconfigure: open /sys/fs/cgroup//pids.max: no such file or directory

Additional context

Output of running cat /proc/1/cgroup in the csi-cephfsplugin container:

[root@vm-01 /]# ps -aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.3 1673412 64456 ?       Ssl  01:33   0:01 /usr/local/bin/cephcsi --nodeid=192.168.176.68 --type=cephfs --endpoint=unix:///csi/csi.sock --v=0 --nodeserver=true --drivername=rook-ceph.cephfs.csi.ceph.com --pidlimit=
root          80  0.0  0.0  12184  3360 pts/0    Ss   04:21   0:00 bash
root          97  0.0  0.0  47620  3600 pts/0    R+   04:22   0:00 ps -aux
[root@vm-01 /]# cat /proc/1/cgroup
0::/
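
The 0::/ entry above means PID 1 in the plugin container sits in the root cgroup of the unified (v2) hierarchy. For illustration, here is a minimal Go sketch, not the actual cephcsi code, of how a pids.max path can be derived from /proc/<pid>/cgroup under cgroups v2; the cgroupV2RelPath helper name is hypothetical and the /sys/fs/cgroup mount point is assumed:

```go
// Minimal sketch (not the cephcsi implementation) of resolving pids.max
// for a process under cgroups v2.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// cgroupV2RelPath parses /proc/<pid>/cgroup and returns the cgroup v2
// relative path from the entry of the form "0::<path>".
func cgroupV2RelPath(pid int) (string, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		parts := strings.SplitN(scanner.Text(), ":", 3)
		// The unified (v2) hierarchy shows up as "0::/some/path".
		if len(parts) == 3 && parts[0] == "0" && parts[1] == "" {
			return parts[2], nil
		}
	}
	return "", fmt.Errorf("no cgroup v2 entry in /proc/%d/cgroup", pid)
}

func main() {
	rel, err := cgroupV2RelPath(1) // PID 1 inside the container, as in the issue
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	// Naive concatenation reproduces the path from the error message: with
	// rel == "/" (the root cgroup, i.e. "0::/"), this yields
	// "/sys/fs/cgroup//pids.max".
	pidsMax := fmt.Sprintf("/sys/fs/cgroup%s/pids.max", rel)
	fmt.Println("pids.max path:", pidsMax)

	// The extra slash is harmless to the kernel; the real problem is that
	// pids.max only exists on non-root cgroups, so when the container is
	// placed in the root cgroup there is nothing to read or write.
	if _, err := os.Stat(pidsMax); err != nil {
		fmt.Println("cannot reconfigure PID limit:", err)
	}
}
```

With the container in the root cgroup, the resolved file does not exist, which matches the "no such file or directory" error in the log above.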


@Muyan0828 (Author)

@Madhu-1 cc

@Madhu-1 (Collaborator) commented Sep 20, 2022

Same as #3085? #3091 should have fixed it.

@Muyan0828 (Author)

But the version I'm using is quay.io/cephcsi/cephcsi:v3.7.0, and #3091 was already merged in that release.

@Madhu-1 (Collaborator) commented Sep 22, 2022

@Muyan0828 I need to reproduce it locally and see what's happening. Thanks for reporting this one.

Madhu-1 added the component/util (Utility functions shared between CephFS and RBD) label on Sep 22, 2022
@Madhu-1 (Collaborator) commented Oct 6, 2022

vagrant@worker0:~/ceph-csi$ kuberc logs po/csi-rbdplugin-x8tnk -c csi-rbdplugin
I1006 11:38:05.844207   25589 cephcsi.go:192] Driver version: v3.7.0 and Git version: 34fd27bbd1b9e0efff48805d025a1069b5dbbc53
I1006 11:38:05.845335   25589 cephcsi.go:210] Initial PID limit is set to 4575
I1006 11:38:05.845706   25589 cephcsi.go:216] Reconfigured PID limit to -1 (max)
I1006 11:38:05.846423   25589 cephcsi.go:241] Starting driver type: rbd with name: rook-ceph.rbd.csi.ceph.com
I1006 11:38:05.847814   25589 server.go:114] listening for CSI-Addons requests on address: &net.UnixAddr{Name:"/tmp/csi-addons.sock", Net:"unix"}
I1006 11:38:05.857632   25589 mount_linux.go:283] Detected umount with safe 'not mounted' behavior
I1006 11:38:05.858045   25589 rbd_attach.go:231] nbd module loaded
I1006 11:38:05.858143   25589 rbd_attach.go:245] kernel version "5.15.0-48-generic" supports cookie feature
I1006 11:38:05.968026   25589 rbd_attach.go:261] rbd-nbd tool supports cookie feature
I1006 11:38:05.968631   25589 server.go:126] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I1006 11:38:06.292155   25589 utils.go:195] ID: 1 GRPC call: /csi.v1.Identity/GetPluginInfo
I1006 11:38:06.301526   25589 utils.go:199] ID: 1 GRPC request: {}
I1006 11:38:06.301565   25589 identityserver-default.go:39] ID: 1 Using default GetPluginInfo
I1006 11:38:06.301696   25589 utils.go:206] ID: 1 GRPC response: {"name":"rook-ceph.rbd.csi.ceph.com","vendor_version":"v3.7.0"}
I1006 11:38:06.690380   25589 utils.go:195] ID: 2 GRPC call: /csi.v1.Node/NodeGetInfo
I1006 11:38:06.690934   25589 utils.go:199] ID: 2 GRPC request: {}
I1006 11:38:06.691050   25589 nodeserver-default.go:51] ID: 2 Using default NodeGetInfo
I1006 11:38:06.691461   25589 utils.go:206] ID: 2 GRPC response: {"accessible_topology":{},"node_id":"worker0"}
vagrant@worker0:~/ceph-csi$ uname -r
5.15.0-48-generic
vagrant@worker0:~/ceph-csi$ uname -a
Linux worker0 5.15.0-48-generic #54-Ubuntu SMP Fri Aug 26 13:26:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
vagrant@worker0:~/ceph-csi$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 22.04.1 LTS
Release:	22.04
Codename:	jammy
vagrant@worker0:~/ceph-csi$

I am not able to reproduce it; I don't see any error message in the CSI logs.
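
For context, the "Reconfigured PID limit to -1 (max)" line in the working log above corresponds to writing the value "max" into the resolved pids.max file. Below is a minimal sketch of that write step, assuming the path has already been resolved as in the earlier sketch; the setPIDLimit helper and the kubepods.slice path are illustrative, not the actual cephcsi implementation:

```go
// Minimal sketch of reconfiguring the PID limit via a cgroup v2 pids.max file.
package main

import (
	"fmt"
	"os"
	"strconv"
)

// setPIDLimit writes the desired limit into a cgroup v2 pids.max file.
// A limit of -1 means "unlimited", which the cgroup interface spells "max".
func setPIDLimit(pidsMaxPath string, limit int) error {
	value := strconv.Itoa(limit)
	if limit == -1 {
		value = "max"
	}
	// pids.max is a single-value interface file; writing replaces the limit.
	return os.WriteFile(pidsMaxPath, []byte(value), 0o644)
}

func main() {
	// Illustrative path only; on a root cgroup ("0::/") pids.max does not
	// exist at all, which is why the plugin in this issue could not even
	// read the current limit, let alone reconfigure it.
	const path = "/sys/fs/cgroup/kubepods.slice/pids.max"

	if err := setPIDLimit(path, -1); err != nil {
		fmt.Fprintln(os.Stderr, "failed to reconfigure PID limit:", err)
	}
}
```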

Madhu-1 closed this as completed on Oct 6, 2022
@gattytto

I can reproduce it here.
