cgroup2: does not work with rootless podman #2163
$ cat /proc/self/cgroup
0::/user.slice/user-1001.slice/session-1.scope
$ podman --cgroup-manager=cgroupfs run --name=foo -d --runtime=/usr/local/bin/crun docker.io/library/alpine tail -f /dev/null
d81f9a37b4569d30bfcd08f3339398c1eba21f44ca617f95212559527a969145
$ podman exec foo cat /proc/1/cgroup
0::/user.slice/user-1001.slice/user@1001.service/user.slice/podman-11242.scope
$ podman inspect foo | jq -r .[0].OCIConfigPath | xargs jq -r .linux.cgroupsPath
/libpod_parent/libpod-d81f9a37b4569d30bfcd08f3339398c1eba21f44ca617f95212559527a969145

@giuseppe How is this happening?
I think that runc is trying to connect to the systemd system instance. For rootless, it should connect to the user session. We had the same issue with Podman, but there it was easy to retrieve the original UID for the rootless user: https://github.com/containers/libpod/blob/e7540d0406c49b22de245246d16ebc6e1778df37/pkg/cgroups/cgroups.go#L374-L392. I am not sure how to retrieve the UID while in a namespace, though. We could either parse
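Not the libpod code itself, but a minimal Go sketch of one possible parsing approach: scan /proc/self/cgroup for a user-&lt;uid&gt;.slice path component (e.g. `0::/user.slice/user-1001.slice/session-1.scope` yields 1001) to recover the original host UID. The function name `originalUID` is invented for illustration, and whether this holds up inside every namespace setup is exactly the open question above.

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"strconv"
)

var userSliceRe = regexp.MustCompile(`user-(\d+)\.slice`)

// originalUID scans /proc/self/cgroup for a user-<uid>.slice path
// component and returns the UID it encodes.
func originalUID() (int, error) {
	data, err := os.ReadFile("/proc/self/cgroup")
	if err != nil {
		return -1, err
	}
	m := userSliceRe.FindSubmatch(data)
	if m == nil {
		return -1, fmt.Errorf("no user-<uid>.slice component in cgroup path")
	}
	return strconv.Atoi(string(m[1]))
}

func main() {
	uid, err := originalUID()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(uid)
}
```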
Why does --cgroup-manager=cgroupfs use systemd?
When running as rootless, if it is not able to create a cgroup using cgroupfs and no limits are set, then it silently ignores the error and uses the same cgroup podman was running in. This is the same behaviour Podman has on a cgroup v1 system, where cgroups for rootless mode are not supported at all.
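Not the actual podman code, just a minimal Go sketch of the fallback described above, with invented helper names (`createCgroupfs`, `maybeCreateCgroup`): a creation failure is fatal only when resource limits were actually requested.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// createCgroupfs attempts to create a cgroup directory directly under the
// unified hierarchy; an unprivileged user will normally get EACCES here.
func createCgroupfs(name string) error {
	return os.Mkdir(filepath.Join("/sys/fs/cgroup", name), 0755)
}

// maybeCreateCgroup mirrors the described fallback: ignore a creation
// failure when rootless and no limits were set.
func maybeCreateCgroup(rootless, limitsSet bool, name string) error {
	if err := createCgroupfs(name); err != nil {
		if rootless && !limitsSet {
			// Silently ignore the error; the container stays in the
			// cgroup podman itself was started in.
			return nil
		}
		return err
	}
	return nil
}

func main() {
	err := maybeCreateCgroup(os.Geteuid() != 0, false, "libpod_parent")
	fmt.Println("error:", err)
}
```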
Hmm,
Is my understanding correct?
cgroupfs is being fixed in #2169
systemd is being fixed in #2281
Rootful mode seems to work without problems (both cgroupfs and systemd).
Kernel: 5.3.0-19-generic #20-Ubuntu, booted with systemd.unified_cgroup_hierarchy=1 cgroup_enable=memory swapaccount=1