
OCI runtime create failed: cgroup.subtree_control`: Invalid argument: unknown #21

Closed
zinovya opened this issue Jul 19, 2021 · 6 comments

Comments

@zinovya

zinovya commented Jul 19, 2021

Hi there, I'm trying to run Docker on Fedora 32, using kernel 5.4.
containerd and dockerd started normally:

[root@fedora-riscv ~]# containerd
INFO[2021-07-19T01:28:08.361208892-04:00] starting containerd                           revision=d76c121f76a5fc8a462dc64594aea72fe18e1178 version=v1.3.3
INFO[2021-07-19T01:28:08.744277818-04:00] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2021-07-19T01:28:08.746948345-04:00] loading plugin "io.containerd.snapshotter.v1.devmapper"...  type=io.containerd.snapshotter.v1
WARN[2021-07-19T01:28:08.748776194-04:00] failed to load plugin io.containerd.snapshotter.v1.devmapper  error="devmapper not configured"
INFO[2021-07-19T01:28:08.751477020-04:00] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
INFO[2021-07-19T01:28:08.824732398-04:00] skip loading plugin "io.containerd.snapshotter.v1.aufs"...  error="modprobe aufs failed: \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.4.8-g888ecd6-dirty\\n\": exit status 1: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-07-19T01:28:08.836851963-04:00] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2021-07-19T01:28:08.840054875-04:00] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2021-07-19T01:28:08.844153961-04:00] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
INFO[2021-07-19T01:28:08.848809933-04:00] skip loading plugin "io.containerd.snapshotter.v1.zfs"...  error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-07-19T01:28:08.852666026-04:00] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
WARN[2021-07-19T01:28:08.855722742-04:00] could not use snapshotter devmapper in metadata plugin  error="devmapper not configured"
INFO[2021-07-19T01:28:08.857164802-04:00] metadata content store policy set             policy=shared
[ 1723.347900] br_netfilter: target ffffffe00002a958 can not be addressed by the 32-bit offset from PC = 0000000016d328e5
[root@fedora-riscv ~]# dockerd
INFO[2021-07-19T01:35:48.716459477-04:00] Starting up                                  
WARN[2021-07-19T01:35:48.869117254-04:00] could not change group /var/run/docker.sock to docker: group docker not found 
INFO[2021-07-19T01:35:48.936396959-04:00] parsed scheme: "unix"                         module=grpc
INFO[2021-07-19T01:35:48.937069880-04:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-07-19T01:35:48.939266549-04:00] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-07-19T01:35:48.940130876-04:00] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-07-19T01:35:49.118641961-04:00] parsed scheme: "unix"                         module=grpc
INFO[2021-07-19T01:35:49.119004273-04:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-07-19T01:35:49.119313282-04:00] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-07-19T01:35:49.119612192-04:00] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-07-19T01:35:49.231295186-04:00] [graphdriver] using prior storage driver: overlay2 
WARN[2021-07-19T01:35:49.365598889-04:00] Your system is running cgroup v2 (unsupported) 
INFO[2021-07-19T01:35:49.473426963-04:00] Loading containers: start.                   
WARN[2021-07-19T01:35:49.796082858-04:00] Running modprobe bridge br_netfilter failed with message: modprobe: ERROR: could not insert 'br_netfilter': Invalid argument
insmod /lib/modules/5.4.8-g888ecd6-dirty/kernel/net/bridge/br_netfilter.ko 
, error: exit status 1 
WARN[2021-07-19T01:35:54.025420394-04:00] Could not load necessary modules for IPSEC rules: protocol not supported 
INFO[2021-07-19T01:36:00.361439246-04:00] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address 
INFO[2021-07-19T01:36:04.484179146-04:00] Loading containers: done.                    
WARN[2021-07-19T01:36:04.924443822-04:00] Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled  storage-driver=overlay2
INFO[2021-07-19T01:36:04.939272686-04:00] Docker daemon                                 commit=32b6afcaf4 graphdriver(s)=overlay2 version=dev
INFO[2021-07-19T01:36:04.948754482-04:00] Daemon has completed initialization          
INFO[2021-07-19T01:36:05.671269690-04:00] API listen on /var/run/docker.sock   

I can pull Docker images; no errors are reported there.
When I start a container, I get this error:

[root@fedora-riscv ~]# docker run -it carlosedp/debian:sid bash
docker: Error response from daemon: OCI runtime create failed: writing file `/sys/fs/cgroup/docker/cgroup.subtree_control`: Invalid argument: unknown.
ERRO[0002] error waiting for container: context canceled 

CONFIG_CGROUPS is enabled in my kernel configuration (as are all the other required options, per the check-config.sh script).

I tried Docker 19, which you built, and Docker 20, which I built myself; both resulted in the same error.
I was wondering if you have any suggestions on what else I can try to get it working.
Thanks in advance for your help.
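
(Side note, in case it helps narrow things down: one quick way to confirm which cgroup hierarchy the system is actually running is to check the filesystem type of /sys/fs/cgroup; "cgroup2fs" means the unified cgroup v2 hierarchy, "tmpfs" means the legacy v1 layout. The output below is illustrative, not captured from this machine.)

[root@fedora-riscv ~]# stat -fc %T /sys/fs/cgroup/
cgroup2fs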

@zinovya
Author

zinovya commented Aug 9, 2021

@carlosedp, I was wondering if you have seen my message above. We really want to get Docker going on Fedora 32, but the issue described above is blocking us. Any help or advice would be highly appreciated.
Thanks in advance.

@carlosedp
Owner

I see a couple of problems that might be related to the image... I believe you used the .tar.gz pack. Since it doesn't do the automatic install, some things are missing.

I'll check with David, who maintains some Fedora infrastructure, to see if they are already building Docker packages.
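
For example, the "could not change group /var/run/docker.sock to docker: group docker not found" warning in your dockerd log usually just means the post-install step that creates the group never ran; a minimal manual fix (a sketch, assuming a root shell; the user name is only a placeholder) would be:

groupadd docker                 # create the group dockerd expects for the socket
usermod -aG docker someuser     # optional: let a non-root user talk to the socket

and then restart dockerd so it can change the socket's group.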

@carlosedp
Owner

Any news about this?

@zinovya
Author

zinovya commented Aug 27, 2021

Hi @carlosedp, thanks for your previous reply. After I installed br_netfilter, that error is no longer reported.
Unfortunately, I still cannot run a container. I'm getting these errors now:
Dockerd:

dockerd
INFO[2021-08-27T06:29:42.596865606-04:00] Starting up                                  
INFO[2021-08-27T06:29:42.627765839-04:00] parsed scheme: "unix"                         module=grpc
INFO[2021-08-27T06:29:42.628096939-04:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-08-27T06:29:42.628548940-04:00] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-08-27T06:29:42.628768640-04:00] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-08-27T06:29:42.667856282-04:00] parsed scheme: "unix"                         module=grpc
INFO[2021-08-27T06:29:42.668446782-04:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-08-27T06:29:42.670315284-04:00] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-08-27T06:29:42.671283485-04:00] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-08-27T06:29:42.753555173-04:00] [graphdriver] using prior storage driver: overlay2 
WARN[2021-08-27T06:29:42.806953330-04:00] Your system is running cgroup v2 (unsupported) 
INFO[2021-08-27T06:29:42.809108432-04:00] Loading containers: start.                   
WARN[2021-08-27T06:29:46.736776822-04:00] Could not load necessary modules for IPSEC rules: protocol not supported 
INFO[2021-08-27T06:29:49.771889259-04:00] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address 
INFO[2021-08-27T06:29:49.774876562-04:00] Found stale default bridge network 59bd1800f74d5eb82f09dca04187517082539f73bf1931f3d529a96f4a387bdf (docker0) 
WARN[2021-08-27T06:29:49.775451763-04:00] Stale default bridge network 59bd1800f74d5eb82f09dca04187517082539f73bf1931f3d529a96f4a387bdf 
INFO[2021-08-27T06:29:56.127020138-04:00] Loading containers: done.                    
WARN[2021-08-27T06:29:56.431361563-04:00] Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled  storage-driver=overlay2
INFO[2021-08-27T06:29:56.439710872-04:00] Docker daemon                                 commit=32b6afcaf4 graphdriver(s)=overlay2 version=dev
INFO[2021-08-27T06:29:56.440641173-04:00] Daemon has completed initialization          
INFO[2021-08-27T06:29:57.098689975-04:00] API listen on /var/run/docker.sock           
ERRO[2021-08-27T06:30:13.705133148-04:00] stream copy error: reading from a closed fifo 
ERRO[2021-08-27T06:30:14.000071604-04:00] 712d7e3ddca4cfbb0bd763d4bd9bef072532f33724c5f61134262b260aabf95e cleanup: failed to delete container from containerd: no such container 
ERRO[2021-08-27T06:30:14.003208302-04:00] Handler for POST /v1.41/containers/712d7e3ddca4cfbb0bd763d4bd9bef072532f33724c5f61134262b260aabf95e/start returned error: OCI runtime create failed: writing file `/sys/fs/cgroup/docker/cgroup.subtree_control`: Invalid argument: unknown 

Containerd:

[ 9854.332828] docker0: port 1(veth75be2c9) entered disabled state
[ 9896.091279] docker0: port 1(vethc2a84e2) entered blocking state
[ 9896.092140] docker0: port 1(vethc2a84e2) entered disabled state
[ 9896.093356] device vethc2a84e2 entered promiscuous mode
time="2021-08-27T06:55:24.331721051-04:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/
io.containerd.runtime.v2.task/moby/f5629a50daae2116ae0bfc983241b79c0bbe850af2e5cefeb13ac2e246155521 pid=71794
INFO[2021-08-27T06:55:24.591258419-04:00] shim disconnected                             id=f5629a50daae2116ae0bfc9832
41b79c0bbe850af2e5cefeb13ac2e246155521
[ 9896.986251] docker0: port 1(vethc2a84e2) entered disabled state
[ 9897.008114] device vethc2a84e2 left promiscuous mode
[ 9897.008638] docker0: port 1(vethc2a84e2) entered disabled state

Have you seen errors like that when you worked on it?

Also, you mentioned before that you would ask David about the Fedora infrastructure; have you heard anything back?

Thanks in advance for your help!
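
(For reference, once a working br_netfilter module is in place, the usual way to load it and keep it loaded across reboots is modprobe plus a modules-load.d entry; this is a sketch of a typical setup, not necessarily what was done here:)

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf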

@zinovya
Author

zinovya commented Sep 2, 2021

I think I managed to get it working on Fedora. I ended up enabling a few more kernel config options: CONFIG_BRIDGE_NETFILTER, as you suggested, plus a few options that check-config.sh reported as optional. Unfortunately, I made the change in bulk, so I'm not sure exactly which option fixed it.
After that I hit this problem, docker/for-linux#219, and the workaround described there worked for me.
So now I can actually run your hello-world RISC-V container with no errors reported:

docker run -it carlosedp/debian:sid bash
root@3176b0200a39:/# ls
bin  boot  dev	etc  home  lib	media  mnt  opt  proc  root  run  sbin	srv  sys  tmp  usr  var
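
(Since the config change was done in bulk, one way to check afterwards which of those options actually ended up enabled in the running kernel is to grep the in-kernel config; this assumes the kernel was built with CONFIG_IKCONFIG_PROC, otherwise grep the build's .config instead:)

[root@fedora-riscv ~]# zcat /proc/config.gz | grep -E 'CONFIG_BRIDGE_NETFILTER|CONFIG_CGROUP'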

Thanks a lot for your help @carlosedp

@zinovya zinovya closed this as completed Sep 2, 2021
@carlosedp
Owner

Great it worked! Choosing the kernel options is usually a bit tricky.
The Moby project (upstream Docker) has a script that checks whether the required parameters are enabled; just pass it your .config: https://github.com/moby/moby/blob/master/contrib/check-config.sh
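
A typical invocation looks like this (the raw-file URL and the config path are assumptions, not something from this thread; with no argument the script should fall back to common locations such as /proc/config.gz or /boot/config-$(uname -r)):

curl -fsSL https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh -o check-config.sh
bash check-config.sh /path/to/your/.config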
