
cephfs: Cannot run systemd-run, assuming non-systemd OS #2890

Closed
discostur opened this issue Feb 18, 2022 · 3 comments
@discostur commented Feb 18, 2022

Hi,

I'm running CephFS via ceph-csi, and on some nodes I get the following error in the ceph-csi-cephfs-nodeplugin container:

I0218 03:20:47.221224       1 utils.go:202] ID: 103 GRPC response: {"usage":[{"available":104857600,"total":104857600,"unit":1},{"total":43645,"unit":2,"used":43646}]}
I0218 03:20:49.927809       1 utils.go:191] ID: 104 GRPC call: /csi.v1.Node/NodeGetCapabilities
I0218 03:20:49.927865       1 utils.go:195] ID: 104 GRPC request: {}
I0218 03:20:49.927936       1 utils.go:202] ID: 104 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
I0218 03:20:49.928764       1 utils.go:191] ID: 105 GRPC call: /csi.v1.Node/NodeGetVolumeStats
I0218 03:20:49.928809       1 utils.go:195] ID: 105 GRPC request: {"volume_id":"0001-0024-01578d80-6c97-46ba-9327-cb2b13980916-0000000000000001-8cb28e6b-2366-11eb-b0b4-0a580a2a02cd","volume_path":"/var/lib/kubelet/pods/84d81409-4c10-4763-82a4-3c1c234424bc/volumes/kubernetes.io~csi/pvc-05cdecd4-8155-4e34-8bf0-1a9f221931e3/mount"}
I0218 03:20:49.935100       1 mount_linux.go:218] Cannot run systemd-run, assuming non-systemd OS
I0218 03:20:49.935119       1 mount_linux.go:219] systemd-run output: System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to create bus connection: Host is down
, failed with: exit status 1

I get this error on some nodes but not on others. I'm not sure what it's related to, or whether it has any impact on provisioning volumes.

So my question is: is this error harmless, or is something actually going wrong here?

Thanks

Environment details

  • Helm chart version : v3.4.0 / v3.5.1
  • Kernel version : AlmaLinux 8.5 / 4.18.0-348.2.1.el8_5.x86_64
  • Mounter used for mounting PVC (for CephFS it's fuse or kernel; for RBD it's
    krbd or rbd-nbd) : cephFS
  • Kubernetes cluster version : 1.20.6
  • Ceph cluster version : 14.2.22
@Madhu-1 (Collaborator) commented Feb 21, 2022

@discostur This should not be a problem; it's coming from the Kubernetes package we use for mount operations. It is just an info log.

@discostur (Author)

@Madhu-1 OK, thanks - will close this ;)

@nixpanic (Member)

kubernetes/kubernetes#111083 has been proposed as a change to the k8s.io/mount-utils package. It will address the scary logging for NodeGetVolumeStats procedure calls.
