efs-plugin error: Detected OS without systemd
#1245
Comments
After updating the Kubernetes cluster from 1.26, I also noticed this error on one of the nodes.

Interesting: I see that the logs were not included in the bug report, so I am posting the logs of the current container.
As mentioned in my previous comment, we've observed that within our cluster the EFS driver consumes more memory when attempting to connect EFS storage to pods on newer Kubernetes versions. This increase in memory usage leads to Out Of Memory (OOM) errors. Following these OOM errors, the newly created EFS driver container fails to perform correctly, returning an error message stating "Detected OS without systemd." Our current workaround is to set a higher memory limit for the EFS driver container. This adjustment allows it to allocate more memory during node startup and successfully bind EFS storage to all pods. It seems to work.
In our case we increased the memory limit from …
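The memory-limit workaround described above could be expressed as a Helm values override; this is a minimal sketch assuming a Helm-based install of the chart, and the `node.resources` key layout and the `256Mi`/`1Gi` figures are illustrative assumptions, not values taken from this thread:

```yaml
# Hypothetical values.yaml override for the aws-efs-csi-driver chart.
# Key names and numbers are assumptions; check your chart version's
# default values before applying.
node:
  resources:
    requests:
      memory: 256Mi
    limits:
      memory: 1Gi   # raised limit so the node driver survives startup memory pressure
```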
I am closing this issue, as our problem is resolved. Please reopen the issue if your problem persists.
/kind bug
What happened?
I have deployed `efs-csi-driver` to our Kops-managed Kubernetes clusters. The chart version is `aws-efs-csi-driver-2.5.0`. We use `ubuntu-22.04` for our worker and master nodes. Recently I upgraded our clusters from Kubernetes version `1.26.6` to `1.27.8`. After this upgrade the `efs-csi-node` pods show the above error in the logs, `Detected OS without systemd`. After this error, the pod is not able to mount EFS volumes, and the pods that have EFS volumes are stuck in the `ContainerCreating` state.

To resolve this issue, I manually delete the `efs-csi-node`
pod, and at that point the issue is resolved. However, after a few minutes the error reappears in the logs.

What you expected to happen?
I expected the pods to start normally and mounts to work without any issues.
How to reproduce it (as minimally and precisely as possible)?
Perhaps running the same version of the chart on a Kops-managed Kubernetes cluster at version `1.27.8` on Ubuntu 22.04.

Anything else we need to know?
We still have a Kops-managed Kubernetes cluster using the same AMI for worker and master nodes, on Kubernetes version `1.26.6`, running the same version of the `efs-csi-driver` chart without any of the above issues.

Environment
- Kubernetes version (use `kubectl version`): `1.27.8`
- Driver version: `aws-efs-csi-driver-2.5.0`, app version `1.7.0`
Please also attach debug logs to help us better diagnose

I couldn't find any logs in this location, `/var/log/amazon/efs`.

Content of `/var/run/efs`:
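Since the usual on-node log location was empty, the driver's logs can also be pulled directly from the node pod. This is a sketch; the `kube-system` namespace, the `app=efs-csi-node` label, and the `efs-plugin` container name are assumptions based on a default chart install and should be verified in your cluster:

```shell
# List the node-driver pods (namespace and label selector are assumptions)
kubectl -n kube-system get pods -l app=efs-csi-node -o wide

# Tail the driver container's logs from all matching pods
kubectl -n kube-system logs -l app=efs-csi-node -c efs-plugin --tail=200
```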
Any help will be highly appreciated.
Thanks