cgroup failures with v1.8.0 #158
Comments
Tracked this down to mounting proc with:
Hmm. I’d rather not have to add this CAP. Maybe we can find a better way of determining the cgroupv2 version.
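For what it’s worth, one way to answer “which cgroup version is in use” without mounting proc (and so without the extra capability) is to statfs the cgroup mount point and compare the filesystem magic number. This is only a sketch of that idea, not the code that ended up in libnvidia-container:

```go
// Sketch only: detect whether the unified (v2) cgroup hierarchy is mounted
// by checking the filesystem magic of /sys/fs/cgroup. No proc mount and no
// extra capabilities are needed for statfs(2).
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// cgroupIsV2 reports whether /sys/fs/cgroup is a cgroup2 mount.
func cgroupIsV2() (bool, error) {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		return false, fmt.Errorf("statfs /sys/fs/cgroup: %w", err)
	}
	return st.Type == unix.CGROUP2_SUPER_MAGIC, nil
}

func main() {
	v2, err := cgroupIsV2()
	if err != nil {
		fmt.Println("detection failed:", err)
		return
	}
	fmt.Println("cgroup v2:", v2)
}
```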
I spent a little time thinking about this, and I think the correct fix is as follows. This ensures we have the same set of caps set (at the same time) as we did in the old cgroups implementation.
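The actual change lives in the libnvidia-container sources (C), so the following is only a rough, hypothetical Go sketch of the underlying idea: raise the required capability into the effective set just around the privileged step, then restore the previous state. It uses the third-party gocapability package purely for illustration:

```go
// Illustrative only: temporarily raise a capability in the effective set.
// The capability must already be in the permitted set for this to succeed.
package main

import (
	"log"

	"github.com/syndtr/gocapability/capability"
)

// withEffectiveCap runs fn with the given capability raised in the effective
// set, then restores the previous effective state.
func withEffectiveCap(c capability.Cap, fn func() error) error {
	caps, err := capability.NewPid2(0) // 0 means the current process
	if err != nil {
		return err
	}
	if err := caps.Load(); err != nil {
		return err
	}
	had := caps.Get(capability.EFFECTIVE, c)
	caps.Set(capability.EFFECTIVE, c)
	if err := caps.Apply(capability.CAPS); err != nil {
		return err
	}
	defer func() {
		if !had {
			caps.Unset(capability.EFFECTIVE, c)
			_ = caps.Apply(capability.CAPS)
		}
	}()
	return fn()
}

func main() {
	err := withEffectiveCap(capability.CAP_SYS_ADMIN, func() error {
		// privileged work (e.g. setting up the device cgroup) would go here
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```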
MR created to add this change: https://gitlab.com/nvidia/container-toolkit/libnvidia-container/-/merge_requests/137
Hi @NHellFire. We have just published NVIDIA Container Toolkit v1.8.1, which should address this issue. Please upgrade to the new version and let us know if the problem persists, or close this issue otherwise.
@elezar Tested and working, thanks!
Since v1.8.0, I'm unable to start any docker containers that use the GPU:
Had a look around and found that the original error message is being overwritten, so after making GetDeviceCGroupVersion print the error itself, I get: (see the error-wrapping sketch after this report)

Downgrading to v1.7.0 allows me to run containers again:
Docker info:
Other installed nvidia packages:
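As a side note on the overwritten error: the usual way to keep the root cause visible in Go is to wrap the underlying error rather than replace it with a generic message. The function name and path below are hypothetical stand-ins, not the project’s actual code:

```go
// Generic illustration: wrap the underlying error with %w so the root cause
// stays in the message and remains inspectable with errors.Is/As.
package main

import (
	"errors"
	"fmt"
	"os"
)

// getDeviceCGroupVersion is a stand-in for the real detection routine.
func getDeviceCGroupVersion(rootPath string) (int, error) {
	if _, err := os.Stat(rootPath); err != nil {
		// Wrapping preserves the original message instead of overwriting it
		// with a bare "failed to detect cgroup version".
		return 0, fmt.Errorf("failed to detect cgroup version at %q: %w", rootPath, err)
	}
	return 2, nil
}

func main() {
	if _, err := getDeviceCGroupVersion("/does/not/exist"); err != nil {
		fmt.Println(err)                            // full chain, root cause included
		fmt.Println(errors.Is(err, os.ErrNotExist)) // true: the cause is still inspectable
	}
}
```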