Create template for adding nvidia support to docker images #33
Comments
I agree that the multi-stage build seems fragile and seems to work against things being reproducible. From what I can see it's just installing some libraries and a JSON file, so it might be simpler for us to just build them in a normal build/release install pipeline. If they are reproducible that shouldn't be a problem. Hopefully by doing that we're not limited to the only two architectures they appear to be targeting?
Looks like Ubuntu 18.04 will provide packages for libglvnd, which would significantly simplify the Dockerfile.
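For reference, with those packages the GL side of the Dockerfile could reduce to a plain apt install, roughly like the sketch below (the package names are an assumption about what the Ubuntu 18.04 archive ships for libglvnd):

```dockerfile
FROM ubuntu:18.04
# Install the vendor-neutral GL/EGL client libraries from the distro archive,
# instead of copying them out of a multi-stage build.
RUN apt-get update && apt-get install -y --no-install-recommends \
        libglvnd0 libgl1 libglx0 libegl1 libgles2 \
    && rm -rf /var/lib/apt/lists/*
```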
I suppose so. Which package is minimally necessary in this case:
For 16.04 there are not many options: do something like you have above. Or use our own images for the base.
Other suggestions @3XX0?
What about just a Debian package that we inject to support
Just tried out the GL Vendor-Neutral Dispatch library with nvidia-docker2 using 18.04. Works pretty slick:
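The original snippet isn't reproduced here, but the run-time half of that setup is typically just two environment variables read by the nvidia container runtime; a minimal sketch (the values below are the commonly documented ones, not necessarily the exact ones used above):

```dockerfile
# Hints consumed by nvidia-docker2 / nvidia-container-runtime at run time:
# expose all GPUs and request the graphics + utility driver capabilities
# so OpenGL programs such as rviz can render.
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES graphics,utility
```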
Nice @ruffsl! Are you going to target only 18.04 then?
@flx42, targeting melodic images on 18.04 using libglvnd for nvidia-docker2 sounds reasonable. As @tfoote mentioned however, would it be possible to backport libglvnd to, say, 16.04 so nvidia-docker2 could just as easily be used on older maintained images? @tfoote, libglvnd appears to be available for Debian stretch in backports. https://packages.debian.org/search?keywords=libglvnd&searchon=names&suite=all&section=all
We could backport it if there's not already a package for it available and it will let things work well on xenial. I see it's first available on artful: https://packages.ubuntu.com/source/artful/libglvnd
+1 for 16.04 backport. Thanks for the hard work!
It's not handled by us, so I created a backport request:
I've been told a backport of this change for an LTS release carries risk, since you would need changes to the Mesa packages and our nvidia driver packages. Not saying it won't happen, but this use case is probably insufficient to convince them. So you should use Ubuntu 18.04 or Debian stretch, I guess.
@tfoote, do you think it would make sense to add the libglvnd install and ENVs for nvidia-docker directly to the templates? @mikaelarguedas, last time I think you modified the templates to whitelist distro-specific changes:
Certainly adding the
I have some WIP here: osrf/docker_images#165
Now that we have the ability to run docker images in nvidia docker without baking in the nvidia configuration beforehand, using https://github.com/osrf/rocker, I think it makes sense to not bake those things into the core images.
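For anyone landing here later, rocker usage is roughly the following (a sketch; the `--nvidia` and `--x11` flags are taken from the rocker README, check there for current options):

```bash
# Inject nvidia and X11 support into an unmodified image at run time,
# rather than baking it into the image beforehand.
rocker --nvidia --x11 osrf/ros:melodic-desktop-full rviz
```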
@tfoote, perhaps you could reference rocker in the docker GUI entries in the ROS wiki?
Context:
Supplementary Context:
Solution:
Perhaps we could add a template for adding the necessary configuration steps to support nvidia hardware acceleration in containers, for users display forwarding `rviz`, `gzviewer`, or other OpenGL dependent programs. This could be done by mimicking nvidia's own image build steps. Then we could simply add new tags for the OSRF docker repos that use the template for new child images.

Here is a rough but working example: using `--runtime=nvidia` and existing X11 forwarding methods. Perhaps @flx42 would have a better recommendation for something cleaner.
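For illustration, running such a child image would look roughly like this (a sketch; the `ros:melodic-nvidia` tag is a placeholder, not a published image):

```bash
# Run the nvidia-enabled child image with the nvidia runtime and the
# usual X11 socket forwarding so rviz/gzviewer can reach the host display.
docker run -it --rm \
    --runtime=nvidia \
    --env DISPLAY=$DISPLAY \
    --volume /tmp/.X11-unix:/tmp/.X11-unix:ro \
    ros:melodic-nvidia \
    rviz
```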
Personally, I find the necessity for multi-stage builds and the assortment of in-image copies not as elegant as mounting such files from the host and setting three lightweight variables in the child Dockerfile, as originally described in our ROS wiki:
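For comparison, the nvidia-docker v1 style child Dockerfile amounted to roughly the following (a sketch of that convention; the exact lines are on the wiki page referenced above):

```dockerfile
# nvidia-docker v1 (deprecated): the driver libraries are volume-mounted in
# from the host, so the image only needs to point at where they land.
LABEL com.nvidia.volumes.needed="nvidia_driver"
ENV PATH /usr/local/nvidia/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}
```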
But as nvidia docker v1 is deprecated and will become harder and harder to build from old sources, it would be nice to adopt a supported solution.