
GPU? #2

Open
MTRNord opened this issue Oct 2, 2016 · 22 comments

@MTRNord

MTRNord commented Oct 2, 2016

Is it possible to run OpenCV in a task generated by boinc2docker? (Meaning a task that needs GPU power.)

@MTRNord
Author

MTRNord commented Oct 2, 2016

Am I right that it would be possible if I override the plan_class_spec.xml file and define the GPU params?

@marius311
Owner

So I've never done any GPU computing myself. I think in theory this could work, and I'd be super interested if you found a way to make it work, but it'll probably require some effort. I think first you'd want to just boot up the boinc2docker vm_isocontext.iso in VirtualBox by hand and see what it takes to get access to the GPU from in there (according to this it'll probably involve enabling PCI passthrough). Then you just need to figure out how to do that from inside the Docker container inside the VM, which seems possible (e.g. this).
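For reference, a minimal sketch of what the VirtualBox side of this might look like on a Linux host (the VM name and PCI addresses here are hypothetical; per the VirtualBox manual, PCI passthrough requires an IOMMU-capable Linux host and the Extension Pack):

```shell
# Hypothetical sketch of VirtualBox PCI passthrough setup (Linux host only).
# Helper: build the host@guest argument that VBoxManage --pciattach expects.
pciattach_arg() { printf '%s@%s' "$1" "$2"; }

# Find the GPU's host PCI address (bus:device.function), e.g. "02:00.0":
#   lspci | grep -i vga

# Attach the device to the powered-off VM at a chosen guest address:
if command -v VBoxManage >/dev/null; then
    VBoxManage modifyvm "boinc2docker_vm" \
        --pciattach "$(pciattach_arg 02:00.0 01:05.0)"
fi
# Detach later with: VBoxManage modifyvm "boinc2docker_vm" --pcidetach 02:00.0
```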

@MTRNord
Author

MTRNord commented Oct 2, 2016

I think the server side is activated by adding the GPU params in plan_class_spec.xml, but at the moment I can't get the client to fetch it. If I'm able to get it running I'll tell you and maybe (if possible) make a pull request for it ;) other users might also need it in the future.
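For illustration, a hedged sketch of what such a GPU plan class might look like in plan_class_spec.xml (the class name and numbers here are made up; the tag names follow the BOINC plan-class spec documentation):

```xml
<!-- Hypothetical GPU plan class for a VirtualBox app; values are illustrative. -->
<plan_class_spec>
    <plan_class>
        <name>vbox64_cuda</name>
        <virtualbox/>
        <is64bit/>
        <gpu_type>nvidia</gpu_type>
        <cuda/>
        <min_gpu_ram_mb>512</min_gpu_ram_mb>
        <gpu_ram_used_mb>512</gpu_ram_used_mb>
        <cpu_frac>0.1</cpu_frac>
    </plan_class>
</plan_class_spec>
```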

@MTRNord
Author

MTRNord commented Oct 2, 2016

I think I got it to run with an NVIDIA GPU (CUDA); at least the client detects NVIDIA. I haven't tested the GPU itself yet. I also still have to fix some permission things and add AMD support.

Fork can be found here:
https://github.com/Nordgedanken/boinc2docker-gpu/tree/gpu

Still a work in progress, but it seems to do its job. (Update: somehow it doesn't anymore; it only slows down the CPU :%)

@MTRNord
Author

MTRNord commented Oct 2, 2016

Some explanation: first I built another app for CUDA, with CUDA enabled. Then I made another work generator script for the CUDA app name. Finally, I added some gpu_ args for GPU support.

Todo:

  • AMD (I started with CUDA because I have an NVIDIA card)
  • Remove some hardcoded things so that a single work generator file suffices, to keep it clean.
  • Hope that what the wiki says isn't true and vboxwrapper is able to use the GPU ^^
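One way to de-hardcode the work generator (the second item above) would be a single entry point parameterized by app name. A sketch, printed as a dry run; the script path and app names are assumptions, not taken from the fork:

```shell
# Hypothetical sketch: one work-generator entry point for all app variants,
# instead of one copied script per GPU vendor. Paths/app names are assumed.
build_work_cmd() {
    local appname="$1"; shift
    # boinc2docker_create_work.py passes through the usual create_work args
    echo "bin/boinc2docker_create_work.py --appname $appname $*"
}

build_work_cmd boinc2docker_cuda myimage/gpu-task:latest
```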

@MTRNord
Author

MTRNord commented Oct 2, 2016

So it gets recognized as a CUDA task but does not use the GPU yet :(

@MTRNord
Author

MTRNord commented Oct 3, 2016

So it now generates ATI and CUDA workunits on different apps, but they still don't seem to use the GPU.

@marius311
Owner

marius311 commented Oct 3, 2016

Yeah, I think getting the GPU recognized from inside VirtualBox/Docker will definitely require some tinkering. Any luck with the things mentioned in the links above?

@MTRNord
Author

MTRNord commented Oct 3, 2016

The Docker part works, I think, but I hadn't even looked at the PCI part xD so I have to look into it. I concentrated first on getting the work generators and configs running :) In my tests it already does some small face recognition, but that runs on the CPU. So at least the configs are working. I will look into how to get the GPU running in VirtualBox as soon as possible.

@MTRNord
Author

MTRNord commented Oct 3, 2016

And if I understand it correctly, I have to add some code to vboxwrapper and figure out how to get the PCI address that will be used. I also have to check whether the GPU will still be usable on the host system or will be disconnected.
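On the "still usable on the host" question, a sketch of how one might check which host driver is bound to the card before detaching it (the PCI address is hypothetical):

```shell
# Sketch: extract the "Kernel driver in use" field from `lspci -k` output.
# A GPU bound to nvidia/nouveau/amdgpu is still in use by the host; for
# passthrough it typically has to be unbound or bound to a stub driver first.
gpu_driver_in_use() {
    awk -F': ' '/Kernel driver in use/ { print $2 }'
}

# On a real host:  lspci -k -s 02:00.0 | gpu_driver_in_use
printf 'Kernel driver in use: nouveau\n' | gpu_driver_in_use
```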

@davidpanderson
Collaborator

I can help with vboxwrapper changes if you point me to docs saying how to access GPUs from within a VM.

@MTRNord
Author

MTRNord commented Oct 3, 2016

@davidpanderson that would be nice, because I have to say I have no idea about C :) Marius linked it above: https://www.virtualbox.org/manual/ch09.html#pcipassthrough, but that isn't a C API document.

@marius311
Owner

marius311 commented Oct 5, 2016

Hm, sorry, perhaps I was a bit too optimistic without reading more first. So I suppose this is really limited to Linux hosts and only the cards that VirtualBox supports PCI passthrough for?

If your end goal is BOINC GPU computing but you want to use Docker, maybe another interesting avenue to pursue is getting Docker BOINC apps running natively, à la BOINC/boinc#1620. This should let Linux/NVIDIA work somewhat painlessly, I think, and maybe even AMD with some hacking. Possibly at some point in the future one will be able to distribute native Windows Docker containers (they already exist for Server 2016), which themselves will likely be able to access the GPU. Anyway, this is all quite speculative and may not be interesting or useful to you at all, but just throwing out some ideas!

@MTRNord
Author

MTRNord commented Oct 6, 2016

OK :) so I will just stick to CPU computing first until that works. Maybe at some point it will work. :/

@MTRNord
Author

MTRNord commented Oct 6, 2016

Just a question (off-topic for this issue): is there a way to define in boinc2docker_create_work.py an expected computation time? The default is way too short for what is being computed.

@marius311
Owner

You can pass any of the usual create_work args listed here to boinc2docker_create_work. The one you want is --rsc_fpops_est.
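To pick a value: the scheduler derives a job's estimated runtime roughly as rsc_fpops_est divided by the host's benchmarked FLOPS, so you can back the number out from a target runtime. A sketch of the arithmetic (the numbers are examples, not measurements):

```shell
# Sketch: back out --rsc_fpops_est from a target runtime.
# expected runtime (s) x assumed host speed (GFLOPS) x 1e9 = estimated FP ops
expected_seconds=3600   # size the job for roughly one hour
assumed_gflops=10       # rough per-core host speed assumption
fpops_est=$(awk -v s="$expected_seconds" -v g="$assumed_gflops" \
    'BEGIN { printf "%.0f", s * g * 1e9 }')
echo "$fpops_est"
# then pass it along, e.g.:
#   bin/boinc2docker_create_work.py --rsc_fpops_est "$fpops_est" myimage
```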

@MTRNord
Author

MTRNord commented Oct 7, 2016

Ah, OK, thanks :) I didn't take a look at the original one, and I couldn't find it in the wiki.

@MTRNord
Author

MTRNord commented Oct 7, 2016

(Comment deleted because it does not change the task's/job's GFLOPS, only the workunit's.)

@cjmielke

Hi guys - I know this is a really old thread, but I'm currently building a distributed computing project to fight the coronavirus : https://github.com/cjmielke/quarantineAtHome

I'm building the client in Docker, and I have both a GPU and a non-GPU version. If either of you thinks it's possible to use boinc2docker to allow Windows users to run GPU jobs, that might help the project scale once it gets to that stage.

I figured I'd send a ping to you now, even though I'm still getting a rough sketch of the code working.

Thanks! Sorry for the noise.

@marius311
Owner

No worries, happy to discuss here, and cool that you're working on this. I'm not sure how the situation has changed since these comments several years ago, but the fundamental challenge is that boinc2docker jobs run inside Docker inside a VirtualBox VM. It's getting access to the GPU from inside the VM that's the challenge. Do you know if there have been any new developments there?

@cjmielke

I'm not sure about that, but I think I'll circle back. Here's my rough set of steps for now.

  1. Finish things so that Linux users can run it on their own.
  2. Come back to boinc2docker to get a CPU-only version that Windows users can run.
  3. Maybe then we can try minor tweaks to get passthrough working.

So I'll circle back then! Thanks for your great work! Hope you are sheltering in place well over there in Berkeley.

@marius311
Owner

Sounds good, and thanks! Also, just to make sure: have you seen https://github.com/BOINC/boinc-client-docker? A few GPU-capable containers exist there too.
