GPU? #2
Am I right that GPU support would be possible if I override the plan_class_spec.xml file and define the GPU params?
So I've never done any GPU computing myself. I think in theory this could work, and I'd be super interested if you found a way to make it work, but it'll probably require some effort. I think first you'd probably want to just boot up the boinc2docker vm_isocontext.iso in VirtualBox by hand and see what it takes to get access to the GPU from in there (according to this it'll probably involve enabling PCI passthrough). Then you just need to figure out how to do that from inside the Docker container inside the VM, which seems possible (e.g. this).
I think the server side is activated by adding a GPU plan class in plan_class_spec.xml, but so far I can't get the client to fetch it. If I get it running I will tell you and maybe (if possible) open a pull request for it ;) other users might need it too in the future.
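For reference, a GPU plan class in plan_class_spec.xml looks roughly like this (a sketch based on the BOINC server documentation; the class name and the resource numbers are placeholder assumptions, not tested values):

```xml
<plan_classes>
  <!-- hypothetical CUDA plan class; all values are illustrative -->
  <plan_class>
    <name>cuda</name>
    <gpu_type>nvidia</gpu_type>
    <cuda/>
    <min_driver_version>17700</min_driver_version>
    <gpu_ram_used_mb>256</gpu_ram_used_mb>
    <ngpus>1</ngpus>
    <cpu_frac>0.1</cpu_frac>
  </plan_class>
</plan_classes>
```

Note that, as far as I understand, the plan class also has to be attached to a GPU-capable app version before the scheduler will dispatch work for it.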
I think I got it to run with an NVIDIA GPU (CUDA); at least the client detects NVIDIA. I haven't tested the GPU itself yet. I also have to fix some permission issues and add AMD support. The fork can be found here: Still a work in progress, but it seems to do its job. (Update: somehow it doesn't anymore; it only uses the CPU :%)
Some explanation: 1. I first built another app for CUDA, which has CUDA enabled. 2. Then I made another work generator script for the CUDA app name. 3. I added some gpu_* args for GPU support. Todo:
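The steps above could look roughly like this on the server side (a sketch only; the app name boinc2docker_cuda and the container image are my own placeholder assumptions, and I'm assuming --appname is one of the standard create_work arguments that boinc2docker_create_work.py forwards):

```shell
# hypothetical second work generator call for the CUDA app variant;
# assumes an app named boinc2docker_cuda was already added to project.xml
bin/boinc2docker_create_work.py --appname boinc2docker_cuda \
    nvidia/cuda:8.0-runtime nvidia-smi
```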
So it will be recognised as a CUDA task but doesn't use the GPU yet :(
So it now generates ATI and CUDA workunits on different apps, but they still don't seem to use the GPU.
Yea, I think getting the GPU recognized from inside VirtualBox/Docker will definitely require some tinkering. Any luck with the things mentioned in the links above?
I think the Docker part works, but I haven't even looked at the PCI part yet xD so I have to look into it. I concentrated first on getting the work generators and configs running :) In my tests it already does some small face recognition, but that runs on the CPU. So at least the configs are working. I will look into getting the GPU running in the VBox as soon as possible.
And if I understand it correctly, I have to add some code to the vboxwrapper and figure out how to get the PCI address that will be used. I also have to check whether the GPU will still be usable on the host system or will be disconnected.
I can help with vboxwrapper changes if you point me to docs saying how to access GPUs from within a VM. |
@davidpanderson that would be nice, because I have to admit I have no idea about the C language :) marius linked it above: https://www.virtualbox.org/manual/ch09.html#pcipassthrough, but that isn't a C API document.
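Going by that chapter, the host-side part is just VBoxManage on a Linux host (a sketch; the VM name and the PCI addresses are placeholders for the real ones, and the VM must be powered off when you run it):

```shell
# find the GPU's host PCI address (e.g. 01:00.0)
lspci | grep -i vga

# attach it to the VM at a guest PCI slot, in host@guest form;
# PCI passthrough is supported on Linux hosts only
VBoxManage modifyvm "boinc2docker" --pciattach 01:00.0@01:05.0
```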
Hm, sorry, perhaps I was a bit too optimistic without reading more first. So I suppose this is really limited to Linux hosts and only some cards that VBox supports PCI passthrough for? If your end goal is BOINC GPU computing but you want to use Docker, maybe another interesting avenue to pursue is getting Docker BOINC apps running natively, à la BOINC/boinc#1620. This should let Linux/Nvidia work, I think, somewhat painlessly, and maybe even AMD with some hacking. Possibly at some point in the future one will be able to distribute native Windows Docker containers (they already exist for Server 2016), which themselves will likely be able to access the GPU. Anyway, this is all quite speculative and may not be interesting/useful to you at all, but just throwing out some ideas!
Ok :) so for now I will just stick to CPU computing until that works. Maybe at some point it will. :/
Just a question (off-topic for this issue): is there a way to define in boinc2docker_create_work.py an expected time that the computation will need? The default time is way too short for what is being calculated.
You can pass any of the usual "create_work" args listed here to boinc2docker_create_work.py.
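The one relevant here is probably rsc_fpops_est: BOINC derives a job's estimated runtime by dividing rsc_fpops_est by the host's benchmarked speed, so you can back out a value from the runtime you expect. A minimal sketch (the 3 GFLOPS default is just an assumed host speed, not anything the project measures):

```python
def rsc_fpops_est(expected_runtime_s: float, assumed_gflops: float = 3.0) -> float:
    """Pick an rsc_fpops_est so that on a host of the assumed speed,
    BOINC's estimated runtime matches the expected wall time."""
    return expected_runtime_s * assumed_gflops * 1e9

# e.g. a job expected to take ~1 hour on a ~3 GFLOPS host
print(f"{rsc_fpops_est(3600):.0e}")  # → 1e+13
```

The resulting number would then be passed as --rsc_fpops_est to the work generator.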
ah ok thanks :) I didn't take a look at the original one, and I couldn't find it in the wiki.
(comment deleted because it does not change the task's/job's GFLOPS, only the workunit's)
Hi guys - I know this is a really old thread, but I'm currently building a distributed computing project to fight the coronavirus: https://github.com/cjmielke/quarantineAtHome I'm building the client in Docker, and I have both a GPU and a non-GPU version. If either of you thinks it's possible to use boinc2docker to allow Windows users to run GPU jobs, that might help the project scale once it gets to that stage. I figured I'd ping you now, even though I'm still getting a rough sketch of the code working. Thanks! Sorry for the noise.
No worries, happy to discuss here, and cool that you're working on this. I'm not sure how the situation has changed since these comments several years ago, but the fundamental challenge is that boinc2docker jobs run inside Docker inside a VirtualBox VM. It's getting access to the GPU from inside the VM that's the challenge. Do you know if there are any new developments there?
I'm unsure of that, but I think I'll circle back. Here's my rough set of steps for now:
So I'll circle back then! Thanks for your great work! Hope you are sheltering in place well over there in Berkeley. |
Sounds good, and thanks! Also just to make sure, you have seen https://github.com/BOINC/boinc-client-docker ? A few GPU-capable containers exist there too. |
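On the native (non-VM) side, newer Docker can expose an NVIDIA GPU directly to a container via the --gpus flag (this needs Docker 19.03+ and the NVIDIA container toolkit installed on the host); a common smoke test, with the CUDA image tag as an assumption:

```shell
# requires Docker >= 19.03 and nvidia-container-toolkit on the host;
# should print the host GPU in nvidia-smi's table if passthrough works
docker run --rm --gpus all nvidia/cuda:10.2-base nvidia-smi
```

The same flag should let the GPU-capable boinc-client-docker containers see the device.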
Is it possible to run OpenCV in a task generated by boinc2docker? (Meaning a task that needs GPU power.)