GPU support through Work Queue #37

Open
RyanGutenkunst opened this issue Sep 26, 2022 · 0 comments
Labels
enhancement New feature or request

Comments

@RyanGutenkunst
Collaborator

It would be ideal to support GPU acceleration of optimization through Work Queue, but doing so is challenging, for several reasons.

We could use t.specify_gpus(1) to mark tasks for GPU execution. But we don't know a priori the right split between GPU and CPU tasks for efficient use of all resources. To be efficient, we would need to dynamically create tasks to refill the queue as existing tasks finish, tagging each new task as GPU or CPU as appropriate. This would require a significant rework of the Work Queue implementation in dadi-cli. It would also require specifying the resources available in the Work Queue pool ahead of time, which isn't necessary now and seems contrary to the Work Queue philosophy.
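The dynamic-refill logic could be sketched in plain Python (no Work Queue dependency; `plan_submissions` and everything else here is hypothetical, not part of dadi-cli):

```python
def plan_submissions(free_gpus, free_cpus, pending):
    """Tag up to `pending` new tasks "gpu" while GPU slots remain, else "cpu".

    Returns the list of tags for the tasks to create and submit right now;
    remaining pending work waits for the next refill pass.
    """
    tags = []
    for _ in range(pending):
        if free_gpus > 0:
            tags.append("gpu")
            free_gpus -= 1
        elif free_cpus > 0:
            tags.append("cpu")
            free_cpus -= 1
        else:
            break  # no free slots; stop creating tasks until something finishes
    return tags

print(plan_submissions(2, 3, 4))  # ['gpu', 'gpu', 'cpu', 'cpu']
```

With real Work Queue, each "gpu" tag would translate to calling t.specify_gpus(1) on the new task before submission. Note that the sketch only works if the pool's GPU and CPU counts are known up front, which is exactly the philosophical objection above.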

Note that having each task call dadi.cuda_enabled(True) on its own would likely lead to competition for the limited GPUs, potentially slowing overall performance.

Note also that PythonTasks don't preserve interpreter state, so we can't simply run dadi.cuda_enabled(True) ahead of time in the submitting process.
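The state problem can be demonstrated with a fresh Python process standing in for a PythonTask (GPU_ENABLED below is a stand-in for dadi's internal CUDA state, not dadi API):

```python
import subprocess
import sys

GPU_ENABLED = False  # stand-in for dadi's internal CUDA flag

def enable_gpu():
    # Stands in for dadi.cuda_enabled(True) run in the submitting process.
    global GPU_ENABLED
    GPU_ENABLED = True

enable_gpu()  # state set before "submitting" the task

# A PythonTask executes in a fresh interpreter, so it does not inherit the
# parent's state; simulate that with a new Python process.
child = subprocess.run(
    [sys.executable, "-c", "print(globals().get('GPU_ENABLED', False))"],
    capture_output=True, text=True, check=True,
)
print(child.stdout.strip())  # the fresh interpreter reports False
```

The upshot is that any GPU setup would have to happen inside each task's function body, which brings back the GPU-competition problem above.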

@RyanGutenkunst RyanGutenkunst added the enhancement New feature or request label Sep 26, 2022