
GPU Factorisation Solve #6

Closed
tyronerees opened this issue Mar 21, 2017 · 6 comments

@tyronerees
Member
We were able to run SSIDS (v2.0) on the GPU in the analyze phase, but we have not yet been able to use the GPU for the factorization and solve phases. How can we enable SSIDS to perform the factorization and solve phases on the GPU as well?

(submitted via hsl@stfc.ac.uk)

@tyronerees
Member Author

Closing the issue, as this behaviour was expected due to the size of the problem and the (default) options set.

@Poofee

Poofee commented Mar 23, 2017

What does that mean? Can we use the GPU in the factorization and solve phases?

@tyronerees
Member Author

Yes, this should work. If you're not finding that it does, please reopen the issue and we can investigate further. The code is set up so that the GPU is only used when it is worthwhile to do so: on small problems the GPU may not be used, as the overhead of transferring data to it is too expensive. This is (we think) what the user was experiencing. The behaviour can be controlled somewhat (but not completely) by changing the options min_gpu_work and gpu_perf_coeff.
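For reference, a minimal sketch of adjusting these options through the SSIDS C interface might look like the following. The header name, `spral_ssids_default_options`, and the exact field types are based on my reading of the SPRAL documentation, and the numeric values are purely illustrative, so please check against your installed version:

```c
#include <stdlib.h>
#include "spral.h"   /* top-level SPRAL header exposing the SSIDS C API */

int main(void) {
    struct spral_ssids_options options;

    /* Start from the library defaults, then adjust the GPU heuristics. */
    spral_ssids_default_options(&options);

    /* Lower the flop threshold so that smaller subtrees are still sent to
       the GPU (the value here is illustrative, not a recommendation). */
    options.min_gpu_work = 100000000;  /* 1e8 flops */

    /* Tell the scheduler the GPU is ~2x faster than all CPU cores combined,
       biasing more of the factorization workload towards the GPU. */
    options.gpu_perf_coeff = 2.0;

    /* ... then call spral_ssids_analyse / spral_ssids_factor /
       spral_ssids_solve with &options as usual ... */
    return EXIT_SUCCESS;
}
```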

@Poofee

Poofee commented Mar 23, 2017

OK, I will give it a try. If I run into any problems, I will report them here.

@flipflapflop
Collaborator

flipflapflop commented Mar 23, 2017

If you want more information about the options controlling the use of the GPU, take a look at the ssids_options section of the documentation.

Essentially, before the factorization phase, the workload is split between the CPUs and the GPU using the parameter gpu_perf_coeff, which corresponds to how much faster the GPU runs compared to the CPU (including all its cores). Then the parts of the workload assigned to the GPU are only processed there if the corresponding workload exceeds min_gpu_work.
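As a rough illustration of the two-step heuristic described above, here is a toy model in C. The proportional split formula is an assumption inferred from this description, not code taken from SSIDS itself:

```c
#include <stdio.h>

int main(void) {
    double total_flops    = 2.0e10; /* total factorization work (illustrative) */
    double gpu_perf_coeff = 2.0;    /* GPU assumed 2x faster than all CPU cores */
    double min_gpu_work   = 5.0e9;  /* flop threshold for using the GPU */

    /* Step 1: balance the load. A device assumed to be c times faster than
       the CPUs would get roughly c/(c+1) of the work. */
    double gpu_flops = total_flops * gpu_perf_coeff / (1.0 + gpu_perf_coeff);

    /* Step 2: only actually use the GPU if its share is large enough to
       amortise the host-to-device transfer overhead. */
    if (gpu_flops >= min_gpu_work)
        printf("GPU processes %.2e flops, CPUs the remaining %.2e\n",
               gpu_flops, total_flops - gpu_flops);
    else
        printf("GPU share too small (%.2e flops); everything stays on CPU\n",
               gpu_flops);
    return 0;
}
```

This also explains the behaviour reported at the top of the thread: for a small problem, the GPU's share of the work never reaches min_gpu_work, so the factorization and solve run entirely on the CPU.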

@Poofee

Poofee commented Mar 31, 2017

These days I have been using SSIDS. However, I am still confused about the GPU settings. How can I configure the GPU threads in detail?
