
MPI #106

Open
tkphd opened this issue Sep 16, 2017 · 4 comments

tkphd (Collaborator) commented Sep 16, 2017

An MPI implementation may be necessary, since MPI

  • is the standard distributed-memory interface
  • supports on-node shared memory without OpenMP, as of MPI-3 (see the sketch after these lists)
  • is required for multi-node architectures.

Such an implementation would enable HiPerC benchmarks of

  • OpenMP vs. MPI on multicore and SMP workstations
  • Raspberry Pi brambles
  • heterogeneous supercomputers in hybrid (MPI+X) mode.
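For illustration, a minimal sketch (not HiPerC code; sizes and names are placeholders) of the MPI-3 shared-memory model mentioned above: ranks on the same node share one allocation through a window, so on-node parallelism needs no OpenMP.

```c
/* Minimal sketch of MPI-3 shared-memory windows. Not HiPerC code;
 * the slice size is a placeholder. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    /* Split COMM_WORLD into one communicator per shared-memory node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Each rank contributes a slice; the window exposes the node's memory. */
    const MPI_Aint slice = 1024; /* doubles per rank: placeholder */
    double* my_slice = NULL;
    MPI_Win win;
    MPI_Win_allocate_shared(slice * sizeof(double), sizeof(double),
                            MPI_INFO_NULL, node_comm, &my_slice, &win);

    /* Query rank 0's base pointer to address the contiguous node-wide array
     * (allocation is contiguous by default). */
    MPI_Aint size0;
    int disp0;
    double* node_base = NULL;
    MPI_Win_shared_query(win, 0, &size0, &disp0, &node_base);

    printf("node rank %d of %d sees shared base %p\n",
           node_rank, node_size, (void*)node_base);

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
```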
amjokisaari (Collaborator) commented:

👍 on the RasPi...

I know this would be a large scope increase, but I think it would be a really good one.

tkphd (Collaborator, Author) commented Sep 18, 2017

I have mixed feelings about this.

  • Benchmarking my 6-RPi3 Bramble would be fun, but not especially useful. The ARM Cortex-A53 does have hardware floating-point support, but it is not fundamentally designed for numerically intensive parallel workloads.
  • There's a lot of debate over whether hybrid MPI+OpenMP achieves better performance than pure MPI. It would definitely be useful, and within scope, for HiPerC to shed some quantitative light on that question using cluster hardware with high-core-count nodes (see the hybrid sketch below).
  • Retrofitting HiPerC for MPI is a rabbit hole, especially if users' hardware spans everything from a single RPi3 to PFLOP-class supercomputers.

The MPI implementation remains an interest, but it's not a priority in the near term.
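For reference, a minimal sketch of what the hybrid (MPI+X) side of that comparison looks like with X = OpenMP: one rank per node or socket, threads inside each rank. This is not HiPerC code; the array size and the reduction are placeholders.

```c
/* Minimal hybrid MPI+OpenMP sketch. Not HiPerC code; N_LOCAL is a placeholder.
 * Build with, e.g., mpicc -fopenmp. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N_LOCAL 1000000

int main(int argc, char* argv[])
{
    /* FUNNELED: only the main thread makes MPI calls; threads do the compute. */
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    static double u[N_LOCAL];
    double local_sum = 0.0;

    /* Thread-parallel update of this rank's sub-domain. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < N_LOCAL; i++) {
        u[i] = (double)(rank + i) * 1.0e-6;
        local_sum += u[i];
    }

    /* Rank-parallel reduction across the machine. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d ranks x %d threads, sum = %g\n",
               size, omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}
```

The pure-MPI variant of the benchmark would simply launch more ranks and drop the OpenMP pragma; the comparison is then ranks-per-node vs. threads-per-rank at fixed core count.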

amjokisaari (Collaborator) commented Sep 18, 2017

Ha! I think we are actually in complete agreement on all your points.

I am not actually serious about RPi... sorry if that caused any confusion. While it's one of those "fun challenge problems," I don't envision that it would ever be relevant for actual modeling and simulation.

I am, however, curious about shared vs. distributed memory, and about both of those vs. the hybrid model. That DOES have relevance. Though the results are likely specific to the mathematical approach (finite difference, explicit time stepping), it would definitely be a useful thing to know; see the halo-exchange sketch below.

Maybe MPI could go into HiPerC v2.0 or something... long term strategic vision. Obviously there's existing project development to perform first before increasing the scope.
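A minimal sketch of the distributed-memory piece of that comparison for explicit finite-difference time stepping: a 1-D domain split across ranks, with one ghost cell exchanged at each end per step via MPI_Sendrecv. This is not HiPerC code; the grid size, stencil coefficient, and initial condition are placeholders.

```c
/* 1-D explicit diffusion with halo exchange. Not HiPerC code; all
 * parameters are placeholders. */
#include <mpi.h>
#include <stdio.h>

#define NX_LOCAL 1000   /* interior points per rank (placeholder) */
#define NSTEPS   100

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    const int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    /* u[0] and u[NX_LOCAL+1] are ghost cells. */
    static double u[NX_LOCAL + 2], u_new[NX_LOCAL + 2];
    for (int i = 0; i < NX_LOCAL + 2; i++)
        u[i] = (rank == 0 && i == 1) ? 1.0 : 0.0;  /* toy initial condition */

    const double r = 0.25;  /* D*dt/dx^2, stable for explicit Euler */

    for (int step = 0; step < NSTEPS; step++) {
        /* Halo exchange: send edge values, receive neighbors' into ghosts. */
        MPI_Sendrecv(&u[1],          1, MPI_DOUBLE, left,  0,
                     &u[NX_LOCAL+1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[NX_LOCAL],   1, MPI_DOUBLE, right, 1,
                     &u[0],          1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Explicit update of the interior using the three-point stencil. */
        for (int i = 1; i <= NX_LOCAL; i++)
            u_new[i] = u[i] + r * (u[i-1] - 2.0 * u[i] + u[i+1]);
        for (int i = 1; i <= NX_LOCAL; i++)
            u[i] = u_new[i];
    }

    if (rank == 0)
        printf("done: %d ranks, %d steps\n", size, NSTEPS);

    MPI_Finalize();
    return 0;
}
```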

tkphd (Collaborator, Author) commented Sep 18, 2017

Setting it aside for HiPerC v2.0 is a pretty good idea: focus on shared memory (as the README states) for the first round of benchmarks, work hard to get the word out on that front and maybe incorporate more interesting numerical methods, and then do hybrid implementations with MPI when there's community demand for it, or a compelling use case.

tkphd added this to the v2.0 milestone Sep 18, 2017