Replies: 5 comments 2 replies
-
The async_mpi API closely follows the underlying MPI functions. For instance:
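(taking a plain non-blocking receive as an illustrative case; `buf`, `count`, `source` and `tag` are placeholders):

```cpp
// plain MPI: post the non-blocking receive, then block until it completes
MPI_Request request;
MPI_Irecv(buf, count, MPI_INT, source, tag, MPI_COMM_WORLD, &request);
MPI_Wait(&request, MPI_STATUS_IGNORE);
```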
turns into:
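(a sketch following the pattern used in the HPX tests; the exact value type of the future can differ between HPX versions, hence `auto`):

```cpp
// async_mpi: the executor supplies the communicator and the request,
// so only the remaining MPI arguments are passed explicitly
hpx::mpi::experimental::executor exec(MPI_COMM_WORLD);
auto f = hpx::async(exec, MPI_Irecv, buf, count, MPI_INT, source, tag);
f.get();    // takes the place of MPI_Wait: buf is valid only after this
```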
where the future will hold the returned status (once the operation has finished executing). The same scheme can be applied to any MPI call that expects a communicator and a request as its last two arguments. Also, please note that you need to enable the integration of the async MPI functionality with the HPX scheduler once, before doing any async MPI calls:
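A minimal sketch, assuming the RAII helper the HPX tests use for this purpose:

```cpp
// enable MPI polling on the HPX scheduler once; it stays active for the
// lifetime of this object and is switched off again on destruction
hpx::mpi::experimental::enable_user_polling enable_polling;

// ... all hpx::async(exec, MPI_...) calls have to happen while polling is on
```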
You may want to look at this test to see everything in action: https://github.com/STEllAR-GROUP/hpx/blob/master/libs/core/async_mpi/tests/unit/mpi_ring_async_executor.cpp. Also, please report back on the performance results you gather. This will be interesting for others as well.
-
I have built an example to test the HPX async_mpi functionality, comparing it with non-blocking p2p communication for matrix multiplication. At present I simply use async_mpi in place of MPI_Isend and MPI_Irecv. Before porting it into my CFD code, I have to study the usage of HPX async_mpi. If I gather some information, I will post it here.
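For reference, a sketch of what that substitution can look like for a single send/receive pair (`send_buf`, `recv_buf`, `count`, `rank_to`, `rank_from` and `tag` are placeholders; the executor and wait pattern follow the ring test linked above):

```cpp
hpx::mpi::experimental::executor exec(MPI_COMM_WORLD);

// start both transfers; each call returns a future tied to its MPI request
auto f_send = hpx::async(exec, MPI_Isend, send_buf, count, MPI_DOUBLE, rank_to, tag);
auto f_recv = hpx::async(exec, MPI_Irecv, recv_buf, count, MPI_DOUBLE, rank_from, tag);

hpx::wait_all(f_send, f_recv);    // replaces MPI_Waitall on the two requests
```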
-
Dear Sir: // If I change the above MPI_Irecv into the HPX async_mpi API, the results become wrong. }
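A general note on such conversions (sketched with placeholder names, since the full code is not shown here): the returned future takes over the role of the explicit MPI_Wait, so the receive buffer may only be read after the future has become ready, e.g.:

```cpp
auto f = hpx::async(exec, MPI_Irecv, recv_buf, count, MPI_INT, source, tag);
// recv_buf must not be read here -- the receive may still be in flight
f.get();    // this stands in for the MPI_Wait on the request
// only now is recv_buf guaranteed to contain the received data
```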
-
Dear Sir:
-
Dear Sir:
-
I notice that async_mpi in HPX is experimental, so is there any example code showing how to convert non-blocking MPI calls into async_mpi? I want to compare the efficiency of the two approaches. Thanks,
Li Jian