
Incorrect usage of non-blocking MPI collectives #17

Open · lukasm91 opened this issue Feb 29, 2024 · 6 comments

Comments

@lukasm91

We found an issue in the usage of non-blocking collectives, for example:

https://github.com/ecmwf-ifs/fiat/blob/main/src/fiat/mpl/internal/mpl_alltoallv_mod.F90#L252

The variables IRECVDISPL and ISENDDISPL are local variables that do not outlive the function call, yet the MPI standard requires them to remain valid until the collective completes; see https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report/node126.htm:

> Once initiated, all associated send buffers and buffers associated with input arguments (such as arrays of counts, displacements, or datatypes in the vector versions of the collectives) should not be modified, and all associated receive buffers should not be accessed, until the collective operation completes.

Other routines such as MPL_ALLGATHERV have the same issue.
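
To make the lifetime hazard concrete, here is a minimal sketch of the pattern (hypothetical names and a heavily simplified interface, not the actual fiat code):

```fortran
! Minimal sketch of the problematic pattern (hypothetical names,
! heavily simplified from the MPL internals).
subroutine sketch_ialltoallv(sendbuf, recvbuf, counts, request)
  use mpi
  implicit none
  real,    intent(in)    :: sendbuf(:)
  real,    intent(inout) :: recvbuf(:)
  integer, intent(in)    :: counts(:)
  integer, intent(out)   :: request
  integer, allocatable   :: isenddispl(:), irecvdispl(:)
  integer :: i, ierr

  ! Build the displacement arrays as local temporaries.
  allocate(isenddispl(size(counts)), irecvdispl(size(counts)))
  isenddispl(1) = 0
  do i = 2, size(counts)
    isenddispl(i) = isenddispl(i-1) + counts(i-1)
  end do
  irecvdispl = isenddispl

  call MPI_IALLTOALLV(sendbuf, counts, isenddispl, MPI_REAL, &
       &              recvbuf, counts, irecvdispl, MPI_REAL, &
       &              MPI_COMM_WORLD, request, ierr)

  ! BUG: isenddispl and irecvdispl are deallocated on return, but MPI
  ! may still read them until MPI_WAIT(request, ...) completes.
end subroutine sketch_ialltoallv
```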

In the latest HPC-X version, we get a segfault because of this. A quick workaround is to disable non-blocking communication for collectives; a proper fix is probably to make these displacements not ALLOCATABLE but POINTER, and to require the optional input when non-blocking communication is used.
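
A sketch of that direction (hypothetical interface, not an actual patch): point at caller-owned displacement arrays instead of allocating temporaries, so nothing library-local has to survive the call:

```fortran
! Sketch of the suggested fix direction (hypothetical interface):
! the displacements become POINTERs to caller-owned TARGET arrays,
! and the non-blocking path refuses to run without them.
subroutine sketch_mpl_ialltoallv(sendbuf, recvbuf, counts, request, &
     &                           ksenddispl, krecvdispl)
  use mpi
  implicit none
  real,    intent(in)    :: sendbuf(:)
  real,    intent(inout) :: recvbuf(:)
  integer, intent(in)    :: counts(:)
  integer, intent(out)   :: request
  integer, intent(in), optional, target :: ksenddispl(:), krecvdispl(:)
  integer, pointer :: isenddispl(:), irecvdispl(:)
  integer :: ierr

  if (present(ksenddispl) .and. present(krecvdispl)) then
    isenddispl => ksenddispl   ! lifetime is the caller's responsibility
    irecvdispl => krecvdispl
  else
    write(0,*) 'non-blocking alltoallv needs caller-owned displacements'
    call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)
  end if

  call MPI_IALLTOALLV(sendbuf, counts, isenddispl, MPI_REAL, &
       &              recvbuf, counts, irecvdispl, MPI_REAL, &
       &              MPI_COMM_WORLD, request, ierr)
end subroutine sketch_mpl_ialltoallv
```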

@wdeconinck
Collaborator

Thanks @lukasm91 for this report! So I understand now that most MPI implementations internally call the blocking MPI_ALLTOALLV from MPI_IALLTOALLV, which is why this problem went under the radar for the past decades. It makes sense that the (optional) arguments KRECVDISPL and KSENDDISPL must stay alive as long as MPI_WAIT has not been called.
I would therefore suggest aborting if the optional arguments are not present (a breaking change for IFS/Arpege).
This will require changes in ifs-source to make sure this is indeed the case.

An alternative, backward-compatible fix (perhaps temporary, but these things tend to stick) is to manually call the blocking MPI_ALLTOALLV when the arguments are missing, but that would just hide the problem and we'd all mistakenly think we're using the non-blocking version.
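
As a sketch of that alternative (hypothetical names again; this drops in for the abort branch of the sketch above, reusing its names plus a loop index i):

```fortran
! Sketch of the backward-compatible fallback (hypothetical names):
! this replaces the abort branch in the sketch above.
if (present(ksenddispl) .and. present(krecvdispl)) then
  call MPI_IALLTOALLV(sendbuf, counts, ksenddispl, MPI_REAL, &
       &              recvbuf, counts, krecvdispl, MPI_REAL, &
       &              MPI_COMM_WORLD, request, ierr)
else
  ! Compute displacements in local temporaries; safe here only
  ! because the blocking call completes before they go away.
  allocate(isenddispl(size(counts)), irecvdispl(size(counts)))
  isenddispl(1) = 0
  do i = 2, size(counts)
    isenddispl(i) = isenddispl(i-1) + counts(i-1)
  end do
  irecvdispl = isenddispl
  call MPI_ALLTOALLV(sendbuf, counts, isenddispl, MPI_REAL, &
       &             recvbuf, counts, irecvdispl, MPI_REAL, &
       &             MPI_COMM_WORLD, ierr)
  ! MPI_WAIT on MPI_REQUEST_NULL returns immediately, so existing
  ! callers keep working, but the exchange was actually blocking.
  request = MPI_REQUEST_NULL
end if
```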

Two cents, @ioanhadade @marsdeno?

@ioanhadade
Collaborator

Good catch. We need to work on a fix, as we have started using non-blocking collectives in some places and plan to use them more; I believe an igather was introduced recently in opdis.F90. In most cases we pass local arrays from the calling routine as arguments to the MPL collective, which means that if the wait for completion is not done in the same scope (e.g., the calling routine) but somewhere later, the arguments are out of scope; see the sketch below.

Thanks @lukasm91.
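
For illustration, the hazardous caller-side pattern described above looks roughly like this (made-up names; MPI is called directly here, but the same applies through the MPL wrappers):

```fortran
! Sketch of the hazardous caller-side pattern (hypothetical names):
! counts/displs are locals of the initiating routine, but the wait
! happens later, in a different scope.
subroutine start_exchange(sendbuf, recvbuf, nranks, request)
  use mpi
  implicit none
  real,    intent(in)    :: sendbuf(:)
  real,    intent(inout) :: recvbuf(:)
  integer, intent(in)    :: nranks
  integer, intent(out)   :: request
  integer :: counts(nranks), displs(nranks), i, ierr

  counts = size(sendbuf) / nranks
  displs = [( (i-1)*counts(1), i = 1, nranks )]
  call MPI_IALLTOALLV(sendbuf, counts, displs, MPI_REAL, &
       &              recvbuf, counts, displs, MPI_REAL, &
       &              MPI_COMM_WORLD, request, ierr)
end subroutine start_exchange
! The caller later does MPI_WAIT(request, status, ierr), but by then
! counts and displs no longer exist: the same use-after-scope hazard.
```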

@lukasm91
Author

lukasm91 commented Nov 1, 2024

FYI: Alexey from BSC also ran into this when running IFS. For the moment we just disable UCC (OMPI_MCA_coll_ucc_enable=0).

@a-v-medvedev

Thanks @lukasm91 for highlighting this. I ran into this issue while working on the DestinE/ClimateDT code to make it run properly on the MareNostrum5 machine (we use fiat-1.2.0 there for now).

From my previous experience, I'd say that this wrong-semantics issue with MPI_IAlltoallv() is quite severe: many MPI implementations are nowadays moving to more advanced algorithms for non-blocking collectives, so this will lead to random, occasional, or persistent crashes (depending on how lucky you are) on many new machines and runtime environments.

I'd suggest implementing an urgent hotfix ASAP. I don't seem to have permission to create branches in this project, so instead of a pull request I'm attaching a variant of a hotfix patch here: hotfix-alltoall-issue-github#17.patch.gz. This is just one idea for a fix (even though I tested it in my environment recently); one could implement another type of fix based on this or a similar idea. Any comments and corrections are welcome.
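
For readers without access to the attachment, one common shape for this kind of hotfix (an illustrative sketch only; the attached patch may well do something different) is to keep library-owned copies of the displacement arrays alive until the matching wait:

```fortran
! Illustrative sketch only (hypothetical module; not necessarily what
! the attached patch does): stash deep copies of the displacement
! arrays per request, and free them when the request is waited on.
module pending_displ_mod
  implicit none
  type :: pending_t
    integer :: request = -1
    integer, allocatable :: senddispl(:), recvdispl(:)
  end type
  type(pending_t), allocatable :: pending(:)
contains
  subroutine stash(request, senddispl, recvdispl)
    integer, intent(in) :: request, senddispl(:), recvdispl(:)
    type(pending_t) :: entry
    entry%request   = request
    entry%senddispl = senddispl   ! deep copies owned by the module
    entry%recvdispl = recvdispl
    if (.not. allocated(pending)) allocate(pending(0))
    pending = [pending, entry]
  end subroutine stash
  subroutine release(request)   ! call right after the matching MPI_WAIT
    integer, intent(in) :: request
    if (allocated(pending)) pending = pack(pending, pending%request /= request)
  end subroutine release
end module pending_displ_mod
```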
@wdeconinck

@ioanhadade
Collaborator

@a-v-medvedev can you fork fiat, add your patch on top of develop/master, and then raise a PR to develop?

@wdeconinck
Collaborator

The hotfix in #29 (thanks @a-v-medvedev) is sufficient to urgently patch the crashing behaviour.
A proper fix will be worked on internally with priority. This issue should not be closed until that proper fix is in place.
