reduce summary variables using MPI #241

Closed

Conversation

conradtchan
Collaborator

Type of PR:
Bug fix

Description:
The summary variables `wake`, `floor`, and `tolv` were not reduced across MPI tasks, resulting in an incorrect printout. This did not seem to have any effect on the actual results.
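As a rough illustration of the fix (not the actual patch; Phantom's own reduction wrappers would normally be used, and the variable kinds and the choice of a maximum reduction are assumptions):

```fortran
! Illustrative sketch only: reduce per-task summary scalars so that every
! task prints the same values. Kinds and the MPI_MAX operation are assumed.
subroutine reduce_summary_vars(wake,floor,tolv)
 use mpi
 real(kind=8), intent(inout) :: wake,floor,tolv
 integer :: ierr

 call MPI_ALLREDUCE(MPI_IN_PLACE,wake, 1,MPI_DOUBLE_PRECISION,MPI_MAX,MPI_COMM_WORLD,ierr)
 call MPI_ALLREDUCE(MPI_IN_PLACE,floor,1,MPI_DOUBLE_PRECISION,MPI_MAX,MPI_COMM_WORLD,ierr)
 call MPI_ALLREDUCE(MPI_IN_PLACE,tolv, 1,MPI_DOUBLE_PRECISION,MPI_MAX,MPI_COMM_WORLD,ierr)
end subroutine reduce_summary_vars
```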

Testing:
Ran a test problem to confirm that the summary variables were the same across all tasks.

Did you run the bots? no, not required

@conradtchan conradtchan mentioned this pull request Feb 21, 2022
@conradtchan
Collaborator Author

phantomSPH/phantom-benchmarks#17 needs to be resolved before any new PRs can be merged. Because a large backlog of changes is waiting to be tested and merged once the benchmark suite has been fixed, this PR has been closed and bundled into #243 along with other MPI-related fixes to reduce the testing load.

@conradtchan conradtchan deleted the mpi-summary-variables branch February 21, 2022 01:23
conradtchan added a commit that referenced this pull request Feb 22, 2022
Type of PR: 
Bug fix, modifications to existing code

Description:
This PR includes several bugfixes. Three fixes have been bundled from other PRs to reduce the testing load once the phantom-benchmarks suite has been fixed.

#239: 
A bug was introduced in #96: particles are assigned new IDs if a file without IDs is read, and those IDs are erroneously duplicated across MPI tasks. This is fixed by applying a per-task offset to ensure all IDs are unique.
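A minimal sketch of the offset idea, assuming 64-bit particle IDs and using a prefix sum over local particle counts (the routine and variable names here are hypothetical, not Phantom's):

```fortran
! Illustrative sketch only: start each task's new IDs after the total number
! of particles on lower-ranked tasks, so IDs never collide across tasks.
subroutine assign_unique_ids(npart,ids)
 use mpi
 integer,         intent(in)  :: npart
 integer(kind=8), intent(out) :: ids(npart)
 integer(kind=8) :: np8,offset
 integer :: i,ierr

 np8 = npart
 call MPI_SCAN(np8,offset,1,MPI_INTEGER8,MPI_SUM,MPI_COMM_WORLD,ierr)
 offset = offset - np8   ! exclusive prefix sum: particles on lower ranks only
 do i=1,npart
    ids(i) = offset + i
 enddo
end subroutine assign_unique_ids
```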

#240:
`npartoftypetopt=reduceall_mpi('+',npartoftype)` is used in many places in the code, but it is always calculated on the fly. It only needs to be recalculated when the number of particles of any type on a task has changed. This PR stores it as a global variable in `part` and defines a subroutine to recalculate it from `npartoftype`.
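A sketch of that structure, following the call quoted above (the module name, array sizes, and kinds are assumptions, not the actual Phantom code):

```fortran
! Illustrative sketch only: keep the global per-type count as module data and
! recompute it only when the local counts change, rather than on the fly.
module part_counts
 use mpiutils, only:reduceall_mpi    ! Phantom's reduction wrapper, as quoted above
 implicit none
 integer, parameter :: maxtypes = 8            ! assumed number of particle types
 integer :: npartoftype(maxtypes)   = 0        ! local counts per type
 integer :: npartoftypetopt(maxtypes) = 0      ! global counts (spelling as in the PR)
contains
 subroutine update_npartoftypetopt
  npartoftypetopt = reduceall_mpi('+',npartoftype)
 end subroutine update_npartoftypetopt
end module part_counts
```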

#241:
The summary variables `wake`, `floor`, and `tolv` were not reduced across MPI tasks, resulting in an incorrect printout. This did not seem to have any effect on the actual results.

Other small bugfixes:
- replace `MPI_TYPE_STRUCT` with `MPI_TYPE_CREATE_STRUCT`: `MPI_TYPE_STRUCT` was deprecated in MPI 2.0 in favour of `MPI_TYPE_CREATE_STRUCT`, which has a near-identical interface and the same behaviour, and it was removed from the standard in MPI 3.0. `MPI_TYPE_STRUCT` still compiles and runs, but gives incorrect results (see the sketch after this list)
- reduce `dtextforce` across MPI tasks, and only print to the log from the master task
- remove an unused variable declaration (`dtype_old`)
- reduce the box size across MPI tasks in `initial`
- use `ntot` instead of `npart` for `nskip`, since `npart` is only the local particle count
- use `ntot` instead of `npart` when printing `dtlog`
- change the particle memory allocation ratio, and make it consistent between the Fortran and HDF5 outputs
- optimisations to the leapfrog step routine: don't reduce `ptmass` variables unless necessary
- use `kind=8` for `np` in timestep routines
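For the `MPI_TYPE_STRUCT` item, a hedged sketch of the replacement call; the derived-type layout here is invented purely for illustration and is not Phantom's particle data:

```fortran
! Illustrative sketch only: build a committed MPI datatype with
! MPI_TYPE_CREATE_STRUCT (address-kind displacements) instead of the
! removed MPI_TYPE_STRUCT (default-integer displacements).
subroutine make_particle_mpi_type(dtype)
 use mpi
 integer, intent(out) :: dtype
 type :: partdata                       ! made-up layout for illustration
    real(kind=8) :: xyzh(4)
    integer      :: itype
 end type partdata
 type(partdata) :: p
 integer :: blens(2),types(2),ierr
 integer(kind=MPI_ADDRESS_KIND) :: displs(2),base

 blens = (/4,1/)
 types = (/MPI_DOUBLE_PRECISION,MPI_INTEGER/)
 call MPI_GET_ADDRESS(p%xyzh(1),base,ierr)
 call MPI_GET_ADDRESS(p%itype,displs(2),ierr)
 displs(1) = 0_MPI_ADDRESS_KIND
 displs(2) = displs(2) - base          ! offset of itype relative to xyzh
 call MPI_TYPE_CREATE_STRUCT(2,blens,displs,types,dtype,ierr)
 call MPI_TYPE_COMMIT(dtype,ierr)
end subroutine make_particle_mpi_type
```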

Testing:
Full test suite

Did you run the bots? yes