Add short range neighbor methods #4401
Conversation
The most recent version can now be found here: https://github.com/jhossbach/espresso/tree/short_range_neighbors_patch
Remaining ToDos: …
One of the changes introduced an error with the mpi_... functions; I will check and push an updated version.
Force-pushed from 2d3cfa8 to 7cefa16.
Looks good to me! Once the remaining proposed changes are in, we can move forward with the merge.
There is a problem with getting the right distances when using more than one MPI rank; this needs to be addressed first.
@RudolfWeeber could you please have a look at this PR? Somehow running a short-range loop (e.g. to get the list of neighbors) updates the ghost information, but only in the cell system, so we end up with a corrupted particle node cache. This is why analysis functions that act on particle positions return results based on the old positions when run after particles have been moved, unless we run the integrator with 0 steps. To reproduce the issue, run the test on 3 MPI ranks. This triggers a segfault due to a corrupted particle node cache. When replacing

```cpp
if (pnode == this_node) {
  assert(cell_structure.get_local_particle(part));
  return *cell_structure.get_local_particle(part);
}
```

by

```cpp
if (pnode == this_node) {
  auto const p_ptr = cell_structure.get_local_particle(part);
  if (p_ptr) {
    return *p_ptr;
  }
}
```

in

```cpp
if (cell_structure.get_resort_particles()) {
  cells_update_ghosts(global_ghost_flags());
}
```

that was introduced by this PR in …

The more fundamental issue here is that doing

```python
system.part.all().pos = np.random.random((20, 3)) * system.box_l
```

does not update the particle properties, but instead queues the change until the next integration loop. So at the minimum one has to run `system.integrator.run(0, recalc_forces=False)` before attempting to update particle positions again. This also explains why the …
Please try …
The resort which was already present means that particles can move from one node to another. Then the particle node cache needs to be invalidated, which is done in the above-mentioned function.
The assertion in the …
I added a sanity check to prevent searching for particles at distances larger than the cell size.
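A sketch of how such a guard might surface on the Python side; the exact exception type and message are assumptions, not taken from the PR:

```python
import espressomd

system = espressomd.System(box_l=[10.0, 10.0, 10.0])
system.cell_system.skin = 0.4
p = system.part.add(pos=[5.0, 5.0, 5.0])

try:
    # A search radius larger than the cell size should now be rejected.
    system.cell_system.get_neighbors(p, distance=100.0)
except Exception as err:
    print(f"search rejected: {err}")
```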
The …
Force-pushed from f5bba5b to 9a0808a.
Implement parallel kernels to find particle pairs within a cutoff distance of a specific particle and to calculate the short-range non-bonded energy of a specific particle.

Co-authored-by: Jean-Noël Grad <jgrad@icp.uni-stuttgart.de>
Co-authored-by: Rudolf Weeber <weeber@icp.uni-stuttgart.de>
Force-pushed from 9a0808a to a257acf.
Fixes #4399, fixes #4438
Description of changes:
- `CellStructure.cpp`: added the `CellStructure::run_on_particle_short_range_neighbors()` method, which runs a kernel function over all particles inside the cell of a given particle and its neighboring cells
- `cells.hpp`: added the `mpi_get_short_range_neighbors()` function to execute a parallel search, exposed in Python as `system.cell_system.get_neighbors(p, distance)` (see the usage sketch after this list)
- `energy.hpp`: added the `compute_non_bonded_pair_energy()` function, which returns both the short-range Coulomb contributions and the non-bonded energy contributions of two particles, exposed in Python as `system.analysis.particle_energy(p)`
- the parallel search is selected with `search_algorithm="parallel"` (the default is `search_algorithm="order_n"`); on 1 MPI rank the original order-N algorithm is faster, since the parallel algorithm introduces some overhead due to the ghost update (this overhead is negligible with 2 or more MPI ranks)
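A short usage sketch of the two new Python entry points, assuming a minimal Lennard-Jones setup (interaction parameters and particle positions are illustrative, not from the PR):

```python
import espressomd

system = espressomd.System(box_l=[10.0, 10.0, 10.0])
system.time_step = 0.01
system.cell_system.skin = 0.4
system.non_bonded_inter[0, 0].lennard_jones.set_params(
    epsilon=1.0, sigma=1.0, cutoff=2.5, shift="auto")
p1 = system.part.add(pos=[1.0, 1.0, 1.0])
p2 = system.part.add(pos=[2.0, 1.0, 1.0])

# parallel short-range neighbor search around p1
neighbors = system.cell_system.get_neighbors(p1, distance=2.0)
print(neighbors)  # ids of the particles found within the search radius

# short-range energy contribution of a single particle
print(system.analysis.particle_energy(p1))
```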