Currently, we don't set any binding options explicitly for the GROMACS test. This may or may not result in 'sensible' binding, so we should check how we can make sure that it binds in a reasonable way. We should also check that it launches the correct number of tasks on hyperthreading nodes (1 per physical core, not 1 per hardware thread).
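As a starting point, here's a minimal sketch of the hyperthreading side as a ReFrame test (the test name, input file, and sanity check are hypothetical; it assumes the partition's processor topology has been autodetected, so `proc.num_cores` is available):

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class GromacsBindingCheck(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'gmx_mpi'
    executable_opts = ['mdrun', '-s', 'benchmark.tpr']  # hypothetical input

    @run_before('run')
    def set_num_tasks(self):
        proc = self.current_partition.processor
        # proc.num_cpus counts hardware threads; proc.num_cores counts
        # physical cores, so this launches 1 task per core, not per thread
        self.num_tasks_per_node = proc.num_cores
        self.num_tasks = self.num_tasks_per_node  # single-node run

    @sanity_function
    def assert_finished(self):
        return sn.assert_found(r'Finished mdrun', 'md.log')
```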
For binding, we'd want processes to be bound to physical cores for pure MPI (i.e. CPU-only runs of GROMACS). For GPU runs (essentially hybrid OpenMP+MPI) we should at least bind each task to a 1/x share of the node's CPU cores (if there are x GPUs in the node). Even better would be to add thread binding on top of that.
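For the binding itself, a hook along these lines could slot into the test class sketched above, assuming a Slurm-based launcher (`--cpu-bind=cores` is srun's flag for binding tasks to their allocated cores) and ReFrame 4's `env_vars`; how the test learns the GPU count is an open question, so the `num_gpus_per_node` attribute here is an assumption:

```python
    @run_before('run')
    def set_binding(self):
        proc = self.current_partition.processor
        # num_gpus_per_node is assumed to be set elsewhere in the test
        if self.num_gpus_per_node:
            # GPU run (hybrid OpenMP+MPI): 1 task per GPU, each task
            # confined to its 1/x share of the node's physical cores
            self.num_tasks_per_node = self.num_gpus_per_node
            self.num_cpus_per_task = proc.num_cores // self.num_gpus_per_node
            self.env_vars = {
                'OMP_NUM_THREADS': str(self.num_cpus_per_task),
                # thread binding on top of the task binding
                'OMP_PLACES': 'cores',
                'OMP_PROC_BIND': 'close',
            }
        else:
            # pure MPI CPU run: 1 task per physical core
            self.num_tasks_per_node = proc.num_cores
            self.num_cpus_per_task = 1
        # bind each task to the cores Slurm allocated to it
        self.job.launcher.options += ['--cpu-bind=cores']
```

Whether the OpenMP binding should come from the environment (as above) or from GROMACS's own `-pin on` option is worth deciding as part of this issue.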