Compile C++ interface on multiple GPU cluster #2409
The conda-installed LAMMPS could not use multiple GPUs properly on our small local cluster or on the large HPC system, which I assume is because conda ships its own MPI instead of using the HPC's MPI. Therefore, I am trying to compile the whole C++ interface from scratch on our local cluster and plug in the proper MPI during the LAMMPS compilation. I followed the compile instructions here: https://docs.deepmodeling.com/projects/deepmd/en/latest/install/install-from-source.html

System OS: Ubuntu 18.04.6 LTS x86_64
TensorFlow C++ (finished correctly)
DeePMD-kit C++ (has an error)
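For reference, here is a minimal configure sketch for building the DeePMD-kit C++ interface against the cluster's own MPI rather than conda's. The paths are placeholders and the option names follow the DeePMD-kit source-install docs for the v2.x line, so check them against your version:

```shell
# Sketch only -- the paths below are assumptions, adjust to your installation.
TENSORFLOW_ROOT=$HOME/tensorflow_cpp   # where the TensorFlow C++ library was installed
DEEPMD_ROOT=$HOME/deepmd_cpp           # install prefix for the DeePMD-kit C++ interface

# Use the cluster's MPI compiler wrappers instead of conda's MPI.
MPICC=$(command -v mpicc || echo mpicc)
MPICXX=$(command -v mpicxx || echo mpicxx)

CONFIGURE_CMD="cmake \
  -DCMAKE_INSTALL_PREFIX=$DEEPMD_ROOT \
  -DTENSORFLOW_ROOT=$TENSORFLOW_ROOT \
  -DCMAKE_C_COMPILER=$MPICC \
  -DCMAKE_CXX_COMPILER=$MPICXX \
  -DUSE_CUDA_TOOLKIT=TRUE \
  .."
# Print the configure line for inspection; run it from deepmd-kit/source/build.
echo "$CONFIGURE_CMD"
```

Pointing `CMAKE_C_COMPILER`/`CMAKE_CXX_COMPILER` at the system `mpicc`/`mpicxx` is what keeps conda's bundled MPI out of the build.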
The error message:
Update: the DeePMD-kit C++ interface now compiles; the error message above was due to the cmake version. I started compiling LAMMPS with CUDA-awareness, using the NVIDIA HPC SDK 23.1: https://docs.nvidia.com/hpc-sdk/archive/23.1/ The compilation proceeded successfully until pppm_dplr.cpp, which reported that it couldn't find grid3d.h, with the following error message:
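As a sketch of the LAMMPS side, the configure step might look like the following. The HPC SDK install path is an assumption (adjust to where your site installed it), and the package flags are the standard LAMMPS CMake options; `pppm_dplr` is a PPPM variant, so the KSPACE package must be enabled:

```shell
# Assumed install location of the NVIDIA HPC SDK 23.1 -- adjust to your site.
NVHPC_ROOT=/opt/nvidia/hpc_sdk/Linux_x86_64/23.1
# The SDK bundles a CUDA-aware OpenMPI under comm_libs.
export PATH=$NVHPC_ROOT/comm_libs/mpi/bin:$PATH

# Configure a recent LAMMPS release with MPI, KSPACE (needed by pppm_dplr),
# and GPU support; run this from a build directory next to lammps/cmake.
LAMMPS_CONFIGURE="cmake \
  -D BUILD_MPI=yes \
  -D PKG_KSPACE=on \
  -D PKG_GPU=on \
  ../cmake"
# Print the configure line for inspection rather than running it here.
echo "$LAMMPS_CONFIGURE"
```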
This is due to a LAMMPS version issue; it was solved by using the recent release. Also set … All the cpp files compile, but the build could not generate the …
You did not observe any acceleration when using 2 GPUs because the system you tested (192 atoms) is too small to saturate even 1 GPU. Please increase your system to more than 1000 atoms (more than 500 atoms per GPU as a rule of thumb) to check whether you can achieve acceleration.
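The sizing rule above can be turned into a quick shell check (the 500-atoms-per-GPU threshold is the rule of thumb from this reply, not a hard limit):

```shell
# Rule of thumb from this thread: each GPU should get at least ~500 atoms.
n_atoms=192    # the system tested above
n_gpus=2
per_gpu=500

min_atoms=$((per_gpu * n_gpus))
if [ "$n_atoms" -ge "$min_atoms" ]; then
  echo "OK: $n_atoms atoms may benefit from $n_gpus GPUs"
else
  echo "Too small: need at least $min_atoms atoms to justify $n_gpus GPUs"
fi
```

For 192 atoms on 2 GPUs this prints the "Too small" branch, matching the observation above.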