LAMMPS compilation issue with pair_allegro #22
Hi all! I have been running into some issues while compiling LAMMPS with pair_allegro. I am not familiar with CMake or C++, so I am fairly clueless as to what might be causing the problem. Details: I am building on a remote server with a GPU, using the latest version of pair_allegro and the stable release of LAMMPS (23 Jun 2022), with OpenMPI 4.0.4 and CUDA 11.0.2. Following the installation instructions, I also installed my own copy of libtorch 1.10.1 (cxx11-abi build) and cuDNN 8.0.1.13, and I am compiling LAMMPS with libtorch + Kokkos. Here is my script for patching and compiling LAMMPS:
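(The actual script was not preserved in this thread. For readers landing here, below is a minimal sketch of a typical pair_allegro patch-and-build sequence with Kokkos/CUDA and libtorch; all paths, the make parallelism, and the Kokkos architecture flag are assumptions you must adapt to your own machine.)

```shell
#!/bin/bash
# Hypothetical paths -- adjust to your own setup.
LAMMPS_DIR=$HOME/lammps-23Jun2022
PAIR_ALLEGRO_DIR=$HOME/pair_allegro
LIBTORCH_DIR=$HOME/libtorch        # cxx11-abi build of libtorch

# Copy the pair style sources into the LAMMPS tree
# (pair_allegro ships a patch_lammps.sh helper for this).
cd "$PAIR_ALLEGRO_DIR"
./patch_lammps.sh "$LAMMPS_DIR"

# Configure with CMake: Kokkos/CUDA backend plus libtorch.
# Kokkos_ARCH_VOLTA70 assumes a V100 GPU -- set the flag for your card.
mkdir -p "$LAMMPS_DIR/build" && cd "$LAMMPS_DIR/build"
cmake ../cmake \
  -DCMAKE_PREFIX_PATH="$LIBTORCH_DIR" \
  -DPKG_KOKKOS=ON \
  -DKokkos_ENABLE_CUDA=ON \
  -DKokkos_ARCH_VOLTA70=ON \
  -DCMAKE_CXX_COMPILER="$LAMMPS_DIR/lib/kokkos/bin/nvcc_wrapper"

make -j 8
```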
The patching and configuration steps complete without problems, but the `make` step fails with the following error message:
It seems that pair_allegro.cpp fails to find some MPI symbols that should be available in my OpenMPI installation, which ultimately breaks the compilation. I have tried loading different OpenMPI versions, but that did not resolve the issue. Can someone let me know what I am doing wrong here? Thanks a ton! Here is the build configuration in case it is needed:
Hi all,
Right now I can compile my LAMMPS executable without problems, but once I run my lmp executable I get the following error message:
Does anyone have any idea why this would occur? Thanks a lot.
Hi Alby,
I have tried various versions of libtorch. Since I was using the system CUDA 11.0.2, I tried libtorch builds for cu102, cu110, cu113, etc. I think all of the cu110-and-above builds failed during LAMMPS compilation.
However, I was able to look into the above error message using the libtorch 1.12.1 (cu102) build of the executable. It actually works after I reduce the number of parameters in my potential. I'm not sure whether reducing the batch size used when training the Allegro potential would decrease its memory requirements when running LAMMPS, but reducing the number of parameters certainly does.
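For intuition on that last point, here is a back-of-the-envelope sketch (illustrative parameter counts only, not measured from any real Allegro model) of why the weight count, not the training batch size, sets the baseline memory for the deployed model: batch size only affects how many configurations are processed at once during training, while every weight must live on the GPU at inference time.

```python
# Rough estimate of GPU memory needed just to hold model weights.
# Parameter counts below are made up for illustration.
def weight_memory_mb(n_params: int, bytes_per_param: int = 4) -> float:
    """fp32 weights cost 4 bytes per parameter; returns size in MiB."""
    return n_params * bytes_per_param / 1024**2

# Halving the parameter count halves the weight memory;
# the training batch size does not appear in this estimate at all.
large = weight_memory_mb(10_000_000)   # ~38.1 MiB
small = weight_memory_mb(5_000_000)    # ~19.1 MiB
```

This ignores activation memory (which scales with the number of atoms and neighbors per MD step), so it is a lower bound, but it shows why shrinking the network itself is the lever that reliably reduces memory in LAMMPS.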
Best,
Yifan