Double/Float error deploying pair_allegro #20
2 comments · 16 replies
-
Hi @apoletayev, hm, at least from looking over this now everything looks correct... the versions and precisions match up. My first guess is that one of the versions isn't what you think it is ( You could try to run the unit tests on As far as I know we've never tested
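A quick way to check the "one of the versions/precisions isn't what you think it is" hypothesis is to load the deployed TorchScript model and inspect its parameter dtypes directly. This is a minimal sketch, assuming PyTorch is available; the `tiny_model.pt` file is a stand-in created on the spot, not the actual deployed `model.pth`:

```python
import torch

def parameter_dtypes(path):
    """Return the set of parameter dtypes in a deployed TorchScript model."""
    model = torch.jit.load(path, map_location="cpu")
    return {p.dtype for p in model.parameters()}

# Demonstration with a tiny scripted module standing in for the real model:
torch.jit.save(torch.jit.script(torch.nn.Linear(3, 3)), "tiny_model.pt")
print(parameter_dtypes("tiny_model.pt"))  # {torch.float32}
```

A float32 model here paired with a double-precision LAMMPS build (or the reverse) is exactly the kind of mismatch that produces a Float/Double error at runtime.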
-
I'm really sorry!
The most recently tested LAMMPS builds (LAMMPS and pair_allegro @ stress both downloaded afresh every time) are 23Jun2022 updates 2 and 4, and 28Mar2023 update 1, still with libtorch 1.11.0 and CUDA 11.3. The libtorch version is a difference between training and LAMMPS, but it seems required for kokkos. The typical error remains as in the first post:
It confuses me a bit why it appears to involve
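Since the libtorch version differs between training and the LAMMPS build, one cheap sanity check is to record the training environment's exact torch/CUDA versions and default dtype, then compare them against the libtorch and CUDA toolkit used to build LAMMPS. A sketch, assuming PyTorch:

```python
import torch

# Versions of the *training* environment; compare these with the
# libtorch/CUDA versions LAMMPS was built against.
print("torch:", torch.__version__)
print("CUDA torch was built against:", torch.version.cuda)  # None on CPU-only builds
print("default dtype:", torch.get_default_dtype())
```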
-
Hi folks! I've been trying to use `pair_allegro` with NPT simulations. I am getting a float vs. double error, which I'm guessing has to do with the LAMMPS version, my LAMMPS build, or the model's float/double configuration. Versions:
nequip@develop (0.6.0) commit 8310052ccc022fe4ddf8231933f037c525051a5c
allegro@main commit 71ea80992f70a84fc65ecfb5a5ca45e8a82c6031
pair_allegro@stress (all following this comment) commit 176db81
for both training and building LAMMPS: CUDA 11.3.1, libtorch 1.11.0
LAMMPS 28Mar2023 (latest), patched with pair_allegro and built with libtorch + kokkos; the build was also tested without pair_allegro.
Possibly relevant parts of the pytorch training config:
LAMMPS call:
lmp -sf kk -k on gpus 1 -pk kokkos newton on neigh full -in in.LPS -var structure structure.lmp -var model model.pth
LAMMPS input file:
Error trace from LAMMPS:
Very nearly the same (or exactly the same) error shows up if I use a model trained without stress with nequip 0.5.5 having `default_dtype: float32`, calling it as `allegro3232` from LAMMPS. Building the 29Sep2021_update1 release of LAMMPS patched with pair_allegro@master works.
I would really appreciate any suggestions on what I could tune to test/eliminate this. I am currently testing other releases of LAMMPS and commits of the stress branch; hopefully there is a combination of the two that works.
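For what it's worth, the flavor of the error can be reproduced outside LAMMPS: feeding a double-precision (Double) tensor to a single-precision (Float) TorchScript model raises the same kind of dtype-mismatch `RuntimeError`. A minimal sketch, assuming PyTorch:

```python
import torch

# A scripted float32 (Float) model, like a model deployed with
# default_dtype: float32...
model = torch.jit.script(torch.nn.Linear(3, 3))

# ...fed float64 (Double) input, as a double-precision caller would:
x = torch.ones(1, 3, dtype=torch.float64)
try:
    model(x)
except RuntimeError as e:
    # Raises a dtype-mismatch error analogous to the LAMMPS trace.
    print("RuntimeError raised as expected")
```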