Hi, I have found that with PyTorch 1.13 and 2.0 (but not with PyTorch <= 1.12), the torch.jit.script profile-guided optimizations (which are on by default) cause significant errors in the position gradients calculated via backpropagation through aev_computer when using a CUDA device. This is demonstrated in issue openmm/openmm-ml#50.
An example is shown below; manually turning off the JIT optimizations gives accurate forces:
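The example script itself did not survive here, so the following is only a minimal sketch of the kind of reproduction involved. It assumes the private `torch._C._jit_set_profiling_executor` / `torch._C._jit_set_profiling_mode` switches (present in PyTorch 1.13/2.0, but not a stable public API) as the way to turn the profile-guided optimizations off, and uses a trivial scripted function as a stand-in for torchani's actual `aev_computer`:

```python
import torch

# Assumed workaround: disable the profile-guided JIT optimizations
# before any torch.jit.script code runs. These are private switches,
# not a stable public API.
torch._C._jit_set_profiling_executor(False)
torch._C._jit_set_profiling_mode(False)

# Stand-in scripted function; the real case backpropagates through
# torchani's aev_computer on a CUDA device.
@torch.jit.script
def energy(pos: torch.Tensor) -> torch.Tensor:
    return (pos ** 2).sum()

pos = torch.randn(8, 3, requires_grad=True)
energy(pos).backward()
forces = -pos.grad  # forces are minus the position gradients
```

With the optimizations disabled, the gradients match the analytic result (here, the gradient of `sum(pos ** 2)` is `2 * pos`).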
The output I get on an RTX 3090 is:
I have found that a workaround which removes the errors is to replace a `**` operation with `torch.float_power`:
172b6fe
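For illustration, a minimal sketch of that substitution (not the actual change in 172b6fe, just the general pattern). `torch.float_power` always computes in double precision, which sidesteps the single-precision fused kernel that the profiling executor would otherwise generate for `**`:

```python
import torch

x = torch.randn(8, 3, requires_grad=True)

# Instead of `y = (x ** 2).sum()`, which is eligible for kernel
# fusion under the profiling executor, compute the power via
# torch.float_power, which promotes to float64 internally.
y = torch.float_power(x, 2).sum()
y.backward()

# x.grad is the gradient of sum(x ** 2), i.e. 2 * x, cast back
# to the input dtype (float32 here).
```

The cost is the extra float64 arithmetic for that one operation, but autograd casts the gradient back to the input's dtype, so the rest of the pipeline is unaffected.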