Commit

update data
actions-user committed May 19, 2024
1 parent 37865f7 commit 4aceffa
Showing 59 changed files with 1,482 additions and 114 deletions.
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2024-05-19 10:31:19 UTC</i>
<i class="footer">This page was last updated on 2024-05-19 12:37:27 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -97,7 +97,7 @@ hide:
Jiafei Duan, Samson Yu, Soujanya Poria, B. Wen, Cheston Tan
</td>
<td>2021-09-10</td>
<td>ArXiv, DBLP</td>
<td>DBLP, ArXiv</td>
<td>2</td>
<td>65</td>
</tr>
@@ -121,7 +121,7 @@ hide:
Nicholas Watters, Daniel Zoran, T. Weber, P. Battaglia, Razvan Pascanu, A. Tacchetti
</td>
<td>2017-06-05</td>
<td>MAG, ArXiv, DBLP</td>
<td>DBLP, ArXiv, MAG</td>
<td>250</td>
<td>66</td>
</tr>
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2024-05-19 10:31:21 UTC</i>
<i class="footer">This page was last updated on 2024-05-19 12:37:30 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
98 changes: 97 additions & 1 deletion docs/recommendations/2d3000d245988a02d3c1060211e9d89c67147b49.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2024-05-19 14:32:06 UTC</i>
<i class="footer">This page was last updated on 2024-05-19 12:37:38 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -54,6 +54,102 @@ hide:
<td>77</td>
</tr>

<tr id="Simulations are vital for understanding and predicting the evolution of complex molecular systems. However, despite advances in algorithms and special purpose hardware, accessing the time scales necessary to capture the structural evolution of biomolecules remains a daunting task. In this work, we present a novel framework to advance simulation time scales by up to 3 orders of magnitude by learning the effective dynamics (LED) of molecular systems. LED augments the equation-free methodology by employing a probabilistic mapping between coarse and fine scales using mixture density network (MDN) autoencoders and evolves the non-Markovian latent dynamics using long short-term memory MDNs. We demonstrate the effectiveness of LED in the Müller-Brown potential, the Trp cage protein, and the alanine dipeptide. LED identifies explainable reduced-order representations, i.e., collective variables, and can generate, at any instant, all-atom molecular trajectories consistent with the collective variables. We believe that the proposed framework provides a dramatic increase to simulation capabilities and opens new horizons for the effective modeling of complex molecular systems.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/68c0f4d535078674fd2ce738306eb72171a0677d" target='_blank'>Accelerated Simulations of Molecular Systems through Learning of Effective Dynamics.</a></td>
<td>
Pantelis R. Vlachas, J. Zavadlav, M. Praprotnik, P. Koumoutsakos
</td>
<td>2021-12-10</td>
<td>Journal of chemical theory and computation</td>
<td>26</td>
<td>77</td>
</tr>

<tr id="Synthetic molecular dynamics (synMD) trajectories from learned generative models have been proposed as a useful addition to the biomolecular simulation toolbox. The computational expense of explicitly integrating the equations of motion in molecular dynamics currently is a severe limit on the number and length of trajectories which can be generated for complex systems. Approximate, but more computationally efficient, generative models can be used in place of explicit integration of the equations of motion, and can produce meaningful trajectories at greatly reduced computational cost. Here, we demonstrate a very simple synMD approach using a fine-grained Markov state model (MSM) with states mapped to specific atomistic configurations, which provides an exactly solvable reference. We anticipate this simple approach will enable rapid, effective testing of enhanced sampling algorithms in highly non-trivial models for both equilibrium and non-equilibrium problems. We demonstrate the use of a MSM to generate atomistic synMD trajectories for the fast-folding miniprotein Trp-cage, at a rate of over 200 milliseconds per day on a standard workstation. We employ a non-standard clustering for MSM generation that appears to better preserve kinetic properties at shorter lag times than a conventional MSM. We also show a parallelizable workflow that backmaps discrete synMD trajectories to full-coordinate representations at dynamic resolution for efficient analysis.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/d09f8cd520789fbd378a7bd90dcafc2544b6fcfc" target='_blank'>Simple synthetic molecular dynamics for efficient trajectory generation</a></td>
<td>
John D. Russo, D. Zuckerman
</td>
<td>2022-04-09</td>
<td>ArXiv</td>
<td>1</td>
<td>38</td>
</tr>

<tr id="We propose a deep generative Markov State Model (DeepGenMSM) learning framework for inference of metastable dynamical systems and prediction of trajectories. After unsupervised training on time series data, the model contains (i) a probabilistic encoder that maps from high-dimensional configuration space to a small-sized vector indicating the membership to metastable (long-lived) states, (ii) a Markov chain that governs the transitions between metastable states and facilitates analysis of the long-time dynamics, and (iii) a generative part that samples the conditional distribution of configurations in the next time step. The model can be operated in a recursive fashion to generate trajectories to predict the system evolution from a defined starting state and propose new configurations. The DeepGenMSM is demonstrated to provide accurate estimates of the long-time kinetics and generate valid distributions for molecular dynamics (MD) benchmark systems. Remarkably, we show that DeepGenMSMs are able to make long time-steps in molecular configuration space and generate physically realistic structures in regions that were not seen in training data.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/5c6ee980b791766ba036cba282bcbcbcb2e5309e" target='_blank'>Deep Generative Markov State Models</a></td>
<td>
Hao Wu, Andreas Mardt, Luca Pasquali, F. Noé
</td>
<td>2018-05-19</td>
<td>ArXiv</td>
<td>53</td>
<td>61</td>
</tr>

<tr id="None">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/58912e2c2aaa77d1448d51e9d9460e06a5b924b9" target='_blank'>VAMPnets for deep learning of molecular kinetics</a></td>
<td>
Andreas Mardt, Luca Pasquali, Hao Wu, F. Noé
</td>
<td>2017-10-16</td>
<td>Nature Communications</td>
<td>452</td>
<td>61</td>
</tr>

<tr id="Computing equilibrium states in condensed-matter many-body systems, such as solvated proteins, is a long-standing challenge. Lacking methods for generating statistically independent equilibrium samples in "one shot", vast computational effort is invested for simulating these system in small steps, e.g., using Molecular Dynamics. Combining deep learning and statistical mechanics, we here develop Boltzmann Generators, that are shown to generate unbiased one-shot equilibrium samples of representative condensed matter systems and proteins. Boltzmann Generators use neural networks to learn a coordinate transformation of the complex configurational equilibrium distribution to a distribution that can be easily sampled. Accurate computation of free energy differences and discovery of new configurations are demonstrated, providing a statistical mechanics tool that can avoid rare events during sampling without prior knowledge of reaction coordinates.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/9896835f050121358372254a9bce57ca117672c8" target='_blank'>Boltzmann Generators - Sampling Equilibrium States of Many-Body Systems with Deep Learning</a></td>
<td>
F. Noé, Hao Wu
</td>
<td>2018-12-04</td>
<td>ArXiv</td>
<td>20</td>
<td>61</td>
</tr>

<tr id="None">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/13b52cd927d3daa5c95528b987734d97474e790c" target='_blank'>Markov field models: Scaling molecular kinetics approaches to large molecular machines.</a></td>
<td>
Tim Hempel, Simon Olsson, Frank Noé
</td>
<td>2022-06-23</td>
<td>Current opinion in structural biology</td>
<td>6</td>
<td>9</td>
</tr>

<tr id="All-atom and coarse-grained molecular dynamics are two widely used computational tools to study the conformational states of proteins. Yet, these two simulation methods suffer from the fact that without access to supercomputing resources, the time and length scales at which these states become detectable are difficult to achieve. One alternative to such methods is based on encoding the atomistic trajectory of molecular dynamics as a shorthand version devoid of physical particles, and then learning to propagate the encoded trajectory through the use of artificial intelligence. Here we show that a simple textual representation of the frames of molecular dynamics trajectories as vectors of Ramachandran basin classes retains most of the structural information of the full atomistic representation of a protein in each frame, and can be used to generate equivalent atom-less trajectories suitable to train different types of generative neural networks. In turn, the trained generative models can be used to extend indefinitely the atom-less dynamics or to sample the conformational space of proteins from their representation in the models latent space. We define intuitively this methodology as molecular dynamics without molecules, and show that it enables to cover physically relevant states of proteins that are difficult to access with traditional molecular dynamics.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/31d62c95e128caa934ca6a6c5cd5c766c6a9d541" target='_blank'>Molecular dynamics without molecules: searching the conformational space of proteins with generative neural networks</a></td>
<td>
Gregory Schwing, L. Palese, Ariel Fernández, L. Schwiebert, D. Gatti
</td>
<td>2022-06-09</td>
<td>ArXiv</td>
<td>1</td>
<td>33</td>
</tr>

<tr id="Markov state models (MSMs) are valuable for studying dynamics of protein conformational changes via statistical analysis of molecular dynamics (MD) simulations. In MSMs, the complex configuration space is coarse-grained into conformational states, with the dynamics modeled by a series of Markovian transitions among these states at discrete lag times. Constructing the Markovian model at a specific lag time requires state defined without significant internal energy barriers, enabling internal dynamics relaxation within the lag time. This process coarse grains time and space, integrating out rapid motions within metastable states. This work introduces a continuous embedding approach for molecular conformations using the state predictive information bottleneck (SPIB), which unifies dimensionality reduction and state space partitioning via a continuous, machine learned basis set. Without explicit optimization of VAMP-based scores, SPIB demonstrates state-of-the-art performance in identifying slow dynamical processes and constructing predictive multi-resolution Markovian models. When applied to mini-proteins trajectories, SPIB showcases unique advantages compared to competing methods. It automatically adjusts the number of metastable states based on a specified minimal time resolution, eliminating the need for manual tuning. While maintaining efficacy in dynamical properties, SPIB excels in accurately distinguishing metastable states and capturing numerous well-populated macrostates. Furthermore, SPIB's ability to learn a low-dimensional continuous embedding of the underlying MSMs enhances the interpretation of dynamic pathways. Accordingly, we propose SPIB as an easy-to-implement methodology for end-to-end MSM construction.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/cb4d8f8b2b1169d210d289c6b2deac0ccbbc34fe" target='_blank'>An Information Bottleneck Approach for Markov Model Construction</a></td>
<td>
Dedi Wang, Yunrui Qiu, E. Beyerle, Xuhui Huang, P. Tiwary
</td>
<td>2024-04-03</td>
<td>ArXiv</td>
<td>0</td>
<td>30</td>
</tr>

</tbody>
<tfoot>
<tr>