Commit 662c754

update data
actions-user committed Jan 6, 2025
1 parent c0a3bb9 commit 662c754
Showing 64 changed files with 691 additions and 664 deletions.
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-12-30 06:05:43 UTC</i>
+<i class="footer">This page was last updated on 2025-01-06 06:05:58 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -110,7 +110,7 @@ hide:
</td>
<td>2022-06-01</td>
<td>Nonlinear Dynamics</td>
-<td>31</td>
+<td>32</td>
<td>93</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-12-30 06:05:46 UTC</i>
+<i class="footer">This page was last updated on 2025-01-06 06:06:10 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -74,7 +74,7 @@ hide:
</td>
<td>2015-09-11</td>
<td>Proceedings of the National Academy of Sciences</td>
-<td>3476</td>
+<td>3490</td>
<td>68</td>
</tr>

@@ -98,7 +98,7 @@ hide:
</td>
<td>2020-05-05</td>
<td>Nature Communications</td>
-<td>301</td>
+<td>305</td>
<td>13</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-12-30 06:05:19 UTC</i>
+<i class="footer">This page was last updated on 2025-01-06 06:05:39 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
10 changes: 5 additions & 5 deletions docs/recommendations/123acfbccca0460171b6b06a4012dbb991cde55b.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-12-30 06:05:20 UTC</i>
+<i class="footer">This page was last updated on 2025-01-06 06:05:40 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,7 +50,7 @@ hide:
</td>
<td>2024-03-12</td>
<td>ArXiv</td>
-<td>82</td>
+<td>85</td>
<td>18</td>
</tr>

@@ -62,7 +62,7 @@ hide:
</td>
<td>2024-02-04</td>
<td>ArXiv</td>
-<td>6</td>
+<td>7</td>
<td>67</td>
</tr>

@@ -109,7 +109,7 @@ hide:
Yong Liu, Haoran Zhang, Chenyu Li, Xiangdong Huang, Jianmin Wang, Mingsheng Long
</td>
<td>2024-02-04</td>
-<td>ArXiv, DBLP</td>
+<td>DBLP, ArXiv</td>
<td>26</td>
<td>67</td>
</tr>
@@ -122,7 +122,7 @@ hide:
</td>
<td>2023-10-05</td>
<td>ArXiv</td>
-<td>65</td>
+<td>66</td>
<td>5</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-12-30 06:05:24 UTC</i>
+<i class="footer">This page was last updated on 2025-01-06 06:05:42 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
Expand Down Expand Up @@ -86,7 +86,7 @@ hide:
</td>
<td>2022-09-20</td>
<td>IEEE Transactions on Knowledge and Data Engineering</td>
-<td>90</td>
+<td>91</td>
<td>18</td>
</tr>

@@ -110,7 +110,7 @@ hide:
</td>
<td>2024-05-03</td>
<td>ArXiv</td>
-<td>13</td>
+<td>14</td>
<td>47</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-12-30 06:05:21 UTC</i>
+<i class="footer">This page was last updated on 2025-01-06 06:05:41 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
Expand Down Expand Up @@ -62,7 +62,7 @@ hide:
</td>
<td>2024-02-13</td>
<td>ArXiv, DBLP</td>
-<td>43</td>
+<td>42</td>
<td>9</td>
</tr>

@@ -86,7 +86,7 @@ hide:
</td>
<td>2024-06-09</td>
<td>ArXiv</td>
-<td>4</td>
+<td>5</td>
<td>4</td>
</tr>

44 changes: 24 additions & 20 deletions docs/recommendations/279cd637b7e38bba1dd8915b5ce68cbcacecbe68.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-12-30 06:05:28 UTC</i>
+<i class="footer">This page was last updated on 2025-01-06 06:05:45 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -49,7 +49,7 @@ hide:
Andreas Doerr, Christian Daniel, Martin Schiegg, D. Nguyen-Tuong, S. Schaal, Marc Toussaint, Sebastian Trimpe
</td>
<td>2018-01-31</td>
-<td>ArXiv, DBLP, MAG</td>
+<td>DBLP, ArXiv, MAG</td>
<td>115</td>
<td>93</td>
</tr>
@@ -102,18 +102,6 @@
<td>50</td>
</tr>

-<tr id="Forecasting the behaviour of complex dynamical systems such as interconnected sensor networks characterized by high-dimensional multivariate time series(MTS) is of paramount importance for making informed decisions and planning for the future in a broad spectrum of applications. Graph forecasting networks(GFNs) are well-suited for forecasting MTS data that exhibit spatio-temporal dependencies. However, most prior works of GFN-based methods on MTS forecasting rely on domain-expertise to model the nonlinear dynamics of the system, but neglect the potential to leverage the inherent relational-structural dependencies among time series variables underlying MTS data. On the other hand, contemporary works attempt to infer the relational structure of the complex dependencies between the variables and simultaneously learn the nonlinear dynamics of the interconnected system but neglect the possibility of incorporating domain-specific prior knowledge to improve forecast accuracy. To this end, we propose a hybrid architecture that combines explicit prior knowledge with implicit knowledge of the relational structure within the MTS data. It jointly learns intra-series temporal dependencies and inter-series spatial dependencies by encoding time-conditioned structural spatio-temporal inductive biases to provide more accurate and reliable forecasts. It also models the time-varying uncertainty of the multi-horizon forecasts to support decision-making by providing estimates of prediction uncertainty. The proposed architecture has shown promising results on multiple benchmark datasets and outperforms state-of-the-art forecasting methods by a significant margin. We report and discuss the ablation studies to validate our forecasting architecture.">
-<td id="tag"><i class="material-icons">visibility_off</i></td>
-<td><a href="https://www.semanticscholar.org/paper/05cd3afc8208f1c0dd61b09a90f35dd42497e175" target='_blank'>Multi-Knowledge Fusion Network for Time Series Representation Learning</a></td>
-<td>
-Sakhinana Sagar Srinivas, Shivam Gupta, Krishna Sai Sudhir Aripirala, Venkataramana Runkana
-</td>
-<td>2024-08-22</td>
-<td>ArXiv</td>
-<td>0</td>
-<td>3</td>
-</tr>

<tr id="Real-world dynamical systems often consist of multiple stochastic subsystems that interact with each other. Modeling and forecasting the behavior of such dynamics are generally not easy, due to the inherent hardness in understanding the complicated interactions and evolutions of their constituents. This paper introduces the relational state-space model (R-SSM), a sequential hierarchical latent variable model that makes use of graph neural networks (GNNs) to simulate the joint state transitions of multiple correlated objects. By letting GNNs cooperate with SSM, R-SSM provides a flexible way to incorporate relational information into the modeling of multi-object dynamics. We further suggest augmenting the model with normalizing flows instantiated for vertex-indexed random variables and propose two auxiliary contrastive objectives to facilitate the learning. The utility of R-SSM is empirically evaluated on synthetic and real time series datasets.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/7a1e5377b08489c2969f73c56efc557e34f578e1" target='_blank'>Relational State-Space Model for Stochastic Multi-Object Systems</a></td>
@@ -126,16 +114,32 @@
<td>61</td>
</tr>

-<tr id="Over the past few years, research on deep graph learning has shifted from static graphs to temporal graphs in response to real-world complex systems that exhibit dynamic behaviors. In practice, temporal graphs are formalized as an ordered sequence of static graph snapshots observed at discrete time points. Sequence models such as RNNs or Transformers have long been the predominant backbone networks for modeling such temporal graphs. Yet, despite the promising results, RNNs struggle with long-range dependencies, while transformers are burdened by quadratic computational complexity. Recently, state space models (SSMs), which are framed as discretized representations of an underlying continuous-time linear dynamical system, have garnered substantial attention and achieved breakthrough advancements in independent sequence modeling. In this work, we undertake a principled investigation that extends SSM theory to temporal graphs by integrating structural information into the online approximation objective via the adoption of a Laplacian regularization term. The emergent continuous-time system introduces novel algorithmic challenges, thereby necessitating our development of GraphSSM, a graph state space model for modeling the dynamics of temporal graphs. Extensive experimental results demonstrate the effectiveness of our GraphSSM framework across various temporal graph benchmarks.">
+<tr id="Time series modeling is a well-established problem, which often requires that methods (1) expressively represent complicated dependencies, (2) forecast long horizons, and (3) efficiently train over long sequences. State-space models (SSMs) are classical models for time series, and prior works combine SSMs with deep learning layers for efficient sequence modeling. However, we find fundamental limitations with these prior approaches, proving their SSM representations cannot express autoregressive time series processes. We thus introduce SpaceTime, a new state-space time series architecture that improves all three criteria. For expressivity, we propose a new SSM parameterization based on the companion matrix -- a canonical representation for discrete-time processes -- which enables SpaceTime's SSM layers to learn desirable autoregressive processes. For long horizon forecasting, we introduce a "closed-loop" variation of the companion SSM, which enables SpaceTime to predict many future time-steps by generating its own layer-wise inputs. For efficient training and inference, we introduce an algorithm that reduces the memory and compute of a forward pass with the companion matrix. With sequence length $\ell$ and state-space size $d$, we go from $\tilde{O}(d \ell)$ naïvely to $\tilde{O}(d + \ell)$. In experiments, our contributions lead to state-of-the-art results on extensive and diverse benchmarks, with best or second-best AUROC on 6 / 7 ECG and speech time series classification, and best MSE on 14 / 16 Informer forecasting tasks. Furthermore, we find SpaceTime (1) fits AR($p$) processes that prior deep SSMs fail on, (2) forecasts notably more accurately on longer horizons than prior state-of-the-art, and (3) speeds up training on real-world ETTh1 data by 73% and 80% relative wall-clock time over Transformers and LSTMs.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
-<td><a href="https://www.semanticscholar.org/paper/919e5db29c7b7be4468b975eb4c0fa4a543165fc" target='_blank'>State Space Models on Temporal Graphs: A First-Principles Study</a></td>
+<td><a href="https://www.semanticscholar.org/paper/a7d68b1702af08ce4dbbf2cd0b083e744ae5c6be" target='_blank'>Effectively Modeling Time Series with Simple Discrete State Spaces</a></td>
<td>
-Jintang Li, Ruofan Wu, Xinzhou Jin, Boqun Ma, Liang Chen, Zibin Zheng
+Michael Zhang, Khaled Kamal Saab, Michael Poli, Tri Dao, Karan Goel, Christopher Ré
</td>
-<td>2024-06-03</td>
+<td>2023-03-16</td>
<td>ArXiv</td>
-<td>2</td>
-<td>12</td>
+<td>37</td>
+<td>46</td>
</tr>

+<tr id="
+
+Gaussian state space models have been used for decades as generative models of sequential data. They admit an intuitive probabilistic interpretation, have a simple functional form, and enjoy widespread adoption. We introduce a unified algorithm to efficiently learn a broad class of linear and non-linear state space models, including variants where the emission and transition distributions are modeled by deep neural networks. Our learning algorithm simultaneously learns a compiled inference network and the generative model, leveraging a structured variational approximation parameterized by recurrent neural networks to mimic the posterior distribution. We apply the learning algorithm to both synthetic and real-world datasets, demonstrating its scalability and versatility. We find that using the structured approximation to the posterior results in models with significantly higher held-out likelihood.
+
+">
+<td id="tag"><i class="material-icons">visibility_off</i></td>
+<td><a href="https://www.semanticscholar.org/paper/2af17f153e3fd71e15db9216b972aef222f46617" target='_blank'>Structured Inference Networks for Nonlinear State Space Models</a></td>
+<td>
+R. G. Krishnan, Uri Shalit, D. Sontag
+</td>
+<td>2016-09-30</td>
+<td>DBLP, ArXiv, MAG</td>
+<td>440</td>
+<td>48</td>
+</tr>

</tbody>
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-12-30 06:05:44 UTC</i>
+<i class="footer">This page was last updated on 2025-01-06 06:05:59 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -122,7 +122,7 @@ hide:
</td>
<td>2020-07-02</td>
<td>ArXiv</td>
-<td>140</td>
+<td>142</td>
<td>23</td>
</tr>

@@ -134,7 +134,7 @@ hide:
</td>
<td>2020-03-17</td>
<td>Autom.</td>
-<td>18</td>
+<td>19</td>
<td>5</td>
</tr>

@@ -159,7 +159,7 @@ hide:
<td>2022-06-20</td>
<td>Philosophical transactions. Series A, Mathematical, physical, and engineering sciences</td>
<td>57</td>
-<td>33</td>
+<td>34</td>
</tr>

</tbody>
18 changes: 9 additions & 9 deletions docs/recommendations/35e2571c17246577e0bc1b9de57a314c3b60e220.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-12-30 06:05:46 UTC</i>
+<i class="footer">This page was last updated on 2025-01-06 06:06:00 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -109,7 +109,7 @@ hide:
Philipp Holl, V. Koltun, Nils Thuerey
</td>
<td>2021-09-30</td>
-<td>ArXiv, DBLP</td>
+<td>DBLP, ArXiv</td>
<td>6</td>
<td>106</td>
</tr>
@@ -150,16 +150,16 @@ hide:
<td>17</td>
</tr>

-<tr id="None">
+<tr id="Big data is transforming scientific progress by enabling the discovery of novel models, enhancing existing frameworks, and facilitating precise uncertainty quantification, while advancements in scientific machine learning complement this by providing powerful tools to solve inverse problems to identify the complex systems where traditional methods falter due to sparse or noisy data. We introduce two innovative neural operator frameworks tailored for discovering hidden physics and identifying unknown system parameters from sparse measurements. The first framework integrates a popular neural operator, DeepONet, and a physics-informed neural network to capture the relationship between sparse data and the underlying physics, enabling the accurate discovery of a family of governing equations. The second framework focuses on system parameter identification, leveraging a DeepONet pre-trained on sparse sensor measurements to initialize a physics-constrained inverse model. Both frameworks excel in handling limited data and preserving physical consistency. Benchmarking on the Burgers' equation and reaction-diffusion system demonstrates state-of-the-art performance, achieving average $L_2$ errors of $\mathcal{O}(10^{-2})$ for hidden physics discovery and absolute errors of $\mathcal{O}(10^{-3})$ for parameter identification. These results underscore the frameworks' robustness, efficiency, and potential for solving complex scientific problems with minimal observational data.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
-<td><a href="https://www.semanticscholar.org/paper/55be4b71d7c8366b0f095e769f4586ce664c7415" target='_blank'>Discovering conservation laws using optimal transport and manifold learning</a></td>
+<td><a href="https://www.semanticscholar.org/paper/fac52e25998ecb2657489d5358e6d0ca9064f2f7" target='_blank'>Learning Hidden Physics and System Parameters with Deep Operator Networks</a></td>
<td>
-Peter Y. Lu, Rumen Dangovski, M. Soljačić
+Vijay Kag, Dibakar Roy Sarkar, Birupaksha Pal, Somdatta Goswami
</td>
-<td>2022-08-31</td>
-<td>Nature Communications</td>
-<td>12</td>
-<td>13</td>
+<td>2024-12-06</td>
+<td>ArXiv</td>
+<td>0</td>
+<td>0</td>
</tr>

</tbody>
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-12-30 06:05:39 UTC</i>
+<i class="footer">This page was last updated on 2025-01-06 06:05:57 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,7 +50,7 @@ hide:
</td>
<td>2015-09-11</td>
<td>Proceedings of the National Academy of Sciences</td>
-<td>3476</td>
+<td>3490</td>
<td>68</td>
</tr>

@@ -62,7 +62,7 @@ hide:
</td>
<td>2020-05-05</td>
<td>Nature Communications</td>
-<td>301</td>
+<td>305</td>
<td>13</td>
</tr>

