
fix: update recommended articles
gurdeep330 committed Dec 25, 2024
1 parent 756b097 commit c3c8232
Showing 30 changed files with 374 additions and 386 deletions.
10 changes: 5 additions & 5 deletions docs/recommendations/06a0ba437d41a7c82c08a9636a4438c1b5031378.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2024-11-11 06:05:45 UTC</i>
<i class="footer">This page was last updated on 2024-12-25 11:30:07 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -110,8 +110,8 @@ hide:
</td>
<td>2022-06-01</td>
<td>Nonlinear Dynamics</td>
- <td>30</td>
- <td>92</td>
+ <td>31</td>
+ <td>93</td>
</tr>

<tr id="Systems biology is a new discipline built upon the premise that an understanding of how cells and organisms carry out their functions cannot be gained by looking at cellular components in isolation. Instead, consideration of the interplay between the parts of systems is indispensable for analyzing, modeling, and predicting systems' behavior. Studying biological processes under this premise, systems biology combines experimental techniques and computational methods in order to construct predictive models. Both in building and utilizing models of biological systems, inverse problems arise at several occasions, for example, (i) when experimental time series and steady state data are used to construct biochemical reaction networks, (ii) when model parameters are identified that capture underlying mechanisms or (iii) when desired qualitative behavior such as bistability or limit cycle oscillations is engineered by proper choices of parameter combinations. In this paper we review principles of the modeling process in systems biology and illustrate the ill-posedness and regularization of parameter identification problems in that context. Furthermore, we discuss the methodology of qualitative inverse problems and demonstrate how sparsity enforcing regularization allows the determination of key reaction mechanisms underlying the qualitative behavior.">
@@ -140,13 +140,13 @@ hide:

<tr id="None">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/1626b462b65d4084a24fb31c4e4ca3fc212a307a" target='_blank'>Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks</a></td>
<td><a href="https://www.semanticscholar.org/paper/8474c9ab1318680cf674c97fe5af668db0c267cc" target='_blank'>Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks</a></td>
<td>
Philipp Rumschinski, S. Borchers, S. Bosio, R. Weismantel, R. Findeisen
</td>
<td>2010-05-25</td>
<td>BMC Systems Biology</td>
- <td>67</td>
+ <td>68</td>
<td>48</td>
</tr>

12 changes: 6 additions & 6 deletions docs/recommendations/0acd117521ef5aafb09fed02ab415523b330b058.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2024-11-11 06:05:51 UTC</i>
<i class="footer">This page was last updated on 2024-12-25 11:30:16 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -63,7 +63,7 @@ hide:
<td>2019-09-15</td>
<td>ArXiv</td>
<td>9</td>
- <td>107</td>
+ <td>108</td>
</tr>

<tr id="Significance Understanding dynamic constraints and balances in nature has facilitated rapid development of knowledge and enabled technology, including aircraft, combustion engines, satellites, and electrical power. This work develops a novel framework to discover governing equations underlying a dynamical system simply from data measurements, leveraging advances in sparsity techniques and machine learning. The resulting models are parsimonious, balancing model complexity with descriptive ability while avoiding overfitting. There are many critical data-driven problems, such as understanding cognition from neural recordings, inferring climate patterns, determining stability of financial markets, predicting and suppressing the spread of disease, and controlling turbulence for greener transportation and energy. With abundant data and elusive laws, data-driven discovery of dynamics will continue to play an important role in these efforts. Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.">
@@ -74,8 +74,8 @@ hide:
</td>
<td>2015-09-11</td>
<td>Proceedings of the National Academy of Sciences</td>
- <td>3408</td>
- <td>67</td>
+ <td>3467</td>
+ <td>68</td>
</tr>

<tr id="None">
@@ -98,7 +98,7 @@ hide:
</td>
<td>2020-05-05</td>
<td>Nature Communications</td>
- <td>289</td>
+ <td>300</td>
<td>13</td>
</tr>

@@ -135,7 +135,7 @@ hide:
<td>2017-12-01</td>
<td>2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)</td>
<td>12</td>
- <td>67</td>
+ <td>68</td>
</tr>

</tbody>
16 changes: 8 additions & 8 deletions docs/recommendations/0d01d21137a5af9f04e4b16a55a0f732cb8a540b.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2024-11-11 06:05:10 UTC</i>
<i class="footer">This page was last updated on 2024-12-25 11:29:31 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -63,7 +63,7 @@ hide:
<td>2023-06-29</td>
<td>ArXiv</td>
<td>0</td>
- <td>7</td>
+ <td>8</td>
</tr>

<tr id="Time series forecasting lies at the core of important real-world applications in many fields of science and engineering. The abundance of large time series datasets that consist of complex patterns and long-term dependencies has led to the development of various neural network architectures. Graph neural network approaches, which jointly learn a graph structure based on the correlation of raw values of multivariate time series while forecasting, have recently seen great success. However, such solutions are often costly to train and difficult to scale. In this paper, we propose TimeGNN, a method that learns dynamic temporal graph representations that can capture the evolution of inter-series patterns along with the correlations of multiple series. TimeGNN achieves inference times 4 to 80 times faster than other state-of-the-art graph-based methods while achieving comparable forecasting performance">
@@ -74,7 +74,7 @@ hide:
</td>
<td>2023-07-27</td>
<td>ArXiv</td>
- <td>1</td>
+ <td>2</td>
<td>55</td>
</tr>

@@ -86,7 +86,7 @@ hide:
</td>
<td>2022-01-24</td>
<td>ArXiv</td>
- <td>0</td>
+ <td>1</td>
<td>11</td>
</tr>

@@ -110,8 +110,8 @@ hide:
</td>
<td>2022-07-01</td>
<td>ArXiv, DBLP</td>
- <td>44</td>
- <td>35</td>
+ <td>45</td>
+ <td>36</td>
</tr>

<tr id="Time series forecasting is an extensively studied subject in statistics, economics, and computer science. Exploration of the correlation and causation among the variables in a multivariate time series shows promise in enhancing the performance of a time series model. When using deep neural networks as forecasting models, we hypothesize that exploiting the pairwise information among multiple (multivariate) time series also improves their forecast. If an explicit graph structure is known, graph neural networks (GNNs) have been demonstrated as powerful tools to exploit the structure. In this work, we propose learning the structure simultaneously with the GNN if the graph is unknown. We cast the problem as learning a probabilistic graph model through optimizing the mean performance over the graph distribution. The distribution is parameterized by a neural network so that discrete graphs can be sampled differentiably through reparameterization. Empirical evaluations show that our method is simpler, more efficient, and better performing than a recently proposed bilevel learning approach for graph structure learning, as well as a broad array of forecasting models, either deep or non-deep learning based, and graph or non-graph based.">
@@ -122,7 +122,7 @@ hide:
</td>
<td>2021-01-18</td>
<td>ArXiv</td>
- <td>199</td>
+ <td>206</td>
<td>38</td>
</tr>

@@ -135,7 +135,7 @@ hide:
<td>2024-07-02</td>
<td>ArXiv</td>
<td>0</td>
- <td>9</td>
+ <td>10</td>
</tr>

</tbody>
40 changes: 20 additions & 20 deletions docs/recommendations/123acfbccca0460171b6b06a4012dbb991cde55b.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2024-11-11 06:05:12 UTC</i>
<i class="footer">This page was last updated on 2024-12-25 11:29:33 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,7 +50,7 @@ hide:
</td>
<td>2024-03-12</td>
<td>ArXiv</td>
- <td>69</td>
+ <td>82</td>
<td>18</td>
</tr>

@@ -62,7 +62,7 @@ hide:
</td>
<td>2024-02-04</td>
<td>ArXiv</td>
- <td>5</td>
+ <td>6</td>
<td>67</td>
</tr>

@@ -86,10 +86,22 @@ hide:
</td>
<td>2024-06-22</td>
<td>ArXiv</td>
- <td>13</td>
+ <td>18</td>
<td>3</td>
</tr>

<tr id="Encoding time series into tokens and using language models for processing has been shown to substantially augment the models' ability to generalize to unseen tasks. However, existing language models for time series forecasting encounter several obstacles, including aliasing distortion and prolonged inference times, primarily due to the limitations of quantization processes and the computational demands of large models. This paper introduces Apollo-Forecast, a novel framework that tackles these challenges with two key innovations: the Anti-Aliasing Quantization Module (AAQM) and the Race Decoding (RD) technique. AAQM adeptly encodes sequences into tokens while mitigating high-frequency noise in the original signals, thus enhancing both signal fidelity and overall quantization efficiency. RD employs a draft model to enable parallel processing and results integration, which markedly accelerates the inference speed for long-term predictions, particularly in large-scale models. Extensive experiments on various real-world datasets show that Apollo-Forecast outperforms state-of-the-art methods by 35.41\% and 18.99\% in WQL and MASE metrics, respectively, in zero-shot scenarios. Furthermore, our method achieves a 1.9X-2.7X acceleration in inference speed over baseline methods.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/a09124813cb1d71da89d8e30719206c0668c5330" target='_blank'>Apollo-Forecast: Overcoming Aliasing and Inference Speed Challenges in Language Models for Time Series Forecasting</a></td>
<td>
Tianyi Yin, Jingwei Wang, Yunlong Ma, Han Wang, Chenze Wang, Yukai Zhao, Min Liu, Weiming Shen, Yufeng Chen
</td>
<td>2024-12-16</td>
<td>ArXiv</td>
<td>0</td>
<td>7</td>
</tr>

<tr id="Deep learning has contributed remarkably to the advancement of time series analysis. Still, deep models can encounter performance bottlenecks in real-world data-scarce scenarios, which can be concealed due to the performance saturation with small models on current benchmarks. Meanwhile, large models have demonstrated great powers in these scenarios through large-scale pre-training. Continuous progress has been achieved with the emergence of large language models, exhibiting unprecedented abilities such as few-shot generalization, scalability, and task generality, which are however absent in small deep models. To change the status quo of training scenario-specific small models from scratch, this paper aims at the early development of large time series models (LTSM). During pre-training, we curate large-scale datasets with up to 1 billion time points, unify heterogeneous time series into single-series sequence (S3) format, and develop the GPT-style architecture toward LTSMs. To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task. The outcome of this study is a Time Series Transformer (Timer), which is generative pre-trained by next token prediction and adapted to various downstream tasks with promising capabilities as an LTSM. Code and datasets are available at: https://github.com/thuml/Large-Time-Series-Model.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/73f58b90697f957832f5090946894480849dea3a" target='_blank'>Timer: Generative Pre-trained Transformers Are Large Time Series Models</a></td>
@@ -98,7 +110,7 @@
</td>
<td>2024-02-04</td>
<td>ArXiv, DBLP</td>
- <td>21</td>
+ <td>26</td>
<td>67</td>
</tr>

@@ -110,7 +122,7 @@ hide:
</td>
<td>2023-10-05</td>
<td>ArXiv</td>
- <td>60</td>
+ <td>65</td>
<td>5</td>
</tr>

@@ -122,7 +134,7 @@ hide:
</td>
<td>2024-05-13</td>
<td>2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW)</td>
- <td>0</td>
+ <td>2</td>
<td>7</td>
</tr>

@@ -134,22 +146,10 @@ hide:
</td>
<td>2024-02-16</td>
<td>ArXiv</td>
- <td>5</td>
+ <td>7</td>
<td>8</td>
</tr>

<tr id="Time series forecasting holds significant importance in many real-world dynamic systems and has been extensively studied. Unlike natural language process (NLP) and computer vision (CV), where a single large model can tackle multiple tasks, models for time series forecasting are often specialized, necessitating distinct designs for different tasks and applications. While pre-trained foundation models have made impressive strides in NLP and CV, their development in time series domains has been constrained by data sparsity. Recent studies have revealed that large language models (LLMs) possess robust pattern recognition and reasoning abilities over complex sequences of tokens. However, the challenge remains in effectively aligning the modalities of time series data and natural language to leverage these capabilities. In this work, we present Time-LLM, a reprogramming framework to repurpose LLMs for general time series forecasting with the backbone language models kept intact. We begin by reprogramming the input time series with text prototypes before feeding it into the frozen LLM to align the two modalities. To augment the LLM's ability to reason with time series data, we propose Prompt-as-Prefix (PaP), which enriches the input context and directs the transformation of reprogrammed input patches. The transformed time series patches from the LLM are finally projected to obtain the forecasts. Our comprehensive evaluations demonstrate that Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models. Moreover, Time-LLM excels in both few-shot and zero-shot learning scenarios.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/16f01c1b3ddd0b2abd5ddfe4fdb3f74767607277" target='_blank'>Time-LLM: Time Series Forecasting by Reprogramming Large Language Models</a></td>
<td>
Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, X. Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, Qingsong Wen
</td>
<td>2023-10-03</td>
<td>ArXiv</td>
<td>195</td>
<td>9</td>
</tr>

</tbody>
<tfoot>
<tr>
