COMP6248 Reproducibility Challenge: MixSeq: Connecting Macroscopic Time Series Forecasting with Microscopic Time Series Data
This repository is an attempt to reproduce the 2021 NeurIPS paper "MixSeq: Connecting Macroscopic Time Series Forecasting with Microscopic Time Series Data".
Original paper: https://arxiv.org/pdf/2110.14354.pdf
This report analyses and describes our attempt to reproduce the paper "MixSeq: Connecting Macroscopic Time Series Forecasting with Microscopic Time Series Data". Under the assumption that a macroscopic time series follows a mixture distribution, the authors hypothesise that the lower variance of the constituent latent mixture components can improve the estimation of the macroscopic time series. Reimplementing the proposed model proved challenging, so we developed our own implementation based on this conjecture in order to test its validity.
This repository implements a variety of training scenarios for a Variational Recurrent Auto-Encoder (VRAE) on time series data, following the MixSeq paper. We evaluate the clustering capability of VRAEs on microscopic time series using synthetic data generated from ARMA processes, and observe good performance in this setting. Since several parts of the paper were not reproducible as described, we also propose further architectures based on ideas put forward in the paper, but we did not have enough information to reproduce the original results exactly.
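The synthetic microscopic series used in our experiments are drawn from ARMA processes. Below is a minimal, hedged sketch of how such clustered data can be generated with statsmodels; the ARMA coefficients and sample sizes here are illustrative and not necessarily those used in the paper or in our scripts.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

def generate_arma_clusters(n_per_cluster=100, length=200, seed=0):
    """Generate synthetic microscopic series from a few ARMA(p, q) processes.

    Each cluster uses its own (illustrative) AR/MA coefficients, so the
    resulting series form groups that a clustering model should recover.
    """
    rng = np.random.default_rng(seed)
    # statsmodels convention: the first element is the zero-lag term, and the
    # AR signs are negated relative to the usual difference-equation form.
    cluster_params = [
        (np.array([1.0, -0.5]), np.array([1.0, 0.4])),   # ARMA(1, 1)
        (np.array([1.0, 0.7]),  np.array([1.0, -0.3])),  # ARMA(1, 1)
        (np.array([1.0, -0.2, 0.3]), np.array([1.0])),   # ARMA(2, 0)
    ]
    series, labels = [], []
    for label, (ar, ma) in enumerate(cluster_params):
        proc = ArmaProcess(ar, ma)
        for _ in range(n_per_cluster):
            x = proc.generate_sample(
                nsample=length, burnin=100, distrvs=rng.standard_normal
            )
            series.append(x.astype(np.float32))
            labels.append(label)
    return np.stack(series), np.array(labels)

if __name__ == "__main__":
    X, y = generate_arma_clusters()
    print(X.shape, y.shape)  # (300, 200) (300,)
```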
Training
Folder: Vae_clustering
main.py - Trains the VRAE. (!) Warning: running this may overwrite the pretrained weights. A minimal sketch of the model is shown below.
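The VRAE follows the standard variational recurrent auto-encoder setup: a recurrent encoder maps each series to a Gaussian posterior over a latent code, and a recurrent decoder reconstructs the series from a sample of that code. A minimal PyTorch sketch is given below; the layer sizes and decoder conditioning are illustrative assumptions, not the exact configuration in main.py.

```python
import torch
import torch.nn as nn

class VRAE(nn.Module):
    """Minimal variational recurrent auto-encoder for fixed-length series."""

    def __init__(self, seq_len, input_dim=1, hidden_dim=64, latent_dim=16):
        super().__init__()
        self.seq_len = seq_len
        self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        # x: (batch, seq_len, input_dim)
        _, h = self.encoder(x)                  # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        # Condition the decoder only through its initial hidden state
        # (an assumption for this sketch); inputs are zeros.
        h0 = torch.tanh(self.latent_to_hidden(z)).unsqueeze(0)
        dec_out, _ = self.decoder(torch.zeros_like(x), h0)
        recon = self.out(dec_out)
        return recon, mu, logvar

def vrae_loss(recon, x, mu, logvar):
    """Reconstruction error plus KL divergence to the standard normal prior."""
    recon_loss = nn.functional.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl
```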
Run
Folder: Vae_clustering
generate_plot.py - Generates clustering plots and sensitivity-analysis plots similar to those in Figure 1 of the report.
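One way to produce such clustering plots is to encode each series to its posterior mean, cluster the latent codes with k-means, and project them to 2-D with PCA. The sketch below assumes the VRAE interface from the sketch above (forward returns reconstruction, mu, logvar); the exact procedure in generate_plot.py may differ.

```python
import matplotlib.pyplot as plt
import torch
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

@torch.no_grad()
def plot_latent_clusters(model, X, n_clusters=3, path="clusters.png"):
    """Encode series with a trained VRAE, cluster the latent means, and plot."""
    model.eval()
    x = torch.as_tensor(X, dtype=torch.float32).unsqueeze(-1)  # (N, T, 1)
    _, mu, _ = model(x)                 # posterior means as cluster features
    z = mu.cpu().numpy()
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(z)
    z2d = PCA(n_components=2).fit_transform(z)
    plt.figure(figsize=(6, 5))
    plt.scatter(z2d[:, 0], z2d[:, 1], c=labels, cmap="tab10", s=10)
    plt.title("VRAE latent space (PCA), coloured by k-means cluster")
    plt.savefig(path, dpi=150)
    plt.close()
```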
Synthetic data with DeepAR
Folder: DeepAR
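For orientation, the core idea behind DeepAR is an autoregressive RNN that outputs a distribution over the next value and is trained by maximum likelihood. The sketch below is a minimal DeepAR-style model in plain PyTorch with a Gaussian output head; it is an assumption-laden simplification, not the exact model or training loop used in the DeepAR folder.

```python
import torch
import torch.nn as nn

class DeepARStyle(nn.Module):
    """Autoregressive LSTM that outputs a Gaussian over the next value."""

    def __init__(self, hidden_dim=40, num_layers=2):
        super().__init__()
        self.rnn = nn.LSTM(1, hidden_dim, num_layers, batch_first=True)
        self.mu = nn.Linear(hidden_dim, 1)
        self.sigma = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        # x: (batch, time, 1); predict a distribution over x[t+1] from x[:t+1]
        h, _ = self.rnn(x)
        mu = self.mu(h)
        sigma = nn.functional.softplus(self.sigma(h)) + 1e-4
        return mu, sigma

def nll_loss(mu, sigma, target):
    """Negative log-likelihood of the targets under the predicted Gaussians."""
    dist = torch.distributions.Normal(mu, sigma)
    return -dist.log_prob(target).mean()

# One training step with teacher forcing: predict each next point
# from the preceding observations (toy data shown for illustration).
model = DeepARStyle()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
series = torch.randn(32, 101, 1)          # toy batch of 32 series
mu, sigma = model(series[:, :-1])
loss = nll_loss(mu, sigma, series[:, 1:])
optim.zero_grad(); loss.backward(); optim.step()
```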
Multihead Attention Structure
Folder: Multihead Attention
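This folder explores replacing the recurrent encoder with a self-attention encoder. Below is a minimal sketch using PyTorch's built-in nn.MultiheadAttention; the embedding size, number of heads, positional encoding, and pooling are illustrative assumptions rather than the exact architecture in this folder.

```python
import torch
import torch.nn as nn

class AttentionEncoder(nn.Module):
    """Self-attention encoder producing a fixed-size embedding of a series."""

    def __init__(self, seq_len, d_model=64, n_heads=4):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)
        # Learned positional embedding (an assumption for this sketch).
        self.pos_emb = nn.Parameter(torch.randn(1, seq_len, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        # x: (batch, seq_len, 1)
        h = self.input_proj(x) + self.pos_emb
        attn_out, _ = self.attn(h, h, h)       # self-attention over time steps
        h = self.norm(h + attn_out)            # residual connection + layer norm
        return h.mean(dim=1)                   # pool over time -> (batch, d_model)

# Example: embed a batch of 8 series of length 200
enc = AttentionEncoder(seq_len=200)
emb = enc(torch.randn(8, 200, 1))
print(emb.shape)  # torch.Size([8, 64])
```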