A PyTorch implementation of LSTM-Shuttle
LSTM-Shuttle is an implementation of "Speed Reading: Learning to Read ForBackward via Shuttle" by Tsu-Jui Fu and Wei-Yun Ma, presented at the Conference on Empirical Methods in Natural Language Processing (EMNLP) 2018 (long paper).
LSTM-Shuttle not only shuttles forward but can also go back. Shuttling forward enables high efficiency, while going backward gives the model a chance to recover lost information, leading to better predictions. It first reads a fixed number of words sequentially and outputs the hidden state. Then, based on that hidden state, LSTM-Shuttle computes a softmax distribution over the forward and backward shuttle steps.
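As a rough illustration, below is a minimal PyTorch sketch of one read-and-shuttle step. This is not the code from the notebooks; names such as ShuttleCell, read_size, and max_shuttle are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShuttleCell(nn.Module):
    """Minimal sketch of one LSTM-Shuttle read/shuttle step (assumed names)."""

    def __init__(self, vocab_size, embed_dim, hidden_dim,
                 read_size=20, max_shuttle=10):
        super().__init__()
        self.read_size = read_size      # fixed number of words read per step
        self.max_shuttle = max_shuttle  # shuttle range: -max_shuttle .. +max_shuttle
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # scores for the 2 * max_shuttle + 1 backward/forward step sizes
        self.shuttle = nn.Linear(hidden_dim, 2 * max_shuttle + 1)

    def forward(self, tokens, pos, state=None):
        # 1) read a fixed number of words sequentially, starting at `pos`
        chunk = self.embed(tokens[:, pos:pos + self.read_size])
        out, state = self.lstm(chunk, state)
        h = out[:, -1]                  # hidden state after the read
        # 2) shuttle softmax distribution over forward/backward steps
        dist = F.softmax(self.shuttle(h), dim=-1)
        # sample a step in [-max_shuttle, max_shuttle]; a negative step
        # moves the read position backward to recover skipped text
        step = torch.multinomial(dist, 1).squeeze(-1) - self.max_shuttle
        next_pos = pos + self.read_size + step
        return h, state, dist, next_pos
```

Because the sampled step is discrete, the paper trains the shuttle decision with policy gradient; the sketch above only covers the forward pass.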
This code is implemented in Python 3 and PyTorch. spaCy is also required: we use its pre-trained word embeddings.
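For example, one common way to turn spaCy's pre-trained vectors into an embedding matrix looks roughly like the sketch below. The model name en_core_web_md and the helper build_embedding_matrix are assumptions for illustration, not code from this repo.

```python
import numpy as np
import spacy

nlp = spacy.load("en_core_web_md")  # any spaCy model that ships word vectors

def build_embedding_matrix(vocab):
    """Build a (len(vocab), dim) matrix; `vocab` maps word -> row index."""
    dim = nlp.vocab.vectors_length
    matrix = np.zeros((len(vocab), dim), dtype="float32")
    for word, idx in vocab.items():
        matrix[idx] = nlp.vocab[word].vector  # zeros for out-of-vocabulary words
    return matrix
```

The resulting matrix can then initialize a PyTorch embedding layer, e.g. via nn.Embedding.from_pretrained(torch.from_numpy(matrix)).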
- LSTM-Vanilla: model_lstm-vanilla.ipynb
- LSTM-Jump: model_lstm-jump.ipynb
- LSTM-Shuttle: model_lstm-shuttle.ipynb
@inproceedings{fu2018lstm-shuttle,
  author    = {Tsu-Jui Fu and Wei-Yun Ma},
  title     = {{Speed Reading: Learning to Read ForBackward via Shuttle}},
  booktitle = {Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year      = {2018}
}