PyTorch Implementation of "Monotonic Chunkwise Attention" (ICLR 2018); a minimal sketch of the mechanism appears after this list
Repository for Attention Algorithm
Transliteration via sequence-to-sequence transduction with hard monotonic attention, based on our EMNLP 2018 paper
Feature Selection Gates with Gradient Routing
45k-context transformer for splice site prediction, implemented in PyTorch. The code will be added soon.
Recurrent Visual Attention in PyTorch, applied to Catch and MNIST classification; a glimpse-sampling sketch appears below the list
End-to-end trainable autoregressive and non-autoregressive transducers using hard attention
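Several of the repositories above (the MoChA implementation, the transliteration model, and the hard-attention transducers) share the same inference-time idea: the decoder scans the encoder memory left to right and makes a hard, monotone attend/stop decision at each position. The sketch below illustrates one greedy decoding step under that scheme; the function name, the fixed 0.5 threshold, and the chunk width are illustrative assumptions, not code from any repository listed here.

```python
import torch

def hard_monotonic_step(energies, prev_index, chunk_width=3):
    """One greedy inference step of hard monotonic (chunkwise) attention.

    energies:    (T,) attention energies over the encoder memory
    prev_index:  memory index attended at the previous output step
    chunk_width: width of the MoChA-style chunk (illustrative value)

    Scans left to right from prev_index, halts at the first position
    whose selection probability sigmoid(e_j) >= 0.5, then soft-attends
    over a fixed-width chunk ending at that position.
    """
    p_select = torch.sigmoid(energies)
    T = energies.size(0)
    # First position where the hard stop/attend decision fires;
    # fall back to the last memory entry if none fires.
    t = next((j for j in range(prev_index, T) if p_select[j] >= 0.5), T - 1)
    # MoChA: soft attention restricted to a chunk ending at t.
    lo = max(0, t - chunk_width + 1)
    chunk_weights = torch.softmax(energies[lo:t + 1], dim=0)
    return t, lo, chunk_weights
```

During training, these models replace the hard left-to-right scan with a differentiable expected alignment, so a sketch like this covers decoding only.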
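For the recurrent visual attention entry, "hard attention" means sampling a spatial glimpse location rather than a memory index. Below is a minimal sketch, assuming a (C, H, W) image tensor and a location policy that outputs coordinates in [-1, 1]; the function name, patch size, and Gaussian standard deviation are illustrative assumptions.

```python
import torch

def sample_glimpse(image, loc_mean, size=8, std=0.1):
    """Sample a glimpse location (hard visual attention) and crop a patch.

    image:    (C, H, W) input image tensor
    loc_mean: (2,) predicted (x, y) location in [-1, 1]

    Samples loc ~ N(loc_mean, std^2); the crop is non-differentiable,
    so the location policy is typically trained with REINFORCE.
    """
    loc = torch.clamp(loc_mean + std * torch.randn(2), -1.0, 1.0)
    _, H, W = image.shape
    # Map [-1, 1] coordinates to a valid top-left corner for the patch.
    cx = int((loc[0] + 1) / 2 * (W - size))
    cy = int((loc[1] + 1) / 2 * (H - size))
    return image[:, cy:cy + size, cx:cx + size], loc
```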