A clean PyTorch implementation of the original Transformer model + a German -> English translation example (Python, updated Jan 24, 2022)
Cross-lingual Visual Pre-training for Multimodal Machine Translation
Neural Machine Translation using Local Attention
[KDD 2024] Improving the Consistency in Cross-Lingual Cross-Modal Retrieval with 1-to-K Contrastive Learning
The Transformer model implemented from scratch using PyTorch. The model uses weight sharing between the embedding layers and the pre-softmax linear layer. Training on the Multi30k machine translation task is shown.
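The weight sharing mentioned above ties the token embedding matrix to the pre-softmax projection, as in the original Transformer paper. A minimal sketch of how this can be done in PyTorch (the class name, sizes, and method names here are illustrative, not taken from the repo):

```python
import torch
import torch.nn as nn

class TiedOutputHead(nn.Module):
    """Sketch: share one weight matrix between the token embedding
    and the pre-softmax linear layer (hypothetical helper class)."""
    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # bias=False so the projection is exactly the transposed embedding
        self.proj = nn.Linear(d_model, vocab_size, bias=False)
        self.proj.weight = self.embed.weight  # one shared parameter
        self.d_model = d_model

    def embed_tokens(self, tokens: torch.Tensor) -> torch.Tensor:
        # The paper scales embeddings by sqrt(d_model)
        return self.embed(tokens) * self.d_model ** 0.5

    def logits(self, hidden: torch.Tensor) -> torch.Tensor:
        return self.proj(hidden)

head = TiedOutputHead(vocab_size=100, d_model=16)
# Both layers now point at the same underlying storage
assert head.proj.weight.data_ptr() == head.embed.weight.data_ptr()
```

Tying the matrices roughly halves the parameter count of the vocabulary-facing layers and often regularizes small translation models like those trained on Multi30k.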
Full training pipeline of an attention model for a machine translation task with PyTorch
English-To-German Neural Machine Translation Using Transformer
A basic seq2seq Transformer model trained and validated on the Multi30k dataset.
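The core forward pass of such a basic seq2seq Transformer can be sketched with `torch.nn.Transformer`. The vocabulary sizes, model width, and layer counts below are placeholders, not the repo's actual hyperparameters:

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only
SRC_VOCAB, TGT_VOCAB, D_MODEL = 1000, 1000, 64

src_embed = nn.Embedding(SRC_VOCAB, D_MODEL)
tgt_embed = nn.Embedding(TGT_VOCAB, D_MODEL)
model = nn.Transformer(d_model=D_MODEL, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2,
                       batch_first=True)
generator = nn.Linear(D_MODEL, TGT_VOCAB)  # pre-softmax projection

src = torch.randint(0, SRC_VOCAB, (8, 12))  # (batch, src_len) token ids
tgt = torch.randint(0, TGT_VOCAB, (8, 10))  # (batch, tgt_len) token ids

# Causal mask so each target position only attends to earlier positions
tgt_mask = model.generate_square_subsequent_mask(tgt.size(1))

out = model(src_embed(src), tgt_embed(tgt), tgt_mask=tgt_mask)
logits = generator(out)  # (batch, tgt_len, TGT_VOCAB)
```

During training, `logits` would be fed to a cross-entropy loss against the shifted target sequence; a real pipeline would also add positional encodings and padding masks, which are omitted here for brevity.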