This is the official repository accompanying the EMNLP 2020 long paper *Reformulating Unsupervised Style Transfer as Paraphrase Generation*. It contains the accompanying dataset and codebase.
## Updates

- Outputs from baseline models (DLSM; UNMT; Transforming Delete, Retrieve, Generate) have been added to the Google Drive link. Please see `style_paraphrase/evaluation/README.md` for a script to run evaluation on baselines.
- Thanks to David Dale, our CoLA fluency classifier is now available on HuggingFace. They found this classifier correlates better with human judgements than other CoLA models (details in issue #36).
- Thanks to Filip Cornell, STRAP models are now available on HuggingFace and have been accepted to NL-Augmenter!
- We have open-sourced multilingual classifiers for formality evaluation. Please see `README-multilingual.md` for more details.
## Demos

The web demo for the system can be found here. The code and setup for the webpage can be found in `web-demo/README.md`. We also have a command-line demo for the paraphrase model; for more details, check `README_terminal_demo.md`.
## Outputs

All outputs generated by our model are available in the `outputs` folder. Contact me at kalpesh@cs.umass.edu for outputs on the Formality dataset (both our model and baselines) once you have received the GYAFC dataset. The outputs from baseline models have been added to `outputs/baselines`. Please see `style_paraphrase/evaluation/README.md` for a script to run evaluation on baselines.
## Setup

The code uses PyTorch 1.4+, HuggingFace's `transformers` library for training GPT-2 models, and Facebook AI Research's `fairseq` for evaluation using RoBERTa classifiers. To install PyTorch, look for the Python package compatible with your local CUDA setup here.

```
virtualenv style-venv
source style-venv/bin/activate
pip install torch torchvision
pip install -r requirements.txt
pip install --editable .

cd fairseq
pip install --editable .
```
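As a quick sanity check that the editable installs are importable (a sketch, assuming the package names from the install commands above):

```
python -c "import torch, transformers, fairseq; print(torch.__version__, transformers.__version__, fairseq.__version__)"
```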
To process custom datasets and run the classifier, you will need to download RoBERTa. Download the RoBERTa checkpoints from here, or follow the commands below. If you want a smaller model, you can also set up a `ROBERTA_BASE` variable using a similar process.

```
wget https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz
tar -xzvf roberta.large.tar.gz

# Add the following to your .bashrc file; feel free to store the model elsewhere on the hard disk
export ROBERTA_LARGE=$PWD/roberta.large
```
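For reference, a sketch of the analogous `roberta.base` setup (the checkpoint URL follows fairseq's published naming for the base model; verify it before relying on it):

```
wget https://dl.fbaipublicfiles.com/fairseq/models/roberta.base.tar.gz
tar -xzvf roberta.base.tar.gz
# add to your .bashrc, mirroring ROBERTA_LARGE above
export ROBERTA_BASE=$PWD/roberta.base
```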
## Datasets

All datasets will be added to this Google Drive link. Download the datasets and place them under `datasets`. The datasets currently available are (with their folder names):

- ParaNMT-50M filtered down to 75k pairs - `datasets/paranmt_filtered`
- Shakespeare style transfer - `datasets/shakespeare`
- Formality transfer - please follow the instructions here. Once you have access to the corpus, you could email me (kalpesh@cs.umass.edu) to get access to the preprocessed version. We will also add scripts to preprocess the raw data.
- Corpus of Diverse Styles - `datasets/cds`. Samples can be found in `samples/data_samples`. Please cite the original sources as well if you plan to use this dataset.
## Training / Fine-tuning

- To train the paraphrase model, run `style_paraphrase/examples/run_finetune_paraphrase.sh` (see the example launch command after this list).
- To train the inverse paraphrasers for Shakespeare, check the two scripts in `style_paraphrase/examples/shakespeare`.
- To train the inverse paraphrasers for Formality, check the two scripts in `style_paraphrase/examples/formality`. Note that you will need to email me asking for the preprocessed dataset once you have access to the GYAFC corpus (see the instructions in the Datasets section).
- To train models on CDS, please follow steps #2 and #5 under "Custom Datasets" below.
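For example, to kick off paraphrase-model fine-tuning (a sketch: depending on your setup, the example scripts may need to be submitted to a scheduler or have their paths adjusted; the inverse-paraphraser scripts in the `shakespeare` and `formality` folders are launched the same way):

```
# fine-tune the diverse paraphraser (assumes datasets/paranmt_filtered is in place)
bash style_paraphrase/examples/run_finetune_paraphrase.sh
```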
All the main pretrained models have been added to the Google Drive link.
To run a fine-tuning and evaluation script simultaneously with support for hyperparameter tuning, please see the code in `style_paraphrase/schedule.py` and `style_paraphrase/hyperparameters_config.py`. This is customized to SLURM; you might need to make minor adjustments for it to work on your cluster.
## Classifier Training

Classifiers are needed to evaluate style transfer performance. To train the classifiers, follow these steps:

- Install the local fork of `fairseq`, as discussed above in "Setup".
- Download the RoBERTa checkpoints as discussed above in "Setup".
- For training classifiers on the Shakespeare, CoLA or CDS datasets, download the `shakespeare-bin`, `cola-bin` or `cds-bin` folders from the Drive link here and place them under `datasets`. I can provide similar files for the Formality dataset once you have access to the original corpus.
- To train the classifiers, see the examples in `style_paraphrase/style_classify/examples`. You can also run a grid search (with a SLURM scheduler) using the code in `style_paraphrase/style_classify/schedule.py` (see the sketch after this list). We also have a lightweight Flask interface to plot performance across epochs, which works well with the SLURM grid-search automation; check `style_paraphrase/style_classify/webapp/run.sh`.
- For training on custom datasets, run the commands under "Custom Datasets" to create `fairseq` binary files for your dataset (steps #1 and #2). Then you can either modify the example scripts to point to your dataset or add an entry to `style_paraphrase/style_classify/schedule.py`. You will need to specify the number of classes and the total length of the dataset in that file, which is used to calculate the number of warmup steps.
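For example, launching the grid search and the monitoring interface might look like this (a sketch; `schedule.py` expects your dataset entries to be configured inside the file first, and `run.sh` may need host or port tweaks for your environment):

```
# launch the SLURM grid search over classifier hyperparameters
python style_paraphrase/style_classify/schedule.py

# optional: start the Flask interface for tracking accuracy across epochs
bash style_paraphrase/style_classify/webapp/run.sh
```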
## Evaluation

Please check `style_paraphrase/evaluation/README.md` for more details.
## Custom Datasets

Create a folder in `datasets` which will contain `new_dataset` as `datasets/new_dataset`. Paste your plaintext train/dev/test splits into this folder as `train.txt`, `dev.txt`, `test.txt`, with one instance per line (note that the model truncates sequences longer than 50 subwords). Add `train.label`, `dev.label`, `test.label` files (with the same number of lines as `train.txt`, `dev.txt`, `test.txt`); these files contain the style label of the corresponding instance. See this folder for examples of label files.
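A minimal sketch of preparing this layout, assuming your splits and labels already live in a hypothetical `my_corpus` folder:

```
mkdir -p datasets/new_dataset
# copy the plaintext splits (one instance per line)
cp my_corpus/train.txt my_corpus/dev.txt my_corpus/test.txt datasets/new_dataset/
# copy the matching label files (one style label per line, aligned with the *.txt files)
cp my_corpus/train.label my_corpus/dev.label my_corpus/test.label datasets/new_dataset/
# sanity check: each .label file should have the same number of lines as its .txt file
wc -l datasets/new_dataset/*
```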
1. To convert a plaintext dataset into its BPE form, run:

   ```
   python datasets/dataset2bpe.py --dataset datasets/new_dataset
   ```

   Note that this process is reversible. To convert a BPE file back into its raw text form, run `python datasets/bpe2text.py --input <input> --output <output>`.
2. Next, to convert the BPE codes to `fairseq` binaries and build a label dictionary, first make sure you have downloaded RoBERTa and set up the `$ROBERTA_LARGE` global variable in your `.bashrc` (see "Setup" for more details). Then run:

   ```
   datasets/bpe2binary.sh datasets/new_dataset
   ```
3. To train inverse paraphrasers you will need to paraphrase the dataset. First, download the pretrained model `paraphraser_gpt2_large` from here. After downloading the pretrained paraphrase model, run:

   ```
   python datasets/paraphrase_splits.py --dataset datasets/new_dataset
   ```
4. Add an entry to the `DATASET_CONFIG` dictionary in `style_paraphrase/dataset_config.py`, customizing the configuration if needed:

   ```
   "datasets/new_dataset": BASE_CONFIG
   ```
5. Enter your dataset in the hyperparameters file and run:

   ```
   python style_paraphrase/schedule.py
   ```
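Putting the steps together, a sketch of the preprocessing pipeline for a hypothetical `datasets/new_dataset` (assumes `$ROBERTA_LARGE` is exported and `paraphraser_gpt2_large` has been downloaded):

```
# step 1: plaintext -> BPE
python datasets/dataset2bpe.py --dataset datasets/new_dataset
# step 2: BPE -> fairseq binaries plus a label dictionary
datasets/bpe2binary.sh datasets/new_dataset
# step 3: paraphrase the splits for inverse-paraphraser training
python datasets/paraphrase_splits.py --dataset datasets/new_dataset
# steps 4 & 5: edit dataset_config.py and the hyperparameters file, then launch
python style_paraphrase/schedule.py
```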
## Custom Paraphrase Data

You can preprocess a TSV file of sentence pairs into a compatible format using:

```
python datasets/prepare_paraphrase_data.py \
    --input_file input.tsv \
    --output_folder datasets/custom_paraphrase_data \
    --train_fraction 0.95
```
## Citation

If you find this repository useful, please cite us:

```
@inproceedings{style20,
    author = {Kalpesh Krishna and John Wieting and Mohit Iyyer},
    booktitle = {Empirical Methods in Natural Language Processing},
    year = {2020},
    title = {Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
```