This repository accompanies the paper "HIT-SCIR at MRP 2019: A Unified Pipeline for Meaning Representation Parsing via Efficient Training and Effective Encoding", providing code to train models and to pre/post-process the MRP datasets.
CoNLL2019 Shared Task Official Website: http://mrp.nlpl.eu/
- Python 3.6
- JAMR
- NLTK
- Gensim
- Penman
- AllenNLP 0.9.0
For JAMR installation, please refer to #2.
The full training data is available at mrp-data.
Download the models from google-drive (CoNLL 2019 submission version).
For prediction, please specify the BERT path in config.json so that the bert-indexer and bert-embedder can be loaded. More prediction commands can be found in bash/predict.sh.
As for the BERT versions, DM/PSD/UCCA/EDS use cased_L-12_H-768_A-12 (cased-bert-base) and AMR uses wwm_cased_L-24_H-1024_A-16 (wwm-cased-bert-large).
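As a rough sketch, the BERT path is usually wired into an AllenNLP 0.9.0 config through the bert-pretrained token indexer and embedder; the exact key layout in this repository's config.json may differ, and /path/to/bert is a placeholder:

```json
{
  "dataset_reader": {
    "token_indexers": {
      "bert": {
        "type": "bert-pretrained",
        "pretrained_model": "/path/to/bert",
        "do_lowercase": false
      }
    }
  },
  "model": {
    "text_field_embedder": {
      "bert": {
        "type": "bert-pretrained",
        "pretrained_model": "/path/to/bert"
      }
    }
  }
}
```

Set do_lowercase to true only when using an uncased BERT model; both models listed above are cased.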
We use the conllu-format companion data. The following command merges companion.conllu into data.mrp and writes the result to data.aug.mrp:
python3 toolkit/augment_data.py \
companion.conllu \
data.mrp \
data.aug.mrp
For the evaluation data, you need to convert the udpipe output to conllu format and split the raw input into 5 files. Run this command instead:
python3 toolkit/preprocess_eval.py \
udpipe.mrp \
input.mrp \
--outdir /path/to/output
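The splitting step above can be pictured as grouping JSON-Lines records by framework. This is a hedged sketch, not the actual toolkit/preprocess_eval.py logic; it assumes each record carries a "framework" field, whereas the real script may key on a different field (e.g. "targets"):

```python
# Hedged sketch: split an MRP-style JSON-Lines file into per-framework groups.
# The "framework" field is an assumption about the record layout.
import json
from collections import defaultdict

def split_by_framework(lines):
    """Group JSON-Lines records by their 'framework' value."""
    groups = defaultdict(list)
    for line in lines:
        record = json.loads(line)
        groups[record["framework"]].append(line)
    return dict(groups)

# Each group would then be written to its own file under --outdir.
```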
Unlike the other four parsers, our AMR parser accepts input in an augmented AMR format instead of the mrp format.
Since TAMR's alignment is built on the JAMR alignment results, you need to set the JAMR and CDEC paths in bash/amr_preprocess.sh and run the command below.
bash bash/amr_preprocess.sh \
data.aug.mrp \
/path/to/word2vec
The final output is data.aug.mrp.actions.aug.txt, which can be fed to the AMR parser.
Following TAMR, it is recommended to use the glove.840B.300d embeddings and to filter them by the words and concepts (with word-sense suffixes trimmed) that occur in the data.
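The filtering step can be sketched as follows. This is an illustrative assumption rather than the repository's actual preprocessing code; in particular, the sense-suffix convention (e.g. a concept like "run-01" reducing to "run") and the helper names are hypothetical:

```python
# Hedged sketch: trim AMR word-sense suffixes and filter GloVe-style
# embedding rows down to the vocabulary seen in the data.
import re

def trim_sense(concept):
    """Strip a trailing sense suffix such as '-01' from an AMR concept."""
    return re.sub(r"-\d+$", "", concept)

def filter_embeddings(embedding_lines, vocab):
    """Keep only embedding rows whose leading token is in vocab."""
    kept = []
    for line in embedding_lines:
        token = line.split(" ", 1)[0]
        if token in vocab:
            kept.append(line)
    return kept
```

In practice the vocabulary would be built from both the surface words and the sense-trimmed concepts before filtering the full glove.840B.300d file.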
Training is based on AllenNLP; a typical training command looks like:
CUDA_VISIBLE_DEVICES=${gpu_id} \
TRAIN_PATH=${train_set} \
DEV_PATH=${dev_set} \
BERT_PATH=${bert_path} \
WORD_DIM=${bert_output_dim} \
LOWER_CASE=${whether_bert_is_uncased} \
BATCH_SIZE=${batch_size} \
allennlp train \
-s ${model_save_path} \
--include-package utils \
--include-package modules \
--file-friendly-logging \
${config_file}
Refer to bash/train.sh for more detailed examples.
A typical prediction command looks like:
CUDA_VISIBLE_DEVICES=${gpu_id} \
allennlp predict \
--cuda-device 0 \
--output-file ${output_path} \
--predictor ${predictor_class} \
--include-package utils \
--include-package modules \
--batch-size ${batch_size} \
--silent \
${model_save_path} \
${test_set}
More examples can be found in bash/predict.sh.
- bash/: command pipelines and examples
- config/: Jsonnet config files
- metrics/: metrics used in training and evaluation
- modules/: implementations of modules
- toolkit/: external libraries and dataset tools
- utils/: code for input/output and pre/post-processing
Thanks to the task organizers, and thanks also to the developers of AllenNLP, JAMR, and TAMR.
For further information, please contact lxdou@ir.hit.edu.cn or yxu@ir.hit.edu.cn.