
Tagger and parser models used on our recipes corpus (data), handled with pre- and postprocessing scripts for data conversion (data-conversions)


Tagger and Parser (AllenNLP 2.8 Implementation)

Environment setup

  1. Create a conda environment with Python 3.8:
conda create -n allennlp python=3.8
  2. Activate the new environment:
conda activate allennlp
  3. Install AllenNLP (we use version 2.8.0) and other packages using pip:
pip install -r requirements.txt

Internal note: both environments are already set up on coli servers, see instructions in the Wiki.

Parameter configuration

Adjust parameters, including file paths, in the respective .json config files as needed. By default, the paths point to datasets in data; see the respective README files there for details about the datasets.

Both our models consume data in CoNLL format, where each line represents a token and columns are tab-separated. The tagger requires data in the CoNLL-2003 format; the relevant columns are the first (TEXT) and the fourth (LABEL). The parser requires data in the CoNLL-U format; the relevant columns are the second (FORM), the fifth (LABEL), the seventh (HEAD) and the eighth (DEPREL). The DEPRELS column contains additional dependency relations for tokens with more than one head.
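For reference, a minimal Python sketch (not part of this repo) of how the relevant columns map to positions in a tab-separated line of each format; the example lines are made up for illustration:

```python
def conll2003_token(line):
    """Extract TEXT (column 1) and LABEL (column 4) from a CoNLL-2003 line."""
    cols = line.rstrip("\n").split("\t")
    return {"text": cols[0], "label": cols[3]}

def conllu_token(line):
    """Extract FORM (col 2), LABEL (col 5), HEAD (col 7), DEPREL (col 8) from a CoNLL-U line."""
    cols = line.rstrip("\n").split("\t")
    return {"form": cols[1], "label": cols[4], "head": int(cols[6]), "deprel": cols[7]}

# Illustrative lines only; real label inventories come from the datasets in data.
print(conll2003_token("Preheat\t_\t_\tB-Ac"))
print(conllu_token("1\tPreheat\tpreheat\t_\tB-Ac\t_\t0\troot\t_\t_"))
```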

Available tagger configurations:

Available parser configurations:

For the ELMo taggers, we use the following ELMo parameters (i.e. options and weights):

Internal note: the ELMo options and weight files can be found on the Saarland servers at /proj/cookbook.shadow/elmo_english.

The weights and options files should be named and placed according to the paths specified in the .json files; alternatively, adjust the paths in the .json files.

Training

Run allennlp train [params] -s [serialization dir] to train a model, where

  • [params] is the path to the .json config file.
  • [serialization dir] is the directory in which to save the trained model, logs and other results.
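For example, a sketch of a training invocation (the config and directory names below are hypothetical; substitute your own, and the command only runs if allennlp is installed and the config exists):

```shell
# Hypothetical paths for illustration; not actual repo file names.
CONFIG=configs/tagger_elmo.json      # path to a .json config file
SER_DIR=models/tagger-elmo           # serialization directory

TRAIN_CMD="allennlp train $CONFIG -s $SER_DIR"

# Only run if allennlp is installed and the config file is present.
if command -v allennlp >/dev/null 2>&1 && [ -f "$CONFIG" ]; then
    $TRAIN_CMD
else
    echo "would run: $TRAIN_CMD"
fi
```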

Evaluation

Run allennlp evaluate [archive file] [input file] --output-file [output file] to evaluate the model on some evaluation data, where

  • [archive file] is the path to an archived trained model.
  • [input file] is the path to the file containing the evaluation data.
  • [output file] is an optional path to save the metrics as JSON; if not provided, the output will be displayed on the console.
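A corresponding evaluation sketch (again with hypothetical paths, guarded so it only runs when the archive actually exists):

```shell
# Hypothetical paths for illustration.
MODEL=models/tagger-elmo/model.tar.gz   # archived trained model
TEST_FILE=data/test.conll               # evaluation data

EVAL_CMD="allennlp evaluate $MODEL $TEST_FILE --output-file metrics.json"

# Only run if allennlp is installed and the model archive is present.
if command -v allennlp >/dev/null 2>&1 && [ -f "$MODEL" ]; then
    $EVAL_CMD
else
    echo "would run: $EVAL_CMD"
fi
```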

Performance (TODO - Needs updating)

ERRATUM (Donatelli et al., EMNLP 2021): Please refer to our Wiki page for a list of corrections, particularly concerning the reporting of results and comparability.

Our tagger's performance on our data split:

| Model | Corpus | Embedder | Precision | Recall | F-Score |
|---|---|---|---|---|---|
| Our tagger | 300-r by Y'20 | NER ELMo | 85.86% | 86.89% | 86.38% |
| Our tagger | 300-r by Y'20 | BERT-base-NER | 84.45% | 86.02% | 85.23% |
| Our tagger | 300-r by Y'20 | BERT-large-NER | 85.96% | 87.96% | 86.95% |

Parser performance on the English corpus (test.conllu):

| Tag Source | Precision | Recall | F-Score |
|---|---|---|---|
| gold tags | 80.4 | 76.1 | 78.2 |
| our tagger with ELMo embeddings | 74.4 | 70.4 | 72.3 |

Prediction

Run allennlp predict [archive file] [input file] --use-dataset-reader --output-file [output file] to parse a file with a pretrained model, where

  • [archive file] is the path to an archived trained model.
  • [input file] is the path to the file you want to parse; this file should be in the same format as the training data, i.e. CoNLL-2003 for the tagger and CoNLL-U for the parser.
  • --use-dataset-reader tells AllenNLP to read the input file with the same dataset reader that was used during training.
  • [output file] is an optional path to save parsing results as JSON; if not provided, the output will be displayed on the console.

The output of the parser is in JSON format. To convert it into the more readable CoNLL-U format, use data-scripts/json_to_conll.py. To get labeled evaluation results for parser output, use the script data-scripts/parser_evaluation.py. Instructions for both scripts can be found in data-scripts/README.md.
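To illustrate what such a conversion involves, here is a minimal sketch (the repo's json_to_conll.py is the authoritative script; the keys `words`, `pos`, `predicted_heads` and `predicted_dependencies` are assumed from AllenNLP's dependency-parser predictor output and may differ for this model):

```python
def json_to_conllu(pred):
    """Render one predicted sentence (a dict) as CoNLL-U lines.
    Assumes AllenNLP-style output keys; adjust to the actual parser output.
    The label goes in column 5 to match the format described above."""
    lines = []
    for i, (word, head, rel) in enumerate(
        zip(pred["words"], pred["predicted_heads"], pred["predicted_dependencies"]),
        start=1,
    ):
        tag = pred["pos"][i - 1]
        lines.append("\t".join([str(i), word, "_", "_", tag, "_", str(head), rel, "_", "_"]))
    return "\n".join(lines)

# Made-up example prediction for illustration.
example = {
    "words": ["Preheat", "oven"],
    "pos": ["B-Ac", "B-T"],
    "predicted_heads": [0, 1],
    "predicted_dependencies": ["root", "obj"],
}
print(json_to_conllu(example))
```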

For sample inputs and outputs see English/Samples.
