Codebase for A* CCG Parsing with a Supertag and Dependency Factored Model
- Increased stability and efficiency
  - (Replaced OpenMP with multiprocessing)
- More integration with AllenNLP
  - The parser is now callable from within a predictor (see here)
- A friendlier way to define your own grammar (with respect to languages or treebanks)
  - See depccg/grammar/{en,ja}.py for example grammars.
- Python >= 3.6.0
- A C++ compiler supporting C++11 standard (in case of gcc, must be >= 4.8)
Using pip:
➜ pip install cython numpy depccg
Currently, the following models are available for English:
Name | Description | unlabeled/labeled F1 on CCGbank | Download |
---|---|---|---|
basic | model trained on the combination of CCGbank and tri-training dataset (Yoshikawa et al., 2017) | 94.0%/88.8% | link (189M) |
elmo | basic model with its embeddings replaced with ELMo (Peters et al., 2018) | 94.98%/90.51% | link (649M) |
rebank | basic model trained on Rebanked CCGbank (Honnibal et al., 2010) | - | link (337M) |
elmo_rebank | ELMo model trained on Rebanked CCGbank | - | link (1G) |
The basic model can be downloaded by running:
➜ depccg_en download
To use:
➜ echo "this is a test sentence ." | depccg_en
ID=1, Prob=-0.0006299018859863281
(<T S[dcl] 0 2> (<T S[dcl] 0 2> (<L NP XX XX this NP>) (<T S[dcl]\NP 0 2> (<L (S[dcl]\NP)/NP XX XX is (S[dcl]\NP)/NP>) (<T NP 0 2> (<L NP[nb]/N XX XX a NP[nb]/N>) (<T N 0 2> (<L N/N XX XX test N/N>) (<L N XX XX sentence N>) ) ) ) ) (<L . XX XX . .>) )
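The examples here pipe a single sentence, but the same pattern works for a file of pre-tokenized sentences, one per line (a sketch; sentences.txt and parsed.auto are hypothetical file names):
➜ cat sentences.txt | depccg_en > parsed.auto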
You can download other models by specifying their names:
➜ depccg_en download elmo
To use it, make sure to install allennlp:
➜ echo "this is a test sentence ." | depccg_en --model elmo
You can also pass the --model option the path to a model file (in tar.gz) downloaded from one of the links above. Using a GPU (via the --gpu option) is recommended if possible.
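For instance, assuming you have downloaded one of the archives above, a run with an explicit model path might look like this (the file name is illustrative, and the --gpu option is assumed here to take a device id):
➜ echo "this is a test sentence ." | depccg_en --model elmo.tar.gz --gpu 0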
There are several output formats (see below).
➜ echo "this is a test sentence ." | depccg_en --format deriv
ID=1, Prob=-0.0006299018859863281
this        is           a      test   sentence   .
 NP   (S[dcl]\NP)/NP  NP[nb]/N  N/N       N       .
                                ---------------->
                                       N
                      -------------------------->
                                  NP
      ------------------------------------------>
                      S[dcl]\NP
------------------------------------------------<
                     S[dcl]
---------------------------------------------------<rp>
                      S[dcl]
By default, the input is expected to be pre-tokenized. If you want to process untokenized sentences, pass the --tokenize option.
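For example, raw text can be fed directly when the --tokenize option is given:
➜ echo "This is a test sentence." | depccg_en --tokenize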
The POS and NER tags in the output are filled with XX by default. You can replace them with ones predicted using SpaCy:
➜ echo "this is a test sentence ." | depccg_en --annotator spacy
ID=1, Prob=-0.0006299018859863281
(<T S[dcl] 0 2> (<T S[dcl] 0 2> (<L NP DT DT this NP>) (<T S[dcl]\NP 0 2> (<L (S[dcl]\NP)/NP VBZ VBZ is (S[dcl]\NP)/NP>) (<T NP 0 2> (<L NP[nb]/N DT DT a NP[nb]/N>) (<T N 0 2> (<L N/N NN NN test N/N>) (<L N NN NN sentence N>) ) ) ) ) (<L . . . . .>) )
The parser uses SpaCy's en_core_web_sm model.
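If that SpaCy model is not installed yet, it can be obtained with SpaCy's standard download command:
➜ pip install spacy
➜ python -m spacy download en_core_web_sm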
Alternatively, you can use the POS/NER taggers implemented in C&C, which may be useful in some parsing experiments:
➜ export CANDC=/path/to/candc
➜ echo "this is a test sentence ." | depccg_en --annotator candc
ID=1, log prob=-0.0006299018859863281
(<T S[dcl] 0 2> (<T S[dcl] 0 2> (<L NP DT DT this NP>) (<T S[dcl]\NP 0 2> (<L (S[dcl]\NP)/NP VBZ VBZ is (S[dcl]\NP)/NP>) (<T NP 0 2> (<L NP[nb]/N DT DT a NP[nb]/N>) (<T N 0 2> (<L N/N NN NN test N/N>) (<L N NN NN sentence N>) ) ) ) ) (<L . . . . .>) )
By default, depccg expects the POS and NER models to be placed in $CANDC/models/pos and $CANDC/models/ner, but you can specify them explicitly by setting the CANDC_MODEL_POS and CANDC_MODEL_NER environment variables.
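For example, with C&C models stored in a non-default location (the paths below are placeholders):
➜ export CANDC=/path/to/candc
➜ export CANDC_MODEL_POS=/path/to/pos/model
➜ export CANDC_MODEL_NER=/path/to/ner/model
➜ echo "this is a test sentence ." | depccg_en --annotator candc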
It is also possible to obtain logical formulas using ccg2lambda's semantic parsing algorithm.
➜ echo "This is a test sentence ." | depccg_en --format ccg2lambda --annotator spacy
ID=0 log probability=-0.0006299018859863281
exists x.(_this(x) & exists z1.(_sentence(z1) & _test(z1) & (x = z1)))
The best performing model can be downloaded by running:
➜ depccg_ja download
It can be downloaded directly here (56M).
The parser provides almost the same interface as the English one, with slight differences such as the default output format, which is compatible with the Japanese CCGbank:
➜ echo "これはテストの文です。" | depccg_ja
ID=1, Prob=-53.98793411254883
{< S[mod=nm,form=base,fin=t] {< S[mod=nm,form=base,fin=f] {< NP[case=nc,mod=nm,fin=f] {NP[case=nc,mod=nm,fin=f] これ/これ/**} {NP[case=nc,mod=nm,fin=f]\NP[case=nc,mod=nm,fin=f] は/は/**}} {< S[mod=nm,form=base,fin=f]\NP[case=nc,mod=nm,fin=f] {< NP[case=nc,mod=nm,fin=f] {< NP[case=nc,mod=nm,fin=f] {NP[case=nc,mod=nm,fin=f] テスト/テスト/**} {NP[case=nc,mod=nm,fin=f]\NP[case=nc,mod=nm,fin=f] の/の/**}} {NP[case=nc,mod=nm,fin=f]\NP[case=nc,mod=nm,fin=f] 文/文/**}} {(S[mod=nm,form=base,fin=f]\NP[case=nc,mod=nm,fin=f])\NP[case=nc,mod=nm,fin=f] です/です/**}}} {S[mod=nm,form=base,fin=t]\S[mod=nm,form=base,fin=f] 。/。/**}}
You can pass pre-tokenized sentences as well:
➜ echo "これ は テスト の 文 です 。" | depccg_ja --pre-tokenized
ID=1, Prob=-53.98793411254883
{< S[mod=nm,form=base,fin=t] {< S[mod=nm,form=base,fin=f] {< NP[case=nc,mod=nm,fin=f] {NP[case=nc,mod=nm,fin=f] これ/これ/**} {NP[case=nc,mod=nm,fin=f]\NP[case=nc,mod=nm,fin=f] は/は/**}} {< S[mod=nm,form=base,fin=f]\NP[case=nc,mod=nm,fin=f] {< NP[case=nc,mod=nm,fin=f] {< NP[case=nc,mod=nm,fin=f] {NP[case=nc,mod=nm,fin=f] テスト/テスト/**} {NP[case=nc,mod=nm,fin=f]\NP[case=nc,mod=nm,fin=f] の/の/**}} {NP[case=nc,mod=nm,fin=f]\NP[case=nc,mod=nm,fin=f] 文/文/**}} {(S[mod=nm,form=base,fin=f]\NP[case=nc,mod=nm,fin=f])\NP[case=nc,mod=nm,fin=f] です/です/**}}} {S[mod=nm,form=base,fin=t]\S[mod=nm,form=base,fin=f] 。/。/**}}
The following output formats are supported, selected with the --format option:
- auto - the most standard format following the AUTO format of the English CCGbank
- auto_extended - an extension of the auto format with combinator info and POS/NER tags
- deriv - visualized derivations in ASCII art
- xml - XML format compatible with C&C's XML format (only for English parsing)
- conll - CoNLL format
- html - visualized trees in MathML
- prolog - Prolog-like format
- jigg_xml - XML format compatible with Jigg
- ptb - Penn Treebank-style format
- ccg2lambda - logical formula converted from a derivation using ccg2lambda
- jigg_xml_ccg2lambda - jigg_xml format with ccg2lambda logical formulas inserted
- json - JSON format
- ja - a format adopted in the Japanese CCGbank (only for Japanese)
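For example, to get machine-readable output, the json format can be combined with SpaCy annotation:
➜ echo "this is a test sentence ." | depccg_en --format json --annotator spacy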
For the available options, please look into depccg/__main__.py.
You can use my allennlp-based supertagger and extend it.
To train a supertagger, prepare the English CCGbank and download vocab:
➜ cat ccgbank/data/AUTO/{0[2-9],1[0-9],20,21}/* > wsj_02-21.auto
➜ cat ccgbank/data/AUTO/00/* > wsj_00.auto
➜ wget http://cl.naist.jp/~masashi-y/resources/depccg/vocabulary.tar.gz
➜ tar xvf vocabulary.tar.gz
then,
➜ vocab=vocabulary train_data=wsj_02-21.auto test_data=wsj_00.auto gpu=0 \
encoder_type=lstm token_embedding_type=char \
allennlp train --include-package depccg --serialization-dir results depccg/allennlp/configs/supertagger.jsonnet
Training configs are passed either through environment variables or by editing the jsonnet config files directly; these are available as supertagger.jsonnet and supertagger_tritrain.jsonnet. The latter is a config for additionally using the tri-training silver data (309M) constructed in (Yoshikawa et al., 2017) on top of the English CCGbank.
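Once training finishes, the tagger's accuracy on the development data can be checked with AllenNLP's standard evaluate command (a sketch; this assumes the dataset reader bundled in the archive accepts the same AUTO files used above):
➜ allennlp evaluate results/model.tar.gz wsj_00.auto --include-package depccg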
To use the trained supertagger,
➜ echo '{"sentence": "this is a test sentence ."}' > input.jsonl
➜ allennlp predict results/model.tar.gz --include-package depccg --output-file weights.json input.jsonl
or alternatively, you can perform CCG parsing:
➜ allennlp predict --include-package depccg --predictor parser-predictor --predictor-args '{"grammar_json_path": "depccg/models/config_en.jsonnet"}' model.tar.gz input.jsonl
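Since the input file is in the JSON Lines format, several sentences can be parsed in one run by writing one JSON object per line (the second sentence is made up for illustration):
➜ echo '{"sentence": "this is a test sentence ."}' > input.jsonl
➜ echo '{"sentence": "this is another test sentence ."}' >> input.jsonl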
The standard CCG parsing evaluation can be performed with the following script:
➜ cat ccgbank/data/PARG/00/* > wsj_00.parg
➜ export CANDC=/path/to/candc
➜ python -m depccg.tools.evaluate wsj_00.parg wsj_00.predicted.auto
The script depends on C&C's generate program, which is available only by compiling the C&C parser from source.
(Currently, the above page is down. You can find the C&C parser here or here.)
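Putting the steps together, a typical evaluation run first parses the raw sentences of section 00 and then scores the result (a sketch; wsj_00.raw is a hypothetical file with one tokenized sentence per line):
➜ cat wsj_00.raw | depccg_en --annotator candc > wsj_00.predicted.auto
➜ python -m depccg.tools.evaluate wsj_00.parg wsj_00.predicted.auto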
For error analysis, you may want to see diffs between trees in an intuitive way. depccg.tools.diff does exactly this:
➜ python -m depccg.tools.diff file1.auto file2.auto > diff.html
which outputs an HTML file in which trees on the same lines of the two files are compared and the diffs are marked in color.
If you make use of this software, please cite the following:
@inproceedings{yoshikawa:2017acl,
author={Yoshikawa, Masashi and Noji, Hiroshi and Matsumoto, Yuji},
title={A* CCG Parsing with a Supertag and Dependency Factored Model},
booktitle={Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
publisher={Association for Computational Linguistics},
year={2017},
pages={277--287},
location={Vancouver, Canada},
doi={10.18653/v1/P17-1026},
url={http://aclweb.org/anthology/P17-1026}
}
MIT Licence
For questions and usage issues, please contact yoshikawa@tohoku.jp.
In creating the parser, I owe a great deal to: