Match-LSTM

Here we implement the Match-LSTM (Wang and Jiang, 2016), R-NET (Wang et al., 2017), and M-Reader (Hu et al., 2017) models on SQuAD (Rajpurkar et al., 2016).

Some implementation details may differ from the original papers.

Requirements

Experiments

The Match-LSTM+ model is a slightly modified version of Match-LSTM (a sketch of the key layer follows this list):

  • replace LSTM with GRU
  • add a gated-attention match layer, as in R-NET
  • add a separate character-level encoding
  • add additional features, as in M-Reader
  • add an aggregation layer with one GRU layer
  • initialize the first GRU state in the pointer network
  • add a fully connected layer after the match layer
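
The gated-attention match layer is the core of these changes. Below is a minimal PyTorch sketch of such a layer, combining an R-NET-style gate with a GRU cell as Match-LSTM+ does. It is an illustration under assumed names and shapes (GatedAttentionMatch, hidden_size), not the repository's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttentionMatch(nn.Module):
    """Illustrative gated-attention match layer (R-NET-style gate + GRU cell)."""

    def __init__(self, hidden_size):
        super().__init__()
        self.attn = nn.Linear(3 * hidden_size, 1)                 # scores over [u_q; u_p; state]
        self.gate = nn.Linear(2 * hidden_size, 2 * hidden_size)   # gate over [u_p; context]
        self.cell = nn.GRUCell(2 * hidden_size, hidden_size)      # GRU instead of LSTM

    def forward(self, question, passage):
        # question: (batch, q_len, hidden), passage: (batch, p_len, hidden)
        batch, p_len, hidden = passage.size()
        state = passage.new_zeros(batch, hidden)
        outputs = []
        for t in range(p_len):
            u_p = passage[:, t, :]
            # attention of the current passage word over all question words
            expanded = torch.cat(
                [question,
                 u_p.unsqueeze(1).expand_as(question),
                 state.unsqueeze(1).expand_as(question)], dim=-1)
            alpha = F.softmax(self.attn(expanded).squeeze(-1), dim=-1)
            context = torch.bmm(alpha.unsqueeze(1), question).squeeze(1)
            # gate the concatenated input, then take one recurrent step
            rnn_in = torch.cat([u_p, context], dim=-1)
            rnn_in = torch.sigmoid(self.gate(rnn_in)) * rnn_in
            state = self.cell(rnn_in, state)
            outputs.append(state)
        return torch.stack(outputs, dim=1)   # (batch, p_len, hidden)
```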

Evaluation results on the SQuAD dev set:

Model                      EM    F1
Match-LSTM+ (our version)  70.2  79.2
Match-LSTM (paper)         64.1  73.9
R-NET-45 (our version)     64.2  73.6
R-NET (paper)              72.3  80.6
M-Reader (our version)     70.4  79.6
M-Reader+RL (paper)        72.1  81.6

'R-NET-45' refers to R-NET with a hidden size of 45.

Usage

python run.py [preprocess/train/test] [-c config_file] [-o ans_path]
  • -c config_file: Defines the dataset, model, training method, and so on. Default: config/global_config.yaml
  • -o ans_path: see the Test step below

There are several models you can choose from in config/global_config.yaml, such as 'match-lstm', 'match-lstm+', 'r-net' and 'm-reader'. View and modify the file as needed.
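
As an illustration of the config-driven setup only, loading the config and picking a model could look like the sketch below; the 'model'/'name' keys are assumptions and may not match the actual layout of global_config.yaml.

```python
import yaml  # PyYAML

with open("config/global_config.yaml") as f:
    config = yaml.safe_load(f)

# hypothetical layout: the config defines the dataset, model, and training settings
model_name = config.get("model", {}).get("name", "match-lstm+")
assert model_name in ("match-lstm", "match-lstm+", "r-net", "m-reader")
print("selected model:", model_name)
```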

Preprocess

  1. Put the GloVe embeddings file in the data/ directory
  2. Put the SQuAD dataset in the data/ directory
  3. Run python run.py preprocess to generate an HDF5 file of the SQuAD dataset

Note that preprocessing will take a long time if multiple features are used, possibly close to an hour.
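
Conceptually, preprocessing turns the SQuAD JSON and the GloVe vocabulary into padded index arrays stored in HDF5. The sketch below shows that idea only; the file names, dataset keys, padding lengths, and whitespace tokenizer are assumptions, not the repository's actual pipeline.

```python
import json
import h5py
import numpy as np

def build_vocab(glove_path):
    """Assign an integer id to every GloVe token (0 is reserved for padding)."""
    word2id = {"<pad>": 0}
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            word2id.setdefault(line.split(" ", 1)[0], len(word2id))
    return word2id

def encode(text, word2id, max_len):
    """Whitespace-tokenize, map to ids, and pad/truncate to max_len."""
    ids = [word2id.get(w.lower(), 0) for w in text.split()][:max_len]
    return ids + [0] * (max_len - len(ids))

word2id = build_vocab("data/glove.840B.300d.txt")            # assumed GloVe file name
with open("data/SQuAD/train-v1.1.json", encoding="utf-8") as f:
    squad = json.load(f)

contexts, questions = [], []
for article in squad["data"]:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            contexts.append(encode(paragraph["context"], word2id, 400))
            questions.append(encode(qa["question"], word2id, 50))

with h5py.File("data/squad_train.h5", "w") as h5:            # assumed output name
    h5.create_dataset("context", data=np.asarray(contexts, dtype=np.int64))
    h5.create_dataset("question", data=np.asarray(questions, dtype=np.int64))
```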

Train

python run.py train

Test

python run.py test [-o ans_file]
  • -o ans_file: Writes the answer for each question-context pair, keyed by its unique id, to ans_file.

Note that we use data/model-weight.pt as the model weights file by default. You can modify the config file to set a different weights file.

Evaluate

python helper_run/evaluate-v1.1.py [dataset_file] [prediction_file]
  • dataset_file: the ground-truth dataset, e.g. data/SQuAD/dev-v1.1.json
  • prediction_file: your model's predictions on the dataset; you can use the ans_file from the Test step (see the format sketch below)
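
The official evaluate-v1.1.py script expects the prediction file to be a single JSON object mapping each question id to its predicted answer string. A minimal sketch of writing such a file by hand (the ids and answers below are placeholders, not real model output):

```python
import json

# prediction format consumed by evaluate-v1.1.py: {question_id: answer_string, ...}
predictions = {
    "56be4db0acb8001400a502ec": "Denver Broncos",
    "56be4db0acb8001400a502ed": "Carolina Panthers",
}

with open("data/predictions.json", "w", encoding="utf-8") as f:
    json.dump(predictions, f, ensure_ascii=False)
```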

Analysis

python helper_run/analysis_[*].py

Here we provide some scripts to analyze your model output, such as analysis_log.py, analysis_ans.py, and analysis_dataset.py. View and explore them.

Reference

License

MIT
