The RCZoo project is a toolkit for reading comprehension models. It contains PyTorch reimplementations of multiple reading comprehension models. All models are trained and tested on the SQuAD v1.1 dataset and reach the performance reported in the original papers.
## Requirements
- Python 3.5
- PyTorch 0.4
- tqdm
## Performance
We train each model on the training set for 40 epochs and report the best performance on the dev set.
Model | Exact Match | F1 |
---|---|---|
Rnet | 69.25 | 78.97 |
BiDAF | 70.47 | 79.90 |
documentqa | 71.47 | 80.84 |
DrQA | 68.39 | 77.90 |
QAnet | ... | ... |
SLQA | 67.09 | 76.67 |
FusionNet | 68.27 | 77.79 |
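The Exact Match and F1 columns above follow the SQuAD evaluation: EM checks whether the normalized prediction equals the normalized gold answer, and F1 is a token-overlap score. A minimal sketch, simplified from the official SQuAD script (which additionally takes the max over multiple gold answers per question):

```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """SQuAD-style normalization: lowercase, drop punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, ground_truth):
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))

def f1_score(prediction, ground_truth):
    pred_tokens = normalize_answer(prediction).split()
    gt_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `f1_score("the cat sat", "cat sat down")` gives 0.8: after normalization the prediction has 2 of its 2 tokens in common with the 3-token gold answer.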
### Rnet
- training
- performance
- predicting scripts

There are some differences in the dropout layer.
### BiDAF
- training
- performance
- predicting scripts

The bi-attention in BiDAF did not work well, so I use the co-attention from the DCN paper instead. The final result is better than that reported in the original paper.
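The co-attention swap mentioned above can be sketched as follows. This is a minimal DCN-style co-attention, not the repo's actual layer; the function name and the final concatenation scheme are my assumptions:

```python
import torch

def coattention(context, question):
    """DCN-style co-attention sketch (hypothetical, not the repo's exact layer).

    context:  (batch, n, d) encoded passage
    question: (batch, m, d) encoded question
    returns:  (batch, n, 3d) question-aware context representation
    """
    # Affinity matrix between every context and question position: (batch, n, m)
    affinity = torch.bmm(context, question.transpose(1, 2))
    # Attention over question words for each context word, and vice versa
    att_q = torch.softmax(affinity, dim=2)   # (batch, n, m)
    att_c = torch.softmax(affinity, dim=1)   # (batch, n, m)
    # Context-to-question summaries
    c2q = torch.bmm(att_q, question)         # (batch, n, d)
    # Second-level summaries routed back to context positions
    q2c = torch.bmm(att_q, torch.bmm(att_c.transpose(1, 2), context))  # (batch, n, d)
    # Concatenate original context with both attended views
    return torch.cat([context, c2q, q2c], dim=2)
```

For a passage of length n and question of length m, the layer is O(n·m·d) and fuses information in both directions, which is the property the DCN paper argues for over one-directional attention.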
### documentqa
- training
- performance
- predicting scripts

Borrowed from the original code.
### DrQA
- training
- performance
- predicting scripts
### QAnet
- training
- performance
- predicting scripts
### SLQA
- training
- performance
- predicting scripts

No ELMo contextualized embeddings.
### FusionNet
- training
- performance
- predicting scripts

No CoVe embeddings.
## Usage
- Run `sh download.sh` to download the dataset and the GloVe embeddings.
- Run `sh train_xxx.sh` to start the training process. During training, the model is evaluated on the dev set after each epoch.
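The per-epoch evaluation described above amounts to a keep-the-best-checkpoint loop. A minimal sketch; `train_one_epoch`, `evaluate`, and `save_checkpoint` are hypothetical stand-ins, not the repo's actual function names:

```python
import random

def train_one_epoch(model):
    """Placeholder for one forward/backward pass over the training set."""
    pass

def evaluate(model):
    """Placeholder dev-set evaluation; returns (exact_match, f1). Dummy values here."""
    return random.uniform(60, 70), random.uniform(70, 80)

def save_checkpoint(model, path="best_model.pt"):
    """Placeholder; in PyTorch this would be torch.save(model.state_dict(), path)."""
    pass

def train(model, epochs=40):
    best_f1 = 0.0
    for _ in range(epochs):
        train_one_epoch(model)
        em, f1 = evaluate(model)
        # Keep only the checkpoint with the best dev-set F1,
        # which is the number reported in the table above.
        if f1 > best_f1:
            best_f1 = f1
            save_checkpoint(model)
    return best_f1
```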
Some code is borrowed from DrQA, a cool project on reading comprehension.
TODO: