This repository contains the code and configuration files for training the models presented in: A Syntax-aware Multi-task Learning Framework for Chinese Semantic Role Labeling
If you have the CPB1.0 and CoNLL-2009 Chinese data, you can convert the original format into the JSON format with the provided scripts.
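The exact JSON schema is defined by the scripts, not documented here; the following is only a rough sketch of how such a conversion might look, assuming the standard CoNLL-2009 column layout and illustrative field names (words, pos, heads, deprels) that may differ from what the scripts actually emit:

```python
# Hypothetical sketch of a CoNLL-2009 -> JSON conversion; the repo's actual
# scripts and target schema may differ, and the field names below are
# illustrative only.
import json
import sys

def read_conll09(path):
    """Yield sentences as lists of token columns from a CoNLL-2009 file."""
    sentence = []
    with open(path) as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                if sentence:
                    yield sentence
                    sentence = []
            else:
                sentence.append(line.split("\t"))
    if sentence:
        yield sentence

for sentence in read_conll09(sys.argv[1]):
    record = {
        "words": [tok[1] for tok in sentence],       # FORM column
        "pos": [tok[5] for tok in sentence],         # predicted POS (PPOS)
        "heads": [int(tok[9]) for tok in sentence],  # predicted heads (PHEAD)
        "deprels": [tok[11] for tok in sentence],    # predicted relations (PDEPREL)
    }
    print(json.dumps(record, ensure_ascii=False))
```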
To use our code, you need PyTorch >= 0.4.0.
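A quick way to check the installed version (assuming a standard PyTorch install):

```python
import torch
print(torch.__version__)  # should print 0.4.0 or later
```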
To train our neural models, first edit train.sh and config.json. Then run
nohup ./train.sh 0 > log.txt 2>&1 &
where 0 is the GPU id. A corresponding example is given in exp-baseline-MTL-IIR/
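For illustration only, a config.json could be generated as in the sketch below; every key here is an assumption made for this sketch, so consult the example shipped under exp-baseline-MTL-IIR/ for the real schema:

```python
# Hypothetical sketch: the keys below are assumptions for illustration,
# not the actual schema of this repo's config.json.
import json

config = {
    "train_file": "data/train.json",  # assumed data paths
    "dev_file": "data/dev.json",
    "word_dim": 100,                  # assumed hyper-parameters
    "lstm_hidden_dim": 300,
    "batch_size": 32,
    "learning_rate": 0.001,
}
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```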
To test, run the predict.sh provided in the exp-baseline-MTL-IIR/ directory:
./predict.sh 0
To evaluate the end-to-end results, you need to run srl-eval.pl twice, swapping the argument order:
srl-eval.pl gold_file sys_file
srl-eval.pl sys_file gold_file
The recall reported by the first command is the true recall, and the recall reported by the second command (with the files swapped) is the true precision.
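srl-eval.pl does not combine the two numbers for you; a minimal sketch of the final F1 computation, with placeholder scores in place of the ones you read off the two runs:

```python
# Combine the two reported recalls (true recall and true precision) into F1.
# The scores below are placeholders; substitute the values from your runs.
recall = 83.7     # recall reported by: srl-eval.pl gold_file sys_file
precision = 85.2  # recall reported by: srl-eval.pl sys_file gold_file
f1 = 2 * precision * recall / (precision + recall)
print("F1 = %.2f" % f1)
```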
We put Dan Bikel's comparer in the scripts directory. The workflow is as follows:
1. Convert each system's output (*.output, in CoNLL format) into the evalb format:
python2 each_sentence_analysis.py A.output > A.evalb
python2 each_sentence_analysis.py B.output > B.evalb
2. Move into the significance_test directory and run the comparer to get the p-value for precision:
perl compare.pl -n 10000 A.evalb B.evalb
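Here -n 10000 sets the number of random shuffling iterations used by the comparer; by the usual convention, a p-value below 0.05 indicates that the difference between systems A and B is statistically significant.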