This is an unofficial implementation of the paper Paragraph-level Neural Question Generation with Maxout Pointer and Gated Self-attention Networks. I implemented it in PyTorch to reproduce results similar to those reported in the paper. You can find the checkpoint of the pretrained model here.
This code is written in Python. Dependencies:
- python >= 3.6
- pytorch >= 1.4
- nltk
- tqdm
- pytorch_scatter
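A throwaway check (not part of the repository) to confirm the environment satisfies these requirements; note that pytorch_scatter is imported as `torch_scatter`:

```python
# Quick environment check (not part of the repository).
import sys
import torch
import nltk, tqdm           # just verifying they import
import torch_scatter        # pytorch_scatter installs under this import name

print("python", sys.version.split()[0])    # expected >= 3.6
print("torch", torch.__version__)          # expected >= 1.4
print("CUDA available:", torch.cuda.is_available())
```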
Download the GloVe embeddings and the SQuAD v1.1 dataset:

```bash
mkdir squad
wget http://nlp.stanford.edu/data/glove.840B.300d.zip -O ./data/glove.840B.300d.zip
unzip ./data/glove.840B.300d.zip -d ./data/   # extract into ./data so the preprocessing script can find it
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json -O ./squad/train-v1.1.json
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json -O ./squad/dev-v1.1.json
```

Then preprocess the data:

```bash
cd data
python process_data.py
```
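Optionally, a throwaway sketch like the one below (not part of the repository; it only assumes the paths created by the commands above) can confirm that the downloads succeeded before preprocessing:

```python
import json

# Sanity check of the downloaded files; process_data.py does the real preprocessing.
with open("./squad/train-v1.1.json") as f:
    squad = json.load(f)
num_questions = sum(
    len(para["qas"])
    for article in squad["data"]
    for para in article["paragraphs"]
)
print(f"SQuAD v1.1 train questions: {num_questions}")  # expected roughly 87k

with open("./data/glove.840B.300d.txt", encoding="utf-8") as f:
    first = f.readline().rstrip("\n").split(" ")
print(f"GloVe token '{first[0]}' has {len(first) - 1} dimensions")  # expected 300
```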
You may need to adjust the configuration in config.py. To train the model, set train = True and choose the GPU device in config.py.
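For orientation, the flags mentioned above might look roughly like the sketch below; the exact names and structure in the actual config.py may differ:

```python
# Hypothetical sketch of config.py; field names are assumptions, not the
# repository's actual definitions.
train = True        # True to train, False to run decoding/evaluation
device = "cuda:0"   # GPU device used for training
```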
The evaluation script is taken from this repository.
```bash
cd qgevalcap
python2 eval.py --out_file prediction_file --src_file src_file --tgt_file target_file
```
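The qgevalcap scripts report the official metrics. As an informal cross-check, a corpus-level BLEU can also be computed with nltk (already a dependency). This is only a rough approximation, assuming both files contain one whitespace-tokenized question per line, and it will not exactly match the qgevalcap numbers:

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_from_files(pred_path, tgt_path):
    """Rough corpus BLEU-4 between files with one question per line."""
    with open(pred_path) as f:
        hyps = [line.split() for line in f]
    with open(tgt_path) as f:
        refs = [[line.split()] for line in f]  # one reference per hypothesis
    smooth = SmoothingFunction().method1
    return corpus_bleu(refs, hyps, weights=(0.25, 0.25, 0.25, 0.25),
                       smoothing_function=smooth)

# Example with the placeholder file names from the command above:
# print(bleu_from_files("prediction_file", "target_file"))
```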
| BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 |
|--------|--------|--------|--------|
| 45.22  | 29.94  | 22.01  | 16.76  |