This repository contains the code for our EMNLP 2020 paper:
Zixiang Ding, Rui Xia, Jianfei Yu. End-to-End Emotion-Cause Pair Extraction based on Sliding Window Multi-Label Learning. EMNLP 2020.
Please cite our paper if you use this code.
- Python 2 (tested on python 2.7.15)
- TensorFlow 1.13.1
- BERT (the pretrained "BERT-Base, Chinese" model is required; our model is built on top of this implementation: https://github.com/google-research/bert; see the download sketch after this list)
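
If you do not already have the checkpoint, the following is a minimal download sketch. The URL is the "BERT-Base, Chinese" release link published in the google-research/bert README; the local file and directory names are assumptions, so adjust them to match how `main.py` expects the checkpoint to be laid out.

```python
# Minimal sketch for fetching "BERT-Base, Chinese" (Python 2, matching the repo's environment).
# The URL is from the google-research/bert README; local names are placeholders.
import urllib
import zipfile

URL = "https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip"
urllib.urlretrieve(URL, "chinese_L-12_H-768_A-12.zip")  # use urllib.request.urlretrieve on Python 3
with zipfile.ZipFile("chinese_L-12_H-768_A-12.zip") as zf:
    zf.extractall(".")  # extracts the vocab, config, and checkpoint files
```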
Run:

```bash
python main.py
```
Experimental results under two different data split settings:
- 10-fold cross-validation, following NUSTM/ECPE (already reported in our paper)
| Approach | Emotion-Cause Pair Ext. |  |  | Emotion Ext. |  |  | Cause Ext. |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | P | R | F1 | P | R | F1 | P | R | F1 |
| ECPE-MLL(ISML) | 0.7090 | 0.6441 | 0.6740 | 0.8582 | 0.8429 | 0.8500 | 0.7248 | 0.6702 | 0.6950 |
| ECPE-MLL(BERT) | 0.7700 | 0.7235 | 0.7452 | 0.8608 | 0.9191 | 0.8886 | 0.7382 | 0.7912 | 0.7630 |
- Randomly splitting the data into train/validation/test sets in an 8:1:1 proportion, repeated 20 times, following HLT-HITSZ/TransECPE
| Approach | Emotion-Cause Pair Ext. |  |  | Emotion Ext. |  |  | Cause Ext. |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | P | R | F1 | P | R | F1 | P | R | F1 |
| ECPE-MLL(ISML) | 0.6725 | 0.6248 | 0.6471 | 0.8415 | 0.8212 | 0.8310 | 0.6864 | 0.6443 | 0.6639 |
| ECPE-MLL(BERT) | 0.7488 | 0.6976 | 0.7220 | 0.8465 | 0.8990 | 0.8717 | 0.7051 | 0.7704 | 0.7358 |
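
For reference, the P/R/F1 numbers above are pair-level (and clause-level) precision, recall, and F1; in ECPE work these are typically computed set-wise, counting a predicted (emotion clause, cause clause) pair as correct only if it exactly matches a gold pair. The sketch below illustrates that standard metric; the function name and pair representation are illustrative, not this repository's actual evaluation code.

```python
def pair_prf(pred_pairs, gold_pairs):
    """Set-based P/R/F1 over (emotion_clause_idx, cause_clause_idx) pairs.
    Illustrative helper, not taken from this repository."""
    pred, gold = set(pred_pairs), set(gold_pairs)
    correct = len(pred & gold)
    p = correct / float(len(pred)) if pred else 0.0
    r = correct / float(len(gold)) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

# Example: one gold pair (emotion clause 3, cause clause 2) and two predictions
print(pair_prf([(3, 2), (4, 1)], [(3, 2)]))  # (0.5, 1.0, 0.666...)
```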