Code for our paper *LCF: A Local Context Focus Mechanism for Aspect-Based Sentiment Classification*.
This is the up-to-date version, which has been migrated from pytorch-pretrained-bert to pytorch-transformers; an earlier version of the LCF-BERT models can be found in ABSA-PyTorch.
PyTorch implementations (pytorch-transformers)
Aspect-based Sentiment Analysis (GloVe/BERT)
Chinese Aspect-based Sentiment Analysis (中文ABSA)
A newer paper targets Chinese- and multilingual-oriented Aspect Polarity Classification and Aspect Term Extraction: *A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction*; its code is available at LCF-ATEPC.
- python 3.7 (recommended)
- pytorch >= 1.0
- pytorch-transformers >= 1.2.0
- To unleash the full performance of the LCF-BERT models, a GTX 1080Ti or another GPU with large memory is required.
- In fact, the memory consumption of this model can be significantly optimized in several ways (minimized to about 6 GB).
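A quick sanity check for the requirements above (a hedged helper: the package names follow the list, but the function itself is illustrative and not part of this repository):

```python
# Hedged helper: verify the interpreter and key packages meet the
# requirements listed above. Not part of the repository's code.
import importlib.util
import sys

def meets_requirements(min_python=(3, 7), packages=("torch", "pytorch_transformers")):
    """Return (ok, missing): whether the interpreter is new enough and
    all listed packages are importable, plus the names of missing ones."""
    ok = sys.version_info[:2] >= min_python
    missing = [p for p in packages if importlib.util.find_spec(p) is None]
    return ok and not missing, missing
```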
- SemEval-2014 (Restaurant and Laptop datasets)
- ACL Twitter dataset
- Chinese Review Datasets
Train a model from the command line:

```sh
python train.py --model lcf_bert --dataset laptop --SRD 3 --local_context_focus cdm --use_single_bert
```

or train a batch of experiments:

```sh
python batch_training.py --config experiments_glove.json
```

Try setting `batch_size=8` or `use_single_bert=True` if an out-of-memory error occurs.
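The `--SRD` and `--local_context_focus` options control the paper's local context focus mechanism: tokens whose semantic-relative distance (SRD) to the aspect term exceeds the threshold are masked out (CDM) or linearly down-weighted (CDW). A minimal pure-Python sketch of the idea (function names and details are illustrative, not the repository's implementation):

```python
def srd(i, aspect_begin, aspect_len):
    """Semantic-relative distance of token i to the aspect span."""
    if i < aspect_begin:
        return aspect_begin - i
    aspect_end = aspect_begin + aspect_len - 1
    return max(i - aspect_end, 0)

def cdm_mask(seq_len, aspect_begin, aspect_len, srd_threshold):
    """Context Dynamic Mask: 1.0 keeps a token, 0.0 masks it out."""
    return [1.0 if srd(i, aspect_begin, aspect_len) <= srd_threshold else 0.0
            for i in range(seq_len)]

def cdw_weights(seq_len, aspect_begin, aspect_len, srd_threshold):
    """Context Dynamic Weighting: distant tokens are linearly down-weighted."""
    weights = []
    for i in range(seq_len):
        d = srd(i, aspect_begin, aspect_len)
        if d <= srd_threshold:
            weights.append(1.0)
        else:
            weights.append(1.0 - (d - srd_threshold) / seq_len)
    return weights
```

With `--SRD 3`, only tokens within distance 3 of the aspect span keep full weight; in the models these masks/weights are applied to BERT's hidden states, not shown here.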
We have made every effort to make our research reproducible. However, the performance of the GloVe-embedding-based models fluctuates, and even slight changes in the model structure can influence performance. We will try to alleviate this problem in future research.
The performance below is based on the PyTorch pre-trained bert-base-uncased model.
The results are the best we observed, so try different random seeds to reproduce them. (The code has been refactored and migrated from pytorch-pretrained-bert to pytorch-transformers, and not all experiments have been reproduced under the current code version.)
| Models | Restaurant (acc) | Laptop (acc) | Twitter(acc) |
|---|---|---|---|
| LCF-Glove-CDM | 82.50 | 76.02 | 72.25 |
| LCF-Glove-CDW | 81.61 | 75.24 | 71.82 |
| LCF-BERT-CDM | 86.52 | 82.29 | 76.45 |
| LCF-BERT-CDW | 87.14 | 82.45 | 77.31 |
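Since the results are seed-sensitive, it helps to fix every random source explicitly before training. A hedged helper (our own sketch; the repository's scripts may seed differently):

```python
# Hedged sketch: seed Python's RNG, plus numpy/torch when installed.
# The guards keep the helper runnable without those packages.
import random

def set_seed(seed):
    """Fix all common random sources for a reproducible run."""
    random.seed(seed)
    try:
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
    except ImportError:
        pass
```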
This repository can achieve superior performance with BERT-ADA pre-trained models. Learn to train domain-adapted BERT models from domain-adapted-atsc, and place the pre-trained models in bert_pretrained_models. The results in the following table are the best of five training runs (random seeds 0, 1, 2, 3, 4); refer to the training log to reproduce them, or try other random seeds to explore different results. The Twitter results are based on the BERT model domain-adapted on the Restaurant corpus.
| Models | Restaurant (acc) | Laptop (acc) | Twitter(acc) |
|---|---|---|---|
| LCF-BERT-CDM | 89.11 | 82.92 | 77.89 |
| LCF-BERT-CDW | 89.38 | 82.76 | 77.17 |
| LCF-BERT-Fusion | 88.95 | 82.45 | 77.75 |
The state-of-the-art benchmarks for the ABSA task can be found at NLP-progress (see the SemEval-2014 subtask 4 section).
This work builds on the ABSA-PyTorch and pytorch-transformers repositories. Thanks to their authors for their devotion, and thanks to everyone who offered assistance. Feel free to report any bugs or to discuss them with us.
If this repository is helpful to you, please cite our paper:
```bibtex
@article{zeng2019lcf,
  title={LCF: A Local Context Focus Mechanism for Aspect-Based Sentiment Classification},
  author={Zeng, Biqing and Yang, Heng and Xu, Ruyang and Zhou, Wu and Han, Xuli},
  journal={Applied Sciences},
  volume={9},
  number={16},
  pages={3389},
  year={2019},
  publisher={Multidisciplinary Digital Publishing Institute}
}
```