Team 8:
This project compares the performance of multiple BERT-based models on the task of emotion recognition in Arabic tweets. We compare seven BERT models:
- AraBERT-base
- AraBERT-Twitter-base
- MARBERT
- CAMELBERT-DA
- CAMELBERT-MSA
- CAMELBERT-MSA-16th
- mBERT
All experiments are available in the Experiments directory. The model that achieved the highest accuracy is MARBERT; its notebook, MARBERT.ipynb, includes some inference examples at the end.
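At inference time, a fine-tuned model's classification head produces one logit per emotion class; the predicted emotion is the argmax after a softmax. A minimal sketch of that final step (the label names and logit values below are hypothetical, for illustration only — the actual labels come from the dataset used in the notebooks):

```python
import math

# Hypothetical emotion label set; replace with the dataset's actual labels.
LABELS = ["anger", "joy", "sadness", "fear"]

def predict_label(logits):
    """Map raw classifier logits to (emotion label, probability) via softmax + argmax."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # shift by max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = probs.index(max(probs))
    return LABELS[best], probs[best]

label, prob = predict_label([0.2, 3.1, -1.0, 0.5])
print(label)  # joy
```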
If you wish to experiment with other models or rerun an experiment, open the Jupyter notebook in Google Colab or Kaggle, connect to a GPU, and paste the desired Hugging Face model path into the `model_path` parameter. If you don't want to adjust any other parameters, simply run all the cells: the dataset will be automatically imported and preprocessed, and the model will be fine-tuned and evaluated on it. Check the Report for more details :)
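For convenience, the seven compared models can be kept in a small lookup table, so switching experiments is a one-line change to `model_path`. The Hugging Face Hub IDs below are best-guess assumptions, not taken from the notebooks — verify each one on the Hub before running:

```python
# Assumed Hugging Face Hub IDs for the seven models compared in this project
# (verify on huggingface.co before use).
MODEL_PATHS = {
    "AraBERT-base": "aubmindlab/bert-base-arabertv02",
    "AraBERT-Twitter-base": "aubmindlab/bert-base-arabertv02-twitter",
    "MARBERT": "UBC-NLP/MARBERT",
    "CAMELBERT-DA": "CAMeL-Lab/bert-base-arabic-camelbert-da",
    "CAMELBERT-MSA": "CAMeL-Lab/bert-base-arabic-camelbert-msa",
    "CAMELBERT-MSA-16th": "CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth",
    "mBERT": "bert-base-multilingual-cased",
}

# Paste the chosen ID into the notebook's model_path parameter:
model_path = MODEL_PATHS["MARBERT"]
print(model_path)  # UBC-NLP/MARBERT
```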