This project develops a sentiment analysis model for Arabic text using a hybrid architecture that combines a transformer-based model (AraBERT) with an LSTM. It classifies Arabic text as positive, negative, or neutral.
Arabic sentiment analysis presents unique challenges due to the language's morphological complexity, dialectal variation, and the scarcity of annotated data. This project explores state-of-the-art techniques to address these challenges and improve sentiment classification accuracy.
- Transformer-Based Model: Integrated AraBERT for effective contextual representation.
- Hybrid Approach: Combined AraBERT's contextual outputs with an LSTM for improved performance (a minimal sketch follows this list).
- Comprehensive Review: Included a literature review of recent advancements in Arabic NLP.
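The hybrid architecture can be sketched roughly as follows. This is a minimal illustration, assuming the `aubmindlab/bert-base-arabertv2` checkpoint, three sentiment classes, and illustrative hyperparameters (LSTM hidden size, bidirectionality); it outlines the approach rather than reproducing the project's exact implementation.

```python
# Minimal sketch of the hybrid AraBERT + LSTM classifier.
# Assumptions: aubmindlab/bert-base-arabertv2 checkpoint, 3 sentiment classes,
# and illustrative hyperparameters; not the project's exact implementation.
import torch
import torch.nn as nn
from transformers import AutoModel

class AraBertLstmClassifier(nn.Module):
    def __init__(self, model_name="aubmindlab/bert-base-arabertv2",
                 lstm_hidden=256, num_classes=3):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.lstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from AraBERT.
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        token_embeddings = outputs.last_hidden_state  # (batch, seq_len, hidden)
        # Re-encode the token sequence with a BiLSTM.
        _, (h_n, _) = self.lstm(token_embeddings)
        # Concatenate the final forward and backward hidden states.
        sentence_repr = torch.cat((h_n[-2], h_n[-1]), dim=1)
        # Logits over {negative, neutral, positive}.
        return self.classifier(sentence_repr)
```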
- Programming Language: Python
- Libraries/Frameworks: PyTorch, Hugging Face Transformers, NumPy, Pandas
- Models: AraBERT, LSTM (see the inference example after this list)
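A short usage example, assuming the `AraBertLstmClassifier` sketch above and the same tokenizer checkpoint; the label order is an assumption for illustration only.

```python
# Illustrative inference with the AraBertLstmClassifier sketched above.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv2")
model = AraBertLstmClassifier()  # class from the sketch above
model.eval()

text = "الخدمة كانت ممتازة"  # "The service was excellent"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    logits = model(inputs["input_ids"], inputs["attention_mask"])

labels = ["negative", "neutral", "positive"]  # assumed label order
print(labels[logits.argmax(dim=1).item()])
```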
Model Performance (an evaluation sketch follows this list):
- Accuracy: To be updated.
- F1-Score: To be updated.
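Once evaluation results are available, accuracy and macro F1 can be computed as in the sketch below; scikit-learn is assumed here and is not among the listed dependencies, and the labels shown are placeholders.

```python
# Hypothetical metric computation; scikit-learn is an assumed extra dependency.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 2, 1, 2, 0]  # placeholder gold labels
y_pred = [0, 2, 2, 2, 0]  # placeholder model predictions

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Macro F1:", f1_score(y_true, y_pred, average="macro"))
```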
This project is licensed under the MIT License - see the LICENSE file for details.