Although Large Language Models (LLMs) have demonstrated significant advancements in natural language processing tasks, their effectiveness in the classification and transformation of abusive text into non-abusive versions remains an area ripe for exploration. In this study, we aim to use LLMs to transform abusive text (tweets and reviews) featuring hate speech and swear words into non-abusive text where the message, i.e. both the semantics and the sentiment, is retained. We evaluate the performance of two state-of-the-art LLMs, Gemini and Groq, on their ability to identify abusive text. We then use two additional LLMs to transform the abusive texts so that the result is clean of abusive and inappropriate content but maintains a similar level of sentiment and semantics, i.e. the transformed text needs to preserve its message. Next, we evaluate the raw and transformed datasets with sentiment analysis and semantic analysis. Our results show that Groq produces vastly different results compared with the other models, and we have identified many similarities between GPT-4o and DeepSeek-V3.
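As an illustration of the evaluation step, the following is a minimal sketch of comparing sentiment and semantic similarity between a raw review and its transformed version. The study itself relies on LLM-based sentiment and semantic analyses; the toy lexicon, bag-of-words cosine similarity, and example texts below are simplified stand-ins, not the project's actual method.

```python
import math
import re
from collections import Counter

# Toy polarity lexicon (illustrative assumption, not the study's sentiment model).
LEXICON = {"great": 1, "good": 1, "love": 1, "terrible": -1, "awful": -1, "hate": -1}

def tokenize(text: str) -> list[str]:
    """Lowercase word tokens, punctuation stripped."""
    return re.findall(r"[a-z']+", text.lower())

def sentiment_score(text: str) -> int:
    """Sum of lexicon polarities over the tokens (a crude sentiment proxy)."""
    return sum(LEXICON.get(tok, 0) for tok in tokenize(text))

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity of bag-of-words count vectors (a crude semantic proxy)."""
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Hypothetical raw (mildly abusive) and transformed reviews.
raw = "this movie is awful, total garbage"
clean = "this movie is awful and disappointing"

# The transformed text should keep a similar (here, negative) sentiment...
print(sentiment_score(raw), sentiment_score(clean))  # -1 -1
# ...while staying semantically close to the original.
print(round(cosine_similarity(raw, clean), 2))  # 0.67
```

In the study, the same comparison is carried out with full sentiment and semantic analyses over the raw and transformed datasets rather than this lexicon-and-counts shortcut.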
pinglainstitute/LLM-reviewtransformation
About
Analysis of the transformation of abusive reviews by large language models using sentiment and semantic analyses.