- Refer to the `./data/2020_processed` directories for the train and test splits
- One pickle file for each language
- Each pickle file holds a dictionary
- The keys map to lists containing the following (a loading sketch follows the key list below):
  - `tweet_id`: The provided tweet ID
  - `task_1`: Label for Task 1
  - `task_2`: Label for Task 2
  - `hasoc_id`: Provided ID for the HASOC task
  - `full_tweet`: The complete tweet as is
  - `tweet_raw_text`: Pure tweet text without hashtags, smileys, etc.
  - `hashtags`: Hashtags
  - `smiley`: Smileys
  - `emoji`: Emojis
  - `url`: URLs
  - `mentions`: Mentions
  - `numerals`: Numbers
  - `reserved_word`: Reserved words
  - `emotext`: A textual description of all emojis
  - `segmented_hash`: The hashtag text segmented into words
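
A minimal loading sketch for one of these pickles. The per-language file name used below is hypothetical; substitute whatever names are actually present under `./data/2020_processed`:

```python
import pickle

import pandas as pd

# Hypothetical file name; point this at the actual per-language pickle
# in ./data/2020_processed (train or test split).
with open("./data/2020_processed/train/english_train.pkl", "rb") as f:
    data = pickle.load(f)

print(sorted(data.keys()))                     # tweet_id, task_1, task_2, hasoc_id, ...
print(data["full_tweet"][0], "->", data["task_1"][0])

# Assuming every key maps to a list of equal length, a DataFrame gives a handy view.
df = pd.DataFrame(data)
print(df[["tweet_id", "tweet_raw_text", "task_1", "task_2"]].head())
```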
- Perspective API features for English and German (a request sketch follows this list)
- XLM-RoBERTa model trained in a multilingual setting (a fine-tuning sketch follows this list)
- Other Transformer-based models, including BERT and DistilBERT
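
A sketch of how Perspective API scores could be fetched through the public `comments:analyze` endpoint. The attribute set, the use of `requests`, and the key handling are illustrative assumptions, not necessarily how the features were extracted in this repository:

```python
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # placeholder; a real Perspective API key is required

# Assumed attribute set for illustration only.
ATTRIBUTES = ["TOXICITY", "SEVERE_TOXICITY", "INSULT", "PROFANITY", "THREAT"]

def perspective_scores(text, lang="en"):
    """Return a dict mapping each requested attribute to its summary score."""
    body = {
        "comment": {"text": text},
        "languages": [lang],
        "requestedAttributes": {attr: {} for attr in ATTRIBUTES},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=body)
    resp.raise_for_status()
    scores = resp.json()["attributeScores"]
    return {attr: scores[attr]["summaryScore"]["value"] for attr in ATTRIBUTES}

# Example: score the raw text of one tweet.
print(perspective_scores("an example tweet", lang="en"))
```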
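
And a minimal sketch of the multilingual Transformer setup with Hugging Face `transformers`. The checkpoint (`xlm-roberta-base`), the binary label count, and the single gradient step are illustrative assumptions, not the exact training configuration used here:

```python
import torch
from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification

# Binary Task 1 head (hateful/offensive vs. not); a larger num_labels would cover Task 2.
tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2
)

# Toy multilingual batch with made-up labels.
texts = ["an English tweet", "ein deutscher Tweet", "एक हिंदी ट्वीट"]
labels = torch.tensor([0, 1, 0])

batch = tokenizer(texts, padding=True, truncation=True, max_length=128,
                  return_tensors="pt")
outputs = model(**batch, labels=labels)
outputs.loss.backward()  # an optimizer step would follow in a real training loop
```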
Detecting and classifying instances of hate in social media text has been a problem of interest in Natural Language Processing in recent years. Our work leverages state-of-the-art Transformer language models to identify hate speech in a multilingual setting. Capturing the intent of a post or comment on social media involves careful evaluation of its language style, semantic content and additional pointers such as hashtags and emojis. We look at the problem of identifying whether a Twitter post is hateful and offensive or not. We further classify the detected toxic content into one of the following three classes: (a) Hate Speech (HATE), (b) Offensive (OFFN) and (c) Profane (PRFN). With a pre-trained multilingual Transformer-based masked LM at the base, we are able to successfully identify and classify hate speech across multiple languages. On the provided testing corpora, we achieve Macro F1 scores of 90.29, 81.87 and 75.40 for English, German and Hindi respectively for hate speech detection, and of 60.70, 53.28 and 49.74 for fine-grained classification. In our experiments, we show the efficacy of Perspective API features for hate speech classification and the effects of exploiting a multilingual training scheme.
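
The scores above are macro-averaged F1. As a toy illustration of the metric (made-up labels, not the actual system outputs), scikit-learn computes it as:

```python
from sklearn.metrics import f1_score

# Made-up labels over the three fine-grained classes named in the abstract.
y_true = ["HATE", "OFFN", "PRFN", "HATE", "OFFN", "PRFN"]
y_pred = ["HATE", "PRFN", "PRFN", "OFFN", "OFFN", "HATE"]

# Macro F1 averages the per-class F1 scores, weighting every class equally.
print(f1_score(y_true, y_pred, average="macro"))
```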