The models classify comments into six toxicity categories: toxic, severe toxic, obscene, threat, insult, and identity hate. After data collection and preprocessing (lemmatization, lexicon normalization, and TF-IDF feature extraction), we train and test the models using ML algorithms and evaluate them with ROC curves and the Hamming score.
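The pipeline described above can be sketched with scikit-learn: TF-IDF features feeding a one-vs-rest classifier, one binary model per toxicity label. This is a minimal illustrative sketch, not the repository's actual code; the tiny corpus, the label assignments, and the choice of logistic regression are assumptions.

```python
# Sketch of a TF-IDF + multi-label toxicity classifier.
# Corpus, labels, and model choice are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Tiny hypothetical corpus (not real data).
texts = [
    "you are awful and stupid",
    "have a great day everyone",
    "i will hurt you",
    "thanks for the helpful answer",
]
# Multi-label targets: one row per comment, one column per label.
y = np.array([
    [1, 0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 1],
    [0, 0, 0, 0, 0, 0],
])

# TF-IDF turns each comment into a sparse term-weight vector;
# OneVsRestClassifier trains an independent binary classifier per label.
model = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(texts, y)

# Per-label probabilities for a new comment: shape (n_samples, n_labels).
probs = model.predict_proba(["you are stupid"])
print(probs.shape)
```

In a real setup the lemmatization and lexicon-normalization steps would run before vectorization, and evaluation would use ROC curves and the Hamming score on a held-out test split.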
Updated Dec 29, 2023 · Jupyter Notebook