# lexicon-normalization

1 public repository matches this topic.

Models that classify toxic comments into six labels: toxic, severe toxic, insult, threat, obscene, and identity hate. After data collection and preprocessing with lemmatization, lexicon normalization, and TF-IDF features, the models are trained and tested with machine-learning algorithms and evaluated using ROC curves and the Hamming score.

  • Updated Dec 29, 2023
  • Jupyter Notebook
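The preprocessing steps named in the description can be illustrated with a minimal, stdlib-only sketch: lexicon normalization (mapping informal variants to canonical forms) followed by TF-IDF weighting. The lexicon entries and documents below are illustrative assumptions, not the repository's actual mapping or data.

```python
import math
from collections import Counter

# Illustrative lexicon of informal variants -> canonical forms (assumed entries).
LEXICON = {"u": "you", "r": "are", "gr8": "great", "2": "to"}

def normalize(text):
    """Lexicon normalization: lowercase, tokenize, and canonicalize each token."""
    return [LEXICON.get(tok, tok) for tok in text.lower().split()]

def tfidf(docs):
    """Compute TF-IDF weights for a list of tokenized documents."""
    n = len(docs)
    # Document frequency: number of documents each term appears in.
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return weights

docs = [normalize("u r gr8"), normalize("you are awful")]
w = tfidf(docs)
```

Terms that occur in every document (here "you" and "are") receive a weight of zero, since log(n/df) vanishes, which is exactly why TF-IDF downweights uninformative tokens before classification. A real pipeline would typically use `sklearn.feature_extraction.text.TfidfVectorizer` and an NLTK lemmatizer instead.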
