2018, EMNLP, Reducing Gender Bias in Abusive Language Detection #12
@Rounique:
As I mentioned, I haven't done the summaries yet. I'll do them after I finish this week's assignments.
@Rounique:

Title: Reducing Gender Bias in Abusive Language Detection

Introduction: In this paper, gender bias is measured in models trained on abusive language datasets, and several methods are introduced to mitigate these biases. Bias is measured with a generated unbiased test set, and the mitigation methods are: (1) debiased word embeddings, (2) gender swap data augmentation, and (3) fine-tuning with a larger corpus.

Dataset:

Measuring Gender Biases:

Mitigating Bias:
- Word Embeddings (DE)
- Gender Swap (GS)
- Bias fine-tuning (FT)

Metric used:

Conclusion:

Future Work:

Codes:
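To make the gender swap (GS) idea concrete, here is a minimal Python sketch of that augmentation step: each training example is kept and paired with a copy in which gendered tokens are swapped, under the same label. This is not the paper's implementation; the pair list, tokenizer, and the names `GENDER_PAIRS`, `gender_swap`, and `augment` are all illustrative assumptions.

```python
import re

# Illustrative subset of gendered word pairs; the paper uses a larger
# curated list. Ambiguous cases like "his"/"her" need part-of-speech
# disambiguation and are omitted in this sketch.
GENDER_PAIRS = [
    ("he", "she"), ("him", "her"),
    ("man", "woman"), ("men", "women"),
    ("boy", "girl"), ("father", "mother"),
    ("brother", "sister"), ("son", "daughter"),
]

# Build a symmetric swap table (both directions).
SWAP = {}
for m, f in GENDER_PAIRS:
    SWAP[m] = f
    SWAP[f] = m

def gender_swap(text):
    """Return a copy of `text` with gendered tokens swapped.

    Tokenization is a simple regex over alphabetic runs; a real
    pipeline would use its own tokenizer.
    """
    def repl(match):
        token = match.group(0)
        swapped = SWAP.get(token.lower())
        if swapped is None:
            return token
        # Preserve simple initial capitalization.
        return swapped.capitalize() if token[0].isupper() else swapped

    return re.sub(r"[A-Za-z]+", repl, text)

def augment(dataset):
    """Keep each (text, label) example and add its gender-swapped copy."""
    augmented = []
    for text, label in dataset:
        augmented.append((text, label))
        augmented.append((gender_swap(text), label))
    return augmented

if __name__ == "__main__":
    data = [("He is a terrible driver", 0)]
    print(augment(data))
    # [('He is a terrible driver', 0), ('She is a terrible driver', 0)]
```

The intended effect is that the classifier sees abusive and non-abusive examples with both male and female identity terms at equal rates, so it cannot use gendered tokens as a shortcut feature.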
The summary has been added.