2019, KDD, Fairness-Aware Ranking in Search & RecSys with application to LinkedIn Talent Search #26
This paper presents a framework for detecting and mitigating algorithmic bias in ranking and recommendation systems. In our work we have a predicted list of authors (T'); we first show that bias already exists in it, and then reduce or mitigate that bias by re-ranking the list. In the LinkedIn Talent Search setting, the fairness requirements are given as a desired distribution over protected attributes (gender, age, or both), and the proposed algorithms re-rank the results to satisfy the fairness constraints.

Measures for Bias Evaluation

Skew@k computes the ratio of the observed proportion of top-k candidates with attribute value ai to the desired proportion of ai. The closer this ratio is to 1 (equivalently, the closer its log is to 0), the less unfairness exists in the distribution. Skew has two disadvantages: it is defined for a single attribute value at a time, and it is computed at a single cutoff k, so it does not account for positions within the ranking.
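As a minimal sketch of the Skew measure (function and variable names are my own, not from the paper), where `ranking` is an ordered list of candidate ids and `attr` maps each candidate to its attribute value:

```python
import math

def skew_at_k(ranking, attr, value, desired_prop, k):
    """Skew@k: log of the ratio between the observed proportion of
    candidates with attribute `value` in the top-k and the desired
    proportion. 0 means the top-k matches the desired distribution."""
    observed = sum(1 for c in ranking[:k] if attr[c] == value) / k
    # small epsilon guards against log(0) when the value is absent from top-k
    return math.log(max(observed, 1e-12) / desired_prop)

def min_max_skew(ranking, attr, desired, k):
    """Min/max of Skew@k over all attribute values; `desired` maps
    each attribute value to its desired proportion."""
    skews = [skew_at_k(ranking, attr, v, p, k) for v, p in desired.items()]
    return min(skews), max(skews)
```

For example, a top-2 that is all male under a 50/50 desired split yields Skew@2 = log(1.0 / 0.5) = log 2 for the male value, and a large negative skew for the female value.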
To solve the first problem, two more measures are provided: since Skew is computed for only one attribute value, MinSkew@k and MaxSkew@k take the minimum and maximum of Skew@k over all attribute values. To solve the second problem, a ranking measure, NDKL (normalized discounted cumulative KL-divergence), is presented: it aggregates, with a logarithmic position discount, the KL divergence between the attribute distribution of each top-i prefix and the desired distribution. NDKL also has two disadvantages:
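The NDKL computation can be sketched as follows (a sketch under the assumption that `desired` maps attribute values to probabilities summing to 1; names are my own):

```python
import math

def kl_divergence(p, q):
    """KL divergence between two discrete distributions given as
    aligned lists of probabilities; 0*log(0) terms are skipped."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def ndkl(ranking, attr, desired):
    """NDKL: position-discounted average of the KL divergence between
    the attribute distribution of every top-i prefix and the desired
    distribution, normalized by the sum of the discount weights."""
    values = list(desired)
    total, z = 0.0, 0.0
    for i in range(1, len(ranking) + 1):
        top = ranking[:i]
        p = [sum(1 for c in top if attr[c] == v) / i for v in values]
        q = [desired[v] for v in values]
        weight = 1.0 / math.log2(i + 1)
        total += weight * kl_divergence(p, q)
        z += weight
    return total / z
```

A ranking that front-loads one attribute value scores a higher NDKL than one that alternates values, reflecting its larger prefix-level divergence from the desired distribution.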
@hosseinfani: good job!