This is the Python library with the core algorithms for FA*IR ranking.
To install fairsearchcore, simply use pip (or pipenv):
pip install fairsearchcore
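Or, if you prefer pipenv:
pipenv install fairsearchcore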
And that's it!
You need to import the package first:
import fairsearchcore as fsc
Creating and analyzing mtables:
k = 20 # number of topK elements returned (value should be between 10 and 400)
p = 0.25 # proportion of protected candidates in the topK elements (value should be between 0.02 and 0.98)
alpha = 0.1 # significance level (value should be between 0.01 and 0.15)
# create the Fair object
fair = fsc.Fair(k, p, alpha)
# create an mtable using alpha unadjusted
mtable = fair.create_unadjusted_mtable()
>> [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3]
# analytically calculate the fail probability
analytical = fair.compute_fail_probability(mtable)
>> 0.11517506930977106
# create an mtable using alpha adjusted
mtable = fair.create_adjusted_mtable()
>> [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
# again, analytically calculate the fail probability
analytical = fair.compute_fail_probability(mtable)
>> 0.10515247355215251
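An mtable encodes, for each position i, the minimum number of protected candidates that must appear within the top i positions for a ranking to pass the fairness test. Here is a minimal sketch of that check; the helper satisfies_mtable is made up for illustration and is not part of the library API:
# hypothetical helper, not part of fairsearchcore: checks a list of
# protected/non-protected flags against an mtable
def satisfies_mtable(protected_flags, mtable):
    protected_so_far = 0
    for flag, required in zip(protected_flags, mtable):
        protected_so_far += 1 if flag else 0
        if protected_so_far < required:  # too few protected candidates in this prefix
            return False
    return True
# a ranking with all protected candidates at the bottom violates the adjusted mtable above
satisfies_mtable([False] * 14 + [True] * 6, mtable)
>> False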
Generate random rankings and analyze them:
M = 10000 # number of rankings you want to generate (the experimental estimate gets more accurate with larger numbers)
# generate rankings using the simulator (M lists of k objects of class fairsearchcore.models.FairScoreDoc)
rankings = fsc.generate_rankings(M, k, p)
>> [[<FairScoreDoc [Protected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Protected]>,
<FairScoreDoc [Protected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>,
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Protected]>, <FairScoreDoc [Nonprotected]>,
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>,
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Protected]>, <FairScoreDoc [Nonprotected]>,
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>,
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Protected]>],...]
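As a quick sanity check, the overall share of protected candidates across the generated rankings should be close to p. This is a sketch assuming each generated FairScoreDoc exposes an is_protected attribute, matching the constructor arguments shown further below:
# fraction of protected candidates over all M * k generated documents; expect ~0.25
protected_share = sum(doc.is_protected for ranking in rankings for doc in ranking) / (M * k)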
# experimentally calculate the fail probability
experimental = fsc.compute_fail_probability(rankings, mtable)
>> 0.1025
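The experimental estimate should be close to the analytical value computed above; the exact difference varies between runs, since the rankings are random:
# compare the two estimates (with the sample values above, roughly 0.0027)
abs(analytical - experimental)
>> 0.00265247355215251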
Let's get the adjusted alpha (the significance level used internally to create the adjusted mtable):
# get alpha adjusted
alpha_adjusted = fair.adjust_alpha()
>> 0.07812500000000001
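If the adjusted mtable is indeed built by plugging this adjusted alpha into the regular mtable construction (an assumption based on the comment above, not a documented guarantee), the following check should hold:
# hypothetical sanity check: an unadjusted mtable built with the adjusted alpha
# should match the adjusted mtable from before
fair_adjusted = fsc.Fair(k, p, alpha_adjusted)
fair_adjusted.create_unadjusted_mtable() == fair.create_adjusted_mtable()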
Apply a fair re-ranking to a given ranking:
# import the FairScoreDoc class
from fairsearchcore.models import FairScoreDoc
# let's manually create an unfair ranking (False -> non-protected, True -> protected)
# in this example the first document (docid=20) has a score of 20, the last document (docid=1) a score of 1
unfair_ranking = [FairScoreDoc(20, 20, False), FairScoreDoc(19, 19, False), FairScoreDoc(18, 18, False),
FairScoreDoc(17, 17, False), FairScoreDoc(16, 16, False), FairScoreDoc(15, 15, False),
FairScoreDoc(14, 14, False), FairScoreDoc(13, 13, False), FairScoreDoc(12, 12, False),
FairScoreDoc(11, 11, False), FairScoreDoc(10, 10, False), FairScoreDoc(9, 9, False),
FairScoreDoc(8, 8, False), FairScoreDoc(7, 7, False), FairScoreDoc(6, 6, True),
FairScoreDoc(5, 5, True), FairScoreDoc(4, 4, True), FairScoreDoc(3, 3, True),
FairScoreDoc(2, 2, True), FairScoreDoc(1, 1, True)]
# let's check whether the ranking is considered fair; it should not be, since the first
# protected candidate appears only at position 15, while the mtable already requires
# one protected candidate within the top 9 positions
fair.is_fair(unfair_ranking)
>> False
# now re-rank the unfair ranking
re_ranked = fair.re_rank(unfair_ranking)
>> [<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>,
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>,
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Protected]>,
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>,
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>,
<FairScoreDoc [Protected]>, <FairScoreDoc [Protected]>, <FairScoreDoc [Protected]>,
<FairScoreDoc [Protected]>, <FairScoreDoc [Protected]>]
# now let's see if the new ranking is fair
fair.is_fair(re_ranked)
>> True
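To inspect which documents moved where, you can iterate over the result; this assumes FairScoreDoc exposes the id, score, and is_protected attributes passed to its constructor above:
# print the re-ranked order: document id, score, and protection status
for doc in re_ranked:
    print(doc.id, doc.score, doc.is_protected)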
Each function in the library is documented in the code.
- Clone this repository
git clone https://github.com/fair-search/fairsearchcore-python.git
- Change into the directory where you cloned the repository
cd WHERE_ITS_DOWNLOADED/fairsearchcore-python
- Use any IDE to work with the code
To run the tests, just run:
python setup.py test
Note: The simulator tests take a looong time to execute. Also, because the tests involve randomness, they can occasionally (though rarely) fail.
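Note that python setup.py test is deprecated in recent versions of setuptools; if it does not work for you, invoking the test suite directly with pytest should be an alternative (assuming the tests live in the repository's tests/ directory, which is the usual convention):
pytest tests/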
The FA*IR algorithm is described in this paper:
- Meike Zehlike, Francesco Bonchi, Carlos Castillo, Sara Hajian, Mohamed Megahed, Ricardo Baeza-Yates: "FA*IR: A Fair Top-k Ranking Algorithm". In Proceedings of the 2017 ACM Conference on Information and Knowledge Management (CIKM '17).
This code was developed by Ivan Kitanovski based on the paper. See the license file for more information.
For any questions, contact Meike Zehlike.
You can also see the FA*IR plug-in for Elasticsearch and the FA*IR search core Java library.