Fair search core for Python


This is the Python library with the core algorithms used to do FA*IR ranking.

Installation

To install fairsearchcore, simply use pip (or pipenv):

pip install fairsearchcore

And that's it!
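
If the installation succeeded, the package should import cleanly in a Python shell (a minimal smoke test):

import fairsearchcore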

Using it in your code

You need to import the package first:

import fairsearchcore as fsc

Creating and analyzing mtables (an mtable gives, for each position in the ranking, the minimum number of protected candidates required up to that position):

k = 20 # number of topK elements returned (value should be between 10 and 400)
p = 0.25 # proportion of protected candidates in the topK elements (value should be between 0.02 and 0.98) 
alpha = 0.1 # significance level (value should be between 0.01 and 0.15)

# create the Fair object 
fair = fsc.Fair(k, p, alpha)

# create an mtable using the unadjusted alpha
mtable = fair.create_unadjusted_mtable()
>> [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3]

# analytically calculate the fail probability
analytical = fair.compute_fail_probability(mtable)
>> 0.11517506930977106 

# create an mtable using the adjusted alpha
mtable = fair.create_adjusted_mtable()
>> [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2]

# again, analytically calculate the fail probability
analytical = fair.compute_fail_probability(mtable)
>> 0.10515247355215251
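
Intuitively, a ranking passes the fairness test when every prefix contains at least as many protected candidates as the mtable requires. Here is a minimal sketch of that check (satisfies_mtable is a hypothetical helper, not part of the library API, and the indexing convention is an assumption):

# hypothetical helper: scan the ranking top to bottom, given a list of
# protected/non-protected flags; assumes mtable[i] is the minimum number
# of protected candidates required among the top i+1 elements
def satisfies_mtable(protected_flags, mtable):
    protected_so_far = 0
    for position, is_protected in enumerate(protected_flags):
        protected_so_far += is_protected
        if protected_so_far < mtable[position]:
            return False
    return True

# a ranking made entirely of protected candidates trivially passes
satisfies_mtable([True] * 20, mtable)
>> True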

Generate random rankings and analyze them:

M = 10000 # number of rankings to generate (the estimate improves with larger numbers)

# generate rankings using the simulator (M lists of k objects of class fairsearchcore.models.FairScoreDoc) 
rankings = fsc.generate_rankings(M, k, p)
>> [[<FairScoreDoc [Protected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Protected]>, 
<FairScoreDoc [Protected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, 
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Protected]>, <FairScoreDoc [Nonprotected]>, 
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, 
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Protected]>, <FairScoreDoc [Nonprotected]>, 
 <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, 
 <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Protected]>],...]

# experimentally calculate the fail probability
experimental = fsc.compute_fail_probability(rankings, mtable)
>> 0.1025
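
Because the rankings are sampled at random, the experimental estimate should land close to the analytical fail probability computed earlier, and the agreement improves as M grows. A quick sanity check (a sketch; the tolerance is an arbitrary choice, not a library guarantee):

# the two estimates should roughly agree for a large M
assert abs(experimental - analytical) < 0.05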

Let's get the adjusted alpha (the value used to create the adjusted mtable):

# get alpha adjusted
alpha_adjusted = fair.adjust_alpha()
>> 0.07812500000000001
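
If the adjustment works by plugging this corrected significance level back into the mtable computation (an assumption about the internals, not something the README states), building a Fair object with alpha_adjusted should reproduce the adjusted mtable from above:

# sketch: rebuild the mtable from the adjusted alpha
fair_adjusted = fsc.Fair(k, p, alpha_adjusted)
mtable_check = fair_adjusted.create_unadjusted_mtable()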

Apply a fair re-ranking to a given ranking:

# import the FairScoreDoc class
from fairsearchcore.models import FairScoreDoc

# let's manually create an unfair ranking (False -> non-protected, True -> protected)
# in this example the first document (docid=20) has a score of 20, the last document (docid=1) a score of 1
unfair_ranking = [FairScoreDoc(20, 20, False), FairScoreDoc(19, 19, False), FairScoreDoc(18, 18, False),
                      FairScoreDoc(17, 17, False), FairScoreDoc(16, 16, False), FairScoreDoc(15, 15, False),
                      FairScoreDoc(14, 14, False), FairScoreDoc(13, 13, False), FairScoreDoc(12, 12, False),
                      FairScoreDoc(11, 11, False), FairScoreDoc(10, 10, False), FairScoreDoc(9, 9, False),
                      FairScoreDoc(8, 8, False), FairScoreDoc(7, 7, False), FairScoreDoc(6, 6, True),
                      FairScoreDoc(5, 5, True), FairScoreDoc(4, 4, True), FairScoreDoc(3, 3, True),
                      FairScoreDoc(2, 2, True), FairScoreDoc(1, 1, True)]
                      
# let's check whether the ranking is considered fair
fair.is_fair(unfair_ranking)
>> False
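
The same ranking can be built more compactly with a comprehension; this is equivalent to the literal above, given the FairScoreDoc(id, score, is_protected) constructor:

# docids 20..1 with matching scores; the bottom six (ids 6..1) are protected
unfair_ranking = [FairScoreDoc(i, i, i <= 6) for i in range(20, 0, -1)]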

# now re-rank the unfair ranking                 
re_ranked = fair.re_rank(unfair_ranking)
>> [<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, 
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, 
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Protected]>, 
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, 
<FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>, <FairScoreDoc [Nonprotected]>,
<FairScoreDoc [Protected]>, <FairScoreDoc [Protected]>, <FairScoreDoc [Protected]>, 
<FairScoreDoc [Protected]>, <FairScoreDoc [Protected]>]

# now let's see if the new ranking is fair
fair.is_fair(re_ranked)
>> True
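
As a final check, the re-ranked list should satisfy the adjusted mtable at every prefix. Using the hypothetical satisfies_mtable helper sketched earlier (and assuming each FairScoreDoc exposes an is_protected attribute, as its constructor suggests):

satisfies_mtable([doc.is_protected for doc in re_ranked], fair.create_adjusted_mtable())
>> True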

Each function in the library is documented in the code itself.

Development

  1. Clone this repository: git clone https://github.com/fair-search/fairsearchcore-python.git
  2. Change into the directory where you cloned the repository: cd WHERE_ITS_DOWNLOADED/fairsearchcore-python
  3. Use any IDE to work with the code

Testing

Just run:

python setup.py test 

Note: The simulator tests take a long time to execute. Also, because the tests involve randomness, they can (rarely) fail.

Credits

The FA*IR algorithm is described in this paper:

  • Meike Zehlike, Francesco Bonchi, Carlos Castillo, Sara Hajian, Mohamed Megahed, Ricardo Baeza-Yates: "FA*IR: A Fair Top-k Ranking Algorithm". Proc. of the 2017 ACM on Conference on Information and Knowledge Management (CIKM).

This code was developed by Ivan Kitanovski based on the paper. See the license file for more information.

For any questions, contact Meike Zehlike.

See also

You can also see the FA*IR plug-in for ElasticSearch and the FA*IR search core Java library.
