NamSor fairness backtest on the COMPAS dataset using Aequitas, with Aequitas' example COMPAS analysis as a reference.
Can a fairness audit of another tool work even if protected attributes like gender and ethnicity are NOT provided, but are instead inferred using NamSor? For this experiment I repeat Aequitas' COMPAS analysis, replacing the original attribute values with NamSor's predictions. For the data preparation I format the original COMPAS data from ProPublica with a script from Aequitas, which I modified so that it keeps first and last names, and I add the name-based gender and origin predictions returned by the NamSor API. For using the NamSor Python SDK to obtain gender and origin predictions for names I followed the approach from my Bachelor's thesis.
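For illustration, a minimal sketch of what these SDK calls might look like with the NamSor v2 Python SDK (the generated `openapi_client` package). The method and field names (`PersonalApi.gender`, `PersonalApi.origin`, `likely_gender`, `country_origin`) follow the SDK documentation but may differ between SDK versions, so treat this as an assumption rather than a verbatim excerpt of my scripts:

```python
import openapi_client

# Configure the NamSor client; the key is expected in key.txt (see environment notes below).
configuration = openapi_client.Configuration()
configuration.api_key['X-API-KEY'] = 'YOUR_NAMSOR_API_KEY'
personal_api = openapi_client.PersonalApi(openapi_client.ApiClient(configuration))

# Infer likely gender and likely country of origin from a first/last name pair.
gender_result = personal_api.gender('Maria', 'Garcia')
origin_result = personal_api.origin('Maria', 'Garcia')

print(gender_result.likely_gender)   # e.g. 'female'
print(origin_result.country_origin)  # e.g. a two-letter ISO country code
```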
For a critical review of ProPublica's original analysis, see Anthony et al. (2016).
How good are NamSor's predictions for gender and ethnicity? Are they equally good for all groups of people? How fair is NamSor? To answer these questions, I first prepare two data sets (using a modified script from 1.) and then calculate fairness measures with the help of Aequitas.
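To give an idea of the second step, the standard Aequitas Python workflow (Group → Bias → Fairness crosstabs) looks roughly like the sketch below. The input file name and the reference groups are placeholders, and the prepared data must contain Aequitas' expected `score`, `label_value` and attribute columns:

```python
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias
from aequitas.fairness import Fairness

# Prepared data with a binary 'score', a binary 'label_value' and the
# protected attributes (here: NamSor-inferred gender/ethnicity) as string columns.
df = pd.read_csv('compas_with_namsor_predictions.csv')  # placeholder file name

# Group metrics (FPR, FNR, ... per attribute value).
group = Group()
crosstabs, _ = group.get_crosstabs(df)

# Disparities relative to chosen reference groups (placeholders here).
bias = Bias()
disparities = bias.get_disparity_predefined_groups(
    crosstabs,
    original_df=df,
    ref_groups_dict={'sex': 'Male', 'race': 'Caucasian'},
)

# Parity determinations based on the disparity values.
fairness = Fairness()
fairness_df = fairness.get_group_value_fairness(disparities)
print(fairness_df.head())
```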
Details on the theoretical background, the fairness audit technology, the method, the results and the discussion are available in my report "Algorithmic Fairness from Theory to Practical Application. A Fairness Audit of NamSor's Gender and Ethnicity Classification Algorithms", which is included in this repository.
I developed in the following environment:
- Python 3.7.3 installed
- Anaconda 1.7.2 installed with Jupyter Notebook
- Aequitas 38.1 downloaded
- NamSor 2.0.9 SDK for Python downloaded (To use, get an API key for NamSor and save it in a key.txt file in the root folder of this repository; the .gitignore file is set to ignore key.txt. A sketch of loading the key is shown below.)
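For completeness, a small sketch of how key.txt could be read and wired into the SDK configuration (same assumptions about the generated `openapi_client` package as above):

```python
import openapi_client

# Read the NamSor API key from the git-ignored key.txt in the repository root.
with open('key.txt') as key_file:
    api_key = key_file.read().strip()

configuration = openapi_client.Configuration()
configuration.api_key['X-API-KEY'] = api_key
api_client = openapi_client.ApiClient(configuration)
personal_api = openapi_client.PersonalApi(api_client)
```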