Releases · Trusted-AI/AIF360
v0.1.1
v0.1.0
AIF360 0.1.0 Release Notes
The AI Fairness 360 toolkit is an open-source library to help detect and remove bias in machine learning models. The AI Fairness 360 Python package includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
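As a minimal sketch of the dataset and metric interfaces, the example below builds a small binary-label dataset from a pandas DataFrame and computes two group fairness metrics. The `BinaryLabelDataset` and `BinaryLabelDatasetMetric` names follow the package's public documentation and may differ slightly in this early release; the toy DataFrame and its columns (`sex`, `age`, `label`) are invented purely for illustration.

```python
import pandas as pd

from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data invented for illustration: 'sex' is the protected attribute
# (1 = privileged group), 'label' is the binary outcome (1 = favorable).
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'age':   [25, 40, 35, 50, 30, 45, 28, 52],
    'label': [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=['label'],
    protected_attribute_names=['sex'],
    favorable_label=1.0,
    unfavorable_label=0.0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{'sex': 1}],
    unprivileged_groups=[{'sex': 0}],
)

# Difference and ratio of favorable-outcome rates between the unprivileged
# and privileged groups.
print(metric.statistical_parity_difference())
print(metric.disparate_impact())
```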
Highlights
Features provided in this release include:
- Algorithms (usage sketch after this list):
  - Optimized Preprocessing (Calmon et al., 2017)
  - Disparate Impact Remover (Feldman et al., 2015)
  - Equalized Odds Postprocessing (Hardt et al., 2016)
  - Reweighing (Kamiran and Calders, 2012)
  - Reject Option Classification (Kamiran et al., 2012)
  - Prejudice Remover Regularizer (Kamishima et al., 2012)
  - Calibrated Equalized Odds Postprocessing (Pleiss et al., 2017)
  - Learning Fair Representations (Zemel et al., 2013)
  - Adversarial Debiasing (Zhang et al., 2018)
- Datasets Interface (raw data not included):
  - UCI ML Repository: Adult, German Credit, Bank Marketing
  - ProPublica Recidivism
  - Medical Expenditure Panel Survey
- Metrics:
  - Comprehensive set of group fairness metrics derived from selection rates and error rates
  - Comprehensive set of sample distortion metrics
  - Generalized Entropy Index (Speicher et al., 2018)
- Metric Explanations (usage sketch after this list):
  - Text and JSON output formats supported
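As referenced in the algorithms list above, a bias mitigation algorithm such as Reweighing (one of the pre-processing methods in this release) can be applied directly to a dataset object. The sketch below continues from the `dataset` built in the earlier example; the `Reweighing` class and its `fit_transform` interface are assumed from the package's public documentation and may differ slightly in this release.

```python
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

privileged_groups = [{'sex': 1}]
unprivileged_groups = [{'sex': 0}]

# Reweighing learns instance weights that balance the (weighted)
# favorable-outcome rates across the protected groups.
rw = Reweighing(unprivileged_groups=unprivileged_groups,
                privileged_groups=privileged_groups)
dataset_transf = rw.fit_transform(dataset)  # 'dataset' from the sketch above

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf,
    privileged_groups=privileged_groups,
    unprivileged_groups=unprivileged_groups,
)

# With the learned weights applied, the weighted statistical parity
# difference should be close to zero.
print(metric_transf.statistical_parity_difference())
```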
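Likewise, the metric explanations listed above can be generated for individual metric values in text or JSON form. The sketch below wraps the `metric` object from the first example; the `MetricTextExplainer` and `MetricJSONExplainer` names are assumptions based on the package's public documentation, not a guaranteed part of this release's API.

```python
from aif360.explainers import MetricTextExplainer, MetricJSONExplainer

# Wrap the metric object from the first sketch; explainer methods mirror the
# metric methods and return annotated text or JSON strings.
text_explainer = MetricTextExplainer(metric)
json_explainer = MetricJSONExplainer(metric)

print(text_explainer.disparate_impact())
print(json_explainer.disparate_impact())
```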