This Python package implements the empirical methodology proposed in [1] for systematically evaluating how effective label noise correction techniques are at ensuring the fairness of models trained on biased datasets. The methodology manipulates the amount of label noise in the training data and can be applied both to fairness benchmarks and to standard ML datasets. Experiment tracking is done with mlflow.
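The package's own API is illustrated in the examples folder described below; as a rough, hypothetical sketch of the kind of experiment the methodology describes (group-dependent label noise injected at increasing rates, a model trained on the noisy labels, and a fairness metric logged to mlflow), the following standalone script uses synthetic data and scikit-learn rather than the fair_lnc_evaluation API. The noise model, the demographic parity metric, and all variable names are illustrative assumptions, not the package's interface.

```python
import mlflow
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data with a binary sensitive attribute (a hypothetical stand-in
# for a fairness benchmark dataset).
n = 5000
group = rng.integers(0, 2, size=n)
x = np.column_stack([rng.normal(size=n), rng.normal(size=n), group])
y_clean = (x[:, 0] + 0.5 * x[:, 1] > 0).astype(int)

def demographic_parity_gap(y_pred, g):
    """Absolute difference in positive prediction rates between groups."""
    return abs(y_pred[g == 0].mean() - y_pred[g == 1].mean())

with mlflow.start_run(run_name="label-noise-sweep"):
    for noise_rate in [0.0, 0.1, 0.2, 0.3]:
        # Inject group-dependent label noise: positives in the protected
        # group are flipped to negative with probability `noise_rate`.
        flip = (rng.random(n) < noise_rate) & (group == 1) & (y_clean == 1)
        y_noisy = np.where(flip, 0, y_clean)

        # Train on the noisy labels and measure fairness on held-out data.
        x_tr, x_te, y_tr, _, _, g_te = train_test_split(
            x, y_noisy, group, test_size=0.3, random_state=0
        )
        model = LogisticRegression().fit(x_tr, y_tr)
        gap = demographic_parity_gap(model.predict(x_te), g_te)

        mlflow.log_metric("demographic_parity_gap", float(gap),
                          step=int(noise_rate * 100))
```

A label noise correction technique would be evaluated by cleaning `y_noisy` before training and comparing the resulting fairness metrics against this noisy baseline.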
You can install the package using pip:

pip install fair_lnc_evaluation
Examples of how to use this package can be found in the examples folder.
Contributions to this package are welcome! If you have a bug report or a feature request, or would like to contribute code improvements, please submit an issue or a pull request on the GitHub repository.
This package is distributed under the MIT License.