Assess fairness of machine learning models and choose an appropriate fairness metric for your use case with Fairlearn
Personal Data Science Projects
Demos on teaching your models to play fair with FairLearn.
Learn different techniques for mitigating fairness-related harms using Fairlearn (a minimal sketch of this workflow appears after this list).
Microsoft Ignite - Getting started on your health-tech journey using responsible AI
A platform developed with Cash App to help ML engineers detect and visualize biases in models using Fairlearn. Features include a collaborative and interactive dashboard (React, Chart.js), a Flask backend, and a secure MySQL database for data storage and analysis.
This repository was used for my thesis. The goal was to find a biased dataset and mitigate its bias; that work lives in the patients directory. See the README file for more details.
Evaluating Fairness in Machine Learning: Comparative Analysis of Fairlearn and AIF360
An ethically aware deep learning project that predicts credit card offer acceptance while mitigating income-based bias using SHAP, Fairlearn, and AIF360.
Demos of FairLearn and InterpretML, as described in my article on responsible AI.
Student Success Model (SSM)
A demonstration of detecting and mitigating bias in AI.
Mitigating bias in our XGBoost model used to identify Glioblastoma Multiforme tumors.
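Several of the repositories above follow the same basic Fairlearn workflow: assess per-group metrics for a trained model, then retrain it under a fairness constraint. The sketch below is only an illustration of that workflow, not code from any listed project; the synthetic dataset, the made-up sensitive feature with groups "group_a"/"group_b", and the choice of LogisticRegression are all assumptions made here for the example.

```python
# Minimal Fairlearn sketch: assess per-group metrics, then mitigate with a
# reductions algorithm. All data and model choices below are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Synthetic data with a made-up binary sensitive feature (hypothetical groups).
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
rng = np.random.default_rng(0)
sensitive = rng.choice(["group_a", "group_b"], size=len(y))

# 1. Train an unconstrained baseline model.
baseline = LogisticRegression(max_iter=1000).fit(X, y)
y_pred = baseline.predict(X)

# 2. Assess fairness: accuracy and selection rate broken down by group.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(mf.by_group)
print("Demographic parity difference:",
      demographic_parity_difference(y, y_pred, sensitive_features=sensitive))

# 3. Mitigate: retrain the model under a demographic-parity constraint.
mitigator = ExponentiatedGradient(
    LogisticRegression(max_iter=1000), constraints=DemographicParity()
)
mitigator.fit(X, y, sensitive_features=sensitive)
y_mitigated = mitigator.predict(X)
print("After mitigation:",
      demographic_parity_difference(y, y_mitigated, sensitive_features=sensitive))
```

In practice the choice of fairness metric (demographic parity, equalized odds, etc.) depends on the use case, which is exactly the question the assessment repositories above explore.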