LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments
Bias Auditing & Fair ML Toolkit
An official implementation of "Fairness Evaluation in Deepfake Detection Models using Metamorphic Testing"
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
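As an illustration of what such metrics measure, the sketch below computes a demographic parity difference and an equalized-odds gap directly from predictions. It is a minimal, self-contained example using NumPy; the function names and toy data are illustrative, not drawn from any toolkit listed here.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates across groups (0 = parity)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, group):
    """Max gap in TPR (y=1) or FPR (y=0) across groups (0 = equalized)."""
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy usage: predictions favor group "a" (higher TPR and FPR).
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(y_pred, group))      # 0.5
print(equalized_odds_difference(y_true, y_pred, group))  # 0.5
```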
Automatic Location of Disparities (ALD) for algorithmic audits.
Data Challenge leading to "A Multidisciplinary Lens of Bias in Hate Speech" (ASONAM 2023)
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
⚖️ A bias audit tool for binary decision-making systems
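To make the idea of a binary-decision bias audit concrete, here is a hedged sketch that compares each group's false positive rate against a reference group and flags disparities outside the commonly used four-fifths (80%) tolerance band. The function name and report format are assumptions for illustration, not any particular tool's API.

```python
import numpy as np

def audit_fpr_disparity(y_true, y_pred, group, reference, low=0.8, high=1.25):
    """Per-group false-positive-rate ratio vs. a reference group,
    flagged against the four-fifths (80%) tolerance band."""
    def fpr(mask):
        negatives = mask & (y_true == 0)
        return y_pred[negatives].mean() if negatives.any() else float("nan")
    ref_rate = fpr(group == reference)
    report = {}
    for g in np.unique(group):
        ratio = fpr(group == g) / ref_rate
        report[g] = {"fpr_ratio": ratio, "within_tolerance": bool(low <= ratio <= high)}
    return report
```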
A curated list of robust machine learning papers, articles, and recent advancements.
Machine Learning Bias Mitigation
Search-Based Fairness Testing
BiasFinder | IEEE TSE | Metamorphic Test Generation to Uncover Bias for Sentiment Analysis Systems
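Metamorphic bias testing of this kind can be sketched in a few lines: mutate an identity-bearing term in a seed sentence and require that the predicted label stay unchanged. The template, name lists, and `predict_sentiment` callable below are hypothetical stand-ins for whatever system is under test, not BiasFinder's actual interface.

```python
TEMPLATE = "{name} was thrilled with the service at the restaurant."
NAME_GROUPS = {"group_a": ["Emily", "Greg"], "group_b": ["Lakisha", "Jamal"]}

def metamorphic_bias_test(predict_sentiment):
    """Return (group, name, label) triples whose prediction deviates from
    the first mutant's label; an empty list means the relation holds."""
    failures, baseline = [], None
    for group_name, names in NAME_GROUPS.items():
        for name in names:
            label = predict_sentiment(TEMPLATE.format(name=name))
            if baseline is None:
                baseline = label
            elif label != baseline:  # metamorphic relation: label must not change
                failures.append((group_name, name, label))
    return failures
```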
An implementation of the Fair Dummies method.
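Under my reading of the Fair Dummies method (Romano et al.), its core step resamples the sensitive attribute from its empirical conditional distribution given the label, yielding dummy attributes that satisfy equalized odds by construction and can serve as a null reference for testing or training. A rough sketch of that resampling step, with assumed variable names:

```python
import numpy as np

def sample_fair_dummies(y, a, seed=0):
    """Draw, for each example, a dummy sensitive attribute from the
    empirical conditional distribution P(A | Y = y_i)."""
    rng = np.random.default_rng(seed)
    a_dummy = np.empty_like(a)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        a_dummy[idx] = rng.choice(a[idx], size=idx.size, replace=True)
    return a_dummy
```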
This repository covers essential tools for machine learning with Python, including supervised and unsupervised models, natural language processing (NLP), and fairness auditing.
Article for Special Edition of Information: Machine Learning with Python
Examples of unfairness detection for a classification-based credit model
NuSMV models for traffic lights controlling a crossing of two one-way roads.
Simulates possible outcomes and estimates each player's probability of winning in our team's game, 'Doggo Quest', for the King card game challenge.
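A simulation like this typically reduces to Monte Carlo estimation: play out many randomized rounds and count wins per player. The sketch below uses a placeholder high-card round, since the actual rules of 'Doggo Quest' are not described here.

```python
import random

def simulate_round(num_players, rng):
    """One placeholder round: each player draws a card, highest card wins."""
    draws = [rng.randint(1, 13) for _ in range(num_players)]
    best = max(draws)
    winners = [i for i, d in enumerate(draws) if d == best]
    return rng.choice(winners)  # break ties uniformly at random

def win_probabilities(num_players=4, trials=100_000, seed=42):
    rng = random.Random(seed)
    wins = [0] * num_players
    for _ in range(trials):
        wins[simulate_round(num_players, rng)] += 1
    return [w / trials for w in wins]

print(win_probabilities())  # symmetric rules, so roughly 0.25 each
```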