Releases: cvs-health/langfair

v0.3.1

02 Jan 19:31
da2ee94

Highlights

  • New check_ftu method in the CounterfactualGenerator class, providing a more user-friendly way to check for fairness through unawareness (FTU) than the previous approach with parse_texts
  • Updates to the counterfactual demo notebook
  • Updates to dev dependencies
  • Fixed broken links in the README
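
As a rough illustration of what an FTU check involves (this is a conceptual sketch, not LangFair's actual API; the word list and function name are hypothetical), FTU holds when prompts contain no protected-attribute words, so the model cannot condition on them:

```python
# Hypothetical illustration of a fairness-through-unawareness (FTU) check.
# A real implementation would use a curated protected-attribute word list.
GENDER_WORDS = {"he", "she", "him", "her", "his", "hers", "man", "woman"}

def ftu_holds(prompts):
    """Return True if no prompt mentions a protected-attribute word."""
    for prompt in prompts:
        tokens = {t.strip(".,!?").lower() for t in prompt.split()}
        if tokens & GENDER_WORDS:
            return False
    return True

prompts = [
    "Summarize the claim filed by the policyholder.",
    "Explain why she was denied coverage.",
]
ftu_holds(prompts)  # False: the second prompt mentions "she"
```

A dedicated method like check_ftu saves users from manually parsing prompts for attribute words and interpreting the results.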

Full Changelog: v0.3.0...v0.3.1

v0.3.0

20 Dec 19:37
c5d1429

Highlights

  • Option to return response-level scores from CounterfactualMetrics and AutoEval
  • Additional unit tests for CounterfactualMetrics and AutoEval
  • Data loader functions for cleaner code when using example data
  • Enforced string type in ResponseGenerator and CounterfactualGenerator outputs to avoid errors when computing metrics if any response is None
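
The string-enforcement fix can be sketched as follows (a minimal illustration of the idea, not LangFair's internal code; the function name is hypothetical):

```python
def enforce_strings(responses):
    """Coerce every generated response to str so downstream metric
    computations never receive None."""
    return ["" if r is None else str(r) for r in responses]

enforce_strings(["ok", None, "fine"])  # → ["ok", "", "fine"]
```

Mapping failed generations to the empty string keeps list lengths intact, so metrics that pair responses positionally still line up.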

Full Changelog: v0.2.1...v0.3.0

v0.2.1

11 Dec 21:56
9908818

Highlights

  • Updated README with more illustrative examples
  • Patched AutoEval to pairwise-filter counterfactual responses in cases of generation failure
  • Added references to docstrings
  • Fixed the SPDX license expression in pyproject.toml
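
The pairwise-filtering patch can be illustrated with a small sketch (hypothetical function name, not AutoEval's internal code): counterfactual metrics compare responses in matched pairs, so if generation fails for one side, its counterpart must be dropped too.

```python
def filter_pairs(responses_a, responses_b):
    """Keep only positions where generation succeeded for BOTH
    counterfactual groups, so metrics compare matched pairs."""
    return [
        (a, b)
        for a, b in zip(responses_a, responses_b)
        if a is not None and b is not None
    ]

filter_pairs(["x", None, "y"], ["p", "q", None])  # → [("x", "p")]
```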

Full Changelog: v0.2.0...v0.2.1

v0.2.0

21 Nov 22:32
679464c

Highlights

  • Upgraded LangChain to 0.3.7 to resolve dependency conflicts with later versions of LangChain community packages
  • Refactored ResponseGenerator, CounterfactualGenerator, and AutoEval to accommodate the LangChain upgrade
  • More intuitive printing in AutoEval
  • Updated unit tests
  • Updated notebook documentation for user-friendliness and to include MistralAI
  • Improved exception handling
  • Removed 'langchain: ' prefix from print statements

Full Changelog: v0.1.2...v0.2.0

v0.1.2

11 Nov 15:34
6563ba2

Highlights

  • Improved README readability
  • Improved notebook documentation readability
  • Removed the scipy, sklearn, openai, and langchain-openai dependencies
  • Added a new argument to ResponseGenerator and CounterfactualGenerator that lets users specify which exceptions to suppress
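
The exception-suppression argument can be sketched conceptually as follows (hypothetical names throughout; this is not LangFair's signature): the generator catches only the exception types the user lists and records a failed generation instead of aborting the whole run.

```python
def generate_with_suppression(prompts, generate, suppressed_exceptions=()):
    """Call `generate` on each prompt; where a suppressed exception type
    is raised, record None instead of propagating the error."""
    results = []
    for prompt in prompts:
        try:
            results.append(generate(prompt))
        except suppressed_exceptions:
            results.append(None)
    return results

def flaky(prompt):
    if prompt == "bad":
        raise ValueError("provider refused the request")
    return prompt.upper()

generate_with_suppression(
    ["ok", "bad"], flaky, suppressed_exceptions=(ValueError,)
)  # → ["OK", None]
```

Letting the caller choose which exception types to suppress keeps genuinely unexpected errors loud while tolerating known, transient provider failures.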

Full Changelog: v0.1.1...v0.1.2

v0.1.1

28 Oct 17:45
c93e884

Full Changelog: v0.1.0...v0.1.1

v0.1.0

23 Oct 19:24
07e6a33

LangFair v0.1.0 Release Notes

LangFair is a Python library for conducting bias and fairness assessments of LLM use cases. This repository includes a framework for choosing bias and fairness metrics, demo notebooks, and an LLM bias and fairness technical playbook containing a thorough discussion of LLM bias and fairness risks, evaluation metrics, and best practices. Please refer to our documentation site for more details on how to use LangFair.

Highlights

Bias and fairness metrics offered by LangFair fall into one of several categories. The full suite of metrics is displayed below.

  • Counterfactual Fairness Metrics
  • Stereotype Metrics
  • Toxicity Metrics
  • Recommendation Fairness Metrics
  • Classification Fairness Metrics