
Commit

master->main (#515)
hoffmansc authored Feb 22, 2024
1 parent c3c110d commit c704cd5
Showing 13 changed files with 19 additions and 19 deletions.
6 changes: 3 additions & 3 deletions .github/workflows/ci.yml
@@ -2,11 +2,11 @@ name: Continuous Integration
 
 # Controls when the action will run.
 on:
-  # Triggers the workflow on push or pull request events but only for the master branch
+  # Triggers the workflow on push or pull request events but only for the main branch
   push:
-    branches: [ master ]
+    branches: [ main ]
   pull_request:
-    branches: [ master ]
+    branches: [ main ]
   paths-ignore:
     - '*.md'
 
2 changes: 1 addition & 1 deletion CONTRIBUTING.rst
@@ -260,7 +260,7 @@ a draft and ask for help.
 First-time contributors require approval to run workflows with GitHub actions. CI
 should run unit tests for both Python and R for all supported versions as well as
 print linter warnings. See
-`ci.yml <https://github.com/Trusted-AI/AIF360/blob/master/.github/workflows/ci.yml>`_
+`ci.yml <https://github.com/Trusted-AI/AIF360/blob/main/.github/workflows/ci.yml>`_
 for the latest build script.
 
 Community
2 changes: 1 addition & 1 deletion MAINTAINERS.md
@@ -17,7 +17,7 @@ The maintainers are listed in alphabetical order.
 
 This repository does not have a traditional release management cycle, but
 should instead be maintained as a useful, working, and polished reference at
-all times. While all work can therefore be focused on the master branch, the
+all times. While all work can therefore be focused on the main branch, the
 quality of this branch should never be compromised.
 
 The remainder of this document details how to merge pull requests to the
4 changes: 2 additions & 2 deletions examples/README.md
@@ -5,9 +5,9 @@ Both tutorials and demos illustrate working code using AIF360. Tutorials provid
 the user through the various steps of the notebook.
 
 ## Tutorials
-The [Credit scoring](https://nbviewer.jupyter.org/github/Trusted-AI/AIF360/blob/master/examples/tutorial_credit_scoring.ipynb) tutorial is the recommended first tutorial to get an understanding for how AIF360 works. It first provides a brief summary of a machine learning workflow and an overview of AIF360. It then demonstrates the use of one fairness metric (mean difference) and one bias mitigation algorithm (optimized preprocessing) in the context of age bias in a credit scoring scenario using the [German Credit dataset](https://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29).
+The [Credit scoring](https://nbviewer.jupyter.org/github/Trusted-AI/AIF360/blob/main/examples/tutorial_credit_scoring.ipynb) tutorial is the recommended first tutorial to get an understanding for how AIF360 works. It first provides a brief summary of a machine learning workflow and an overview of AIF360. It then demonstrates the use of one fairness metric (mean difference) and one bias mitigation algorithm (optimized preprocessing) in the context of age bias in a credit scoring scenario using the [German Credit dataset](https://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29).
 
-The [Medical expenditure](https://nbviewer.jupyter.org/github/Trusted-AI/AIF360/blob/master/examples/tutorial_medical_expenditure.ipynb) tutorial is a comprehensive tutorial demonstrating the interactive exploratory nature of a data scientist detecting and mitigating racial bias in a care management scenario. It uses a variety of fairness metrics (disparate impact, average odds difference, statistical parity difference, equal opportunity difference, and Theil index) and algorithms (reweighing, prejudice remover, and disparate impact remover). It also demonstrates how explanations can be generated for predictions made by models learned with the toolkit using LIME.
+The [Medical expenditure](https://nbviewer.jupyter.org/github/Trusted-AI/AIF360/blob/main/examples/tutorial_medical_expenditure.ipynb) tutorial is a comprehensive tutorial demonstrating the interactive exploratory nature of a data scientist detecting and mitigating racial bias in a care management scenario. It uses a variety of fairness metrics (disparate impact, average odds difference, statistical parity difference, equal opportunity difference, and Theil index) and algorithms (reweighing, prejudice remover, and disparate impact remover). It also demonstrates how explanations can be generated for predictions made by models learned with the toolkit using LIME.
 Data from the Medical Expenditure Panel Survey ([2015](https://meps.ahrq.gov/mepsweb/data_stats/download_data_files_detail.jsp?cboPufNumber=HC-181) and [2016](https://meps.ahrq.gov/mepsweb/data_stats/download_data_files_detail.jsp?cboPufNumber=HC-192)) is used in this tutorial.
 
 ## Demos
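As an aside for readers of this diff, the tutorial descriptions above mention the mean difference and disparate impact metrics on the German Credit dataset. A minimal illustrative sketch (not part of this commit) of computing them with AIF360 follows; the constructor arguments mirror the credit scoring tutorial, and the raw German Credit data files are assumed to have already been downloaded into aif360/data/raw/german/.

```python
# Illustrative sketch only, not part of this diff: dataset-level fairness
# metrics on the German Credit data, with age as the protected attribute.
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Assumes the raw German Credit files were placed in aif360/data/raw/german/.
dataset = GermanDataset(
    protected_attribute_names=['age'],
    privileged_classes=[lambda x: x >= 25],      # age >= 25 is the privileged group
    features_to_drop=['personal_status', 'sex'])

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{'age': 1}],
    unprivileged_groups=[{'age': 0}])

print(metric.mean_difference())   # difference in favorable-outcome rates; 0.0 indicates parity
print(metric.disparate_impact())  # ratio of favorable-outcome rates; 1.0 indicates parity
```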
2 changes: 1 addition & 1 deletion examples/demo_lime.ipynb
@@ -4,7 +4,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Trusted-AI/AIF360/blob/master/examples/demo_lime.ipynb)\n"
+    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Trusted-AI/AIF360/blob/main/examples/demo_lime.ipynb)\n"
   ]
 },
 {
2 changes: 1 addition & 1 deletion examples/sklearn/demo_grid_search_reduction_classification_sklearn.ipynb
@@ -3,7 +3,7 @@
 {
   "cell_type": "markdown",
   "source": [
-    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://github.com/Trusted-AI/AIF360/blob/master/examples/sklearn/demo_grid_search_reduction_classification_sklearn.ipynb)\n"
+    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://github.com/Trusted-AI/AIF360/blob/main/examples/sklearn/demo_grid_search_reduction_classification_sklearn.ipynb)\n"
   ],
   "metadata": {
     "id": "Zny_LW9qx_vx"
4 changes: 2 additions & 2 deletions examples/sklearn/demo_grid_search_reduction_regression_sklearn.ipynb
@@ -6,7 +6,7 @@
   "id": "fgfyb_2c9WL4"
 },
 "source": [
-  "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://github.com/Trusted-AI/AIF360/blob/master/examples/sklearn/demo_grid_search_reduction_regression_sklearn.ipynb)"
+  "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://github.com/Trusted-AI/AIF360/blob/main/examples/sklearn/demo_grid_search_reduction_regression_sklearn.ipynb)"
 ]
 },
 {
@@ -418,7 +418,7 @@
   "id": "WyqzfrL271DN"
 },
 "source": [
-  "Search for the best regressor and observe mean absolute error. Grid search for regression uses \"BoundedGroupLoss\" to specify using bounded group loss for its constraints. Accordingly we need to specify a loss function, like \"Absolute.\" Other options include \"Square\" and \"ZeroOne.\" When the loss is \"Absolute\" or \"Square\" we also specify the expected range of the y values in min_val and max_val. For details on the implementation of these loss function see the fairlearn library here https://github.com/fairlearn/fairlearn/blob/master/fairlearn/reductions/_moments/bounded_group_loss.py."
+  "Search for the best regressor and observe mean absolute error. Grid search for regression uses \"BoundedGroupLoss\" to specify using bounded group loss for its constraints. Accordingly we need to specify a loss function, like \"Absolute.\" Other options include \"Square\" and \"ZeroOne.\" When the loss is \"Absolute\" or \"Square\" we also specify the expected range of the y values in min_val and max_val. For details on the implementation of these loss function see the fairlearn library here https://github.com/fairlearn/fairlearn/blob/main/fairlearn/reductions/_moments/bounded_group_loss.py."
 ]
 },
 {
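The markdown cell above explains grid search for regression with the "BoundedGroupLoss" constraint and an "Absolute" loss. A hedged sketch of that usage through AIF360's scikit-learn-compatible GridSearchReduction wrapper is shown below; the toy DataFrame, the 'group' column name, and the random data are illustrative assumptions rather than the notebook's own setup.

```python
# Illustrative sketch only, not from this diff: grid search regression with a
# bounded group loss constraint. The toy data and column names are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from aif360.sklearn.inprocessing import GridSearchReduction

rng = np.random.default_rng(0)
X = pd.DataFrame({'group': rng.integers(0, 2, 200),   # protected attribute kept as a column
                  'x1': rng.normal(size=200)})
y = pd.Series(X['x1'] + 0.3 * X['group'] + rng.normal(scale=0.1, size=200))

gsr = GridSearchReduction(prot_attr='group',
                          estimator=LinearRegression(),
                          constraints='BoundedGroupLoss',
                          loss='Absolute',    # other options: 'Square', 'ZeroOne'
                          min_val=y.min(),    # expected range of y, needed for
                          max_val=y.max(),    # 'Absolute' and 'Square' losses
                          grid_size=10)
gsr.fit(X, y)
print(np.mean(np.abs(gsr.predict(X) - y)))    # observe mean absolute error
```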
2 changes: 1 addition & 1 deletion examples/sklearn/demo_learning_fair_representations.ipynb
@@ -6,7 +6,7 @@
   "id": "vr9-SPbQxLvu"
 },
 "source": [
-  "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Trusted-AI/AIF360/blob/master/examples/sklearn/demo_learning_fair_representations.ipynb)"
+  "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Trusted-AI/AIF360/blob/main/examples/sklearn/demo_learning_fair_representations.ipynb)"
 ]
 },
 {
2 changes: 1 addition & 1 deletion examples/sklearn/demo_mdss_bias_scan.ipynb
@@ -6,7 +6,7 @@
   "id": "-8hjwN6W0VJz"
 },
 "source": [
-  "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Trusted-AI/AIF360/blob/master/examples/sklearn/demo_mdss_bias_scan.ipynb)"
+  "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Trusted-AI/AIF360/blob/main/examples/sklearn/demo_mdss_bias_scan.ipynb)"
 ]
 },
 {
2 changes: 1 addition & 1 deletion examples/sklearn/demo_mdss_classifier_metric_sklearn.ipynb
@@ -6,7 +6,7 @@
   "id": "nYDA72xl0a5e"
 },
 "source": [
-  "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Trusted-AI/AIF360/blob/master/examples/sklearn/demo_mdss_classifier_metric_sklearn.ipynb)"
+  "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Trusted-AI/AIF360/blob/main/examples/sklearn/demo_mdss_classifier_metric_sklearn.ipynb)"
 ]
 },
 {
6 changes: 3 additions & 3 deletions examples/sklearn/demo_new_features.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion examples/sklearn/demo_reject_option_classification.ipynb
@@ -4,7 +4,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Trusted-AI/AIF360/blob/master/examples/sklearn/demo_reject_option_classification.ipynb)"
+    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Trusted-AI/AIF360/blob/main/examples/sklearn/demo_reject_option_classification.ipynb)"
   ]
 },
 {
2 changes: 1 addition & 1 deletion examples/sklearn/monthly_bee_datasets_metrics.ipynb
@@ -4,7 +4,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Trusted-AI/AIF360/blob/master/examples/sklearn/monthly_bee_datasets_metrics.ipynb)"
+    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Trusted-AI/AIF360/blob/main/examples/sklearn/monthly_bee_datasets_metrics.ipynb)"
   ]
 },
 {
