Algorithms for abstention, calibration, and domain adaptation to label shift.
Associated papers:
Shrikumar A*†, Alexandari A*, Kundaje A†, "A Flexible and Adaptive Framework for Abstention Under Class Imbalance"
Alexandari A*, Kundaje A†, Shrikumar A*†, "Maximum Likelihood with Bias-Corrected Calibration is Hard-To-Beat at Label Shift Adaptation"
* co-first authors; † co-corresponding authors
See https://github.com/blindauth/abstention_experiments and https://github.com/blindauth/labelshiftexperiments for Colab notebooks reproducing the experiments in the papers.
Installation:
pip install abstention
For calibration:
- Platt Scaling
- Isotonic Regression
- Temperature Scaling
- Vector Scaling
- Bias-Corrected Temperature Scaling
- No-Bias Vector Scaling
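As a concrete illustration of the first of these ideas, here is a minimal from-scratch sketch of temperature scaling: fit a single scalar temperature T on held-out logits by minimizing the negative log-likelihood, then divide test logits by T before the softmax. This is illustrative only; the function names are ours, not the package's API (see the Colab notebooks linked above for the package's interfaces).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(valid_logits, valid_labels_onehot):
    """Fit a scalar temperature T on held-out (validation) logits by
    minimizing the negative log-likelihood of the one-hot labels."""
    def nll(T):
        probs = softmax(valid_logits, T)
        return -np.mean(np.log((probs * valid_labels_onehot).sum(axis=1) + 1e-12))
    res = minimize_scalar(nll, bounds=(0.05, 20.0), method='bounded')
    return res.x

# Usage (valid_logits/test_logits are pre-softmax model outputs):
# T = fit_temperature(valid_logits, valid_labels_onehot)
# calibrated_test_probs = softmax(test_logits, T)
```

Bias-Corrected Temperature Scaling additionally learns per-class bias terms alongside T, which the second paper above found important for label shift adaptation.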
For domain adaptation to label shift:
- Expectation Maximization (Saerens et al., 2002)
- Black-Box Shift Learning (BBSL) (Lipton et al., 2018)
- Regularized Learning under Label Shifts (RLLS) (Azizzadenesheli et al., 2019)
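Of these, the EM procedure of Saerens et al. (2002) is compact enough to sketch in full. The NumPy code below is an illustrative implementation of the EM updates (function and variable names are ours, not the package's API): given (ideally calibrated) source-model predictions on unlabeled target data and the source class priors, it alternates between reweighting the posteriors by the prior ratio and re-estimating the target priors.

```python
import numpy as np

def em_label_shift(target_probs, source_priors, tol=1e-8, max_iter=10000):
    """EM updates of Saerens et al. (2002).
    target_probs: (n, k) array of (ideally calibrated) source-model
      predictions on unlabeled target-domain data.
    source_priors: length-k array of class frequencies in the source data.
    Returns the estimated target priors and the adapted posteriors."""
    source_priors = np.asarray(source_priors, dtype=float)
    target_priors = source_priors.copy()
    for _ in range(max_iter):
        # E-step: reweight each posterior by the estimated prior ratio,
        # then renormalize each example's probabilities to sum to 1
        weighted = target_probs * (target_priors / source_priors)
        posteriors = weighted / weighted.sum(axis=1, keepdims=True)
        # M-step: re-estimate the target priors as the mean posterior
        new_priors = posteriors.mean(axis=0)
        if np.abs(new_priors - target_priors).max() < tol:
            break
        target_priors = new_priors
    return target_priors, posteriors

# Usage:
# est_priors, adapted_probs = em_label_shift(calibrated_target_probs,
#                                            source_class_priors)
```

The second paper above finds that this simple EM procedure, when paired with bias-corrected calibration, is hard to beat compared to BBSL and RLLS.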
For abstention:
- Metric-specific abstention methods described in A Flexible and Adaptive Framework for Abstention Under Class Imbalance, including abstention to optimize auROC, auPRC, sensitivity at a target specificity, and weighted Cohen's Kappa
- Jensen-Shannon Divergence from class priors
- Entropy of the predicted class probabilities (Wan, 1990)
- Probability of the highest-predicted class (Hendrycks & Gimpel, 2016)
- The method of Fumera et al., 2000
- See the Colab notebooks at https://github.com/blindauth/abstention_experiments for details on how to use the various abstention methods.
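For intuition, the two simplest baselines in the list above, the max-probability score of Hendrycks & Gimpel (2016) and the entropy score of Wan (1990), both amount to ranking examples by a confidence score and abstaining on the least-confident fraction. A minimal sketch (function names are ours; the metric-specific methods in the package are more sophisticated than this):

```python
import numpy as np

def abstention_mask(probs, abstain_frac=0.1, method="max_prob"):
    """Flag the least-confident fraction of examples for abstention.
    method="max_prob": confidence is the probability of the
      highest-predicted class (Hendrycks & Gimpel, 2016).
    method="neg_entropy": confidence is the negative entropy of the
      predicted class probabilities (Wan, 1990)."""
    if method == "max_prob":
        confidence = probs.max(axis=1)
    elif method == "neg_entropy":
        confidence = (probs * np.log(probs + 1e-12)).sum(axis=1)
    else:
        raise ValueError("unknown method: " + method)
    n_abstain = int(np.ceil(abstain_frac * len(probs)))
    mask = np.zeros(len(probs), dtype=bool)
    mask[np.argsort(confidence)[:n_abstain]] = True  # least confident first
    return mask

# Usage: evaluate your metric only on the retained (non-abstained) examples
# keep = ~abstention_mask(test_probs, abstain_frac=0.2)
```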
If you have any questions, please contact:
Avanti Shrikumar: avanti [dot] shrikumar [at] gmail.com
Amr Alexandari: amr [dot] alexandari [at] gmail.com
Anshul Kundaje: akundaje [at] stanford [dot] edu