diff --git a/cs8850_22_calibration.html b/cs8850_22_calibration.html
index 0798de3b..6500bf9b 100644
--- a/cs8850_22_calibration.html
+++ b/cs8850_22_calibration.html
@@ -125,11 +125,12 @@
Schedule
-
+
Outline for the lecture
- Receiver Operator Characteristics
+
- Trustworthy AI
- Model Calibration
@@ -190,6 +191,65 @@ Area Under the Curve (AUC)
+
+
+
+
+
+ Why trustworthy AI is interesting
+
+ - AI is increasingly used not only for decision support, but also for automated decision making
+
- Trust in resulting AI decisions is vital
+
- How to make AI solutions trustworthy?
+
- What does it mean to be trustworthy?
+
- AI trustworthiness is strongly manifested in the fields of Explainable AI (XAI) and Fairness, Accountability and Transparency (FAT)
+
+
+
+
+
+ Interpretability
+
+ - A recognized key property of trustworthy predictive models
+
- Interpretable models make it possible to understand individual predictions without invoking explanation frameworks/modules
+
- If a model is interpretable, inspection and analysis become straightforward
+
- However, the most visible approaches build external explanation frameworks, and do so vigorously (including ourselves)
+
+
+
+
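As a sketch of why such models are easy to inspect, consider a tiny hand-written rule set (the rules and field names here are hypothetical, not from the lecture): every prediction can be traced back to the single rule that fired, with no external explanation module needed.

```python
# Hypothetical rule set for a toy loan decision; interpretable because each
# prediction carries the exact rule that produced it.
RULES = [
    ("age < 25 and income < 30k", lambda x: x["age"] < 25 and x["income"] < 30_000, "deny"),
    ("income >= 60k",             lambda x: x["income"] >= 60_000,                  "approve"),
]
DEFAULT = "manual review"

def predict_with_explanation(x):
    """Return (label, explanation); the explanation names the rule that fired."""
    for description, predicate, label in RULES:
        if predicate(x):
            return label, f"rule matched: {description}"
    return DEFAULT, "no rule matched; default applied"

label, why = predict_with_explanation({"age": 22, "income": 20_000})
# label == "deny"; `why` names the first rule
```

A decision tree or rule-induction learner produces the same kind of artifact automatically; the point is only that the model itself is the explanation.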
+ Algorithmic Confidence
+
+ - FAT principles include accuracy as a vital component of accountable algorithms
+
- One guiding question for accountable algorithms: "How confident are the decisions output by your system?"
+
- Thus, accuracy alone is not enough; the system should also, at the very least, be able to report its uncertainty
+
- Extremely valuable to have the algorithm reason about its own uncertainty and confidence in individual recommendations
+
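One way the guiding question above can be answered in practice is a confidence-aware prediction interface: report the confidence alongside each decision and abstain (deferring to a human) when it falls below a threshold. A minimal sketch, assuming class probabilities are already produced by some model (the interface and threshold here are illustrative, not from the lecture):

```python
# Minimal confidence-reporting wrapper around a model's class probabilities.
def classify_with_confidence(probs, threshold=0.8):
    """probs: dict mapping class label -> probability for one example.
    Returns (label, confidence), or (None, confidence) to abstain."""
    label = max(probs, key=probs.get)       # most probable class
    confidence = probs[label]
    if confidence < threshold:
        return None, confidence             # abstain: defer to a human decision maker
    return label, confidence

print(classify_with_confidence({"cat": 0.55, "dog": 0.45}))  # abstains
print(classify_with_confidence({"cat": 0.92, "dog": 0.08}))  # confident "cat"
```

Note that this only makes sense if the reported probabilities are calibrated; otherwise the threshold has no reliable meaning, which motivates the next slides.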
+
+
+
+
+ Interpretable and Accountable models
+ Requirements
+
+ - Interpretable models
+
decision trees, rule sets, or the glass-box layer of Usman Mahmood
+
+ - Well-calibrated models
+
- Confidence estimates specific to individual predictions, which may exhibit different confidences
+
- Fixed models available for inspection and analysis
+
+
+
+
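"Well-calibrated" has a standard quantitative reading: among predictions made with confidence p, about a fraction p should be correct. A common summary is the expected calibration error (ECE); the sketch below (toy data, illustrative binning) averages the gap between confidence and accuracy over equal-width confidence bins.

```python
# Expected calibration error (ECE): bin predictions by confidence, then take the
# weighted average of |accuracy - mean confidence| over the bins.
def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted confidence per example; correct: 1/0 per example."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)     # observed accuracy in bin
        conf = sum(confidences[i] for i in idx) / len(idx)  # mean confidence in bin
        ece += len(idx) / n * abs(acc - conf)
    return ece

# Perfectly calibrated toy data: 0.8-confidence predictions, right 80% of the time
print(expected_calibration_error([0.8] * 10, [1] * 8 + [0] * 2))  # ~0
```

A fixed, inspectable model makes this kind of post-hoc analysis possible: the same held-out data can be re-binned and re-checked at any time.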
+
On Calibration of Modern Neural Networks