Deep Probabilistic Scaling (DPS) is an uncertainty quantification tool for the control of misclassification error in (binary) neural classification. The algorithm relies on probabilistic scaling, an order-statistics technique for non-parametric inference that yields confidence bounds on the predictions.
DPS is a direct application of Scalable Classification to convolutional neural networks for (binary) classification: the network classifier is made scalable by shifting its output with a scalar parameter $\theta$,

$$f_\theta(x) = \hat{f}(x) - \theta,$$

where $\hat{f}(x)$ is the predictor function of the network. This framework allows the definition of a special region of the input space, the probabilistic safety region, in which the probability of misclassification is bounded by a user-chosen level $\varepsilon$ with high confidence.
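As a rough illustration of the probabilistic scaling step (a sketch, not the repository's exact implementation), the snippet below chooses the threshold $\theta$ as an order statistic of held-out calibration scores; the function name, the binomial-tail criterion, and the choice of which scores are calibrated on are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import binom

def probabilistic_scaling_threshold(cal_scores, eps, delta):
    """Choose theta from i.i.d. calibration scores so that, with
    confidence at least 1 - delta over the calibration draw, a new
    score exceeds theta with probability at most eps.

    Order-statistics argument: if theta is set by discarding the r
    largest of n scores, the violation probability exceeds eps with
    probability B(r; n, eps) (binomial CDF), so we take the largest
    r for which B(r; n, eps) <= delta.
    """
    n = len(cal_scores)
    # Largest discard count r with binomial tail <= delta
    # (r = -1 means even the sample maximum gives no guarantee).
    r = -1
    while r + 1 < n and binom.cdf(r + 1, n, eps) <= delta:
        r += 1
    if r < 0:
        raise ValueError("not enough calibration samples for (eps, delta)")
    sorted_scores = np.sort(cal_scores)
    return sorted_scores[n - r - 1]  # the (n - r)-th order statistic

# Example: network scores on 1000 held-out calibration points; bound the
# chance that a new such point falls outside {x : f_hat(x) <= theta}.
theta = probabilistic_scaling_threshold(np.random.randn(1000), eps=0.05, delta=1e-3)
```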
This repository contains the code for the experiments validating the DPS algorithm.
We considered six benchmark datasets, on each of which we defined a binary classification problem.
As an example, the MNIST data are also available in the `data` folder in `.npy` format.
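These arrays can be loaded directly with NumPy; the file names below are hypothetical, so check the `data` folder for the actual ones:

```python
import numpy as np

# Hypothetical file names; inspect the data/ folder for the real ones.
X_train = np.load("data/mnist_X_train.npy")
y_train = np.load("data/mnist_y_train.npy")
print(X_train.shape, y_train.shape)
```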
The following notebooks are available:

- `get_pneumoniaMNIST_data.ipynb`: shows how to download and save the data for pneumoniaMNIST using the `medmnist` Python library (see the download sketch after this list)
- `DeepSC_NNtraining.ipynb`: contains the training of the convolutional models (3-layer CNNs) used for DPS; the trained models are also shared in the `models` folder (a minimal architecture sketch follows the list)
- `DeepSC_ProbScaling.ipynb`: main code implementing DPS
- `EvaluationMetricsPlot.ipynb`: computes the evaluation metrics and plots the results for all the considered datasets
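For reference, a minimal sketch of the download step performed in `get_pneumoniaMNIST_data.ipynb`, assuming the standard `medmnist` API; the output file names are hypothetical:

```python
import numpy as np
from medmnist import PneumoniaMNIST

# Download the official train split (binary labels: normal vs. pneumonia).
train = PneumoniaMNIST(split="train", download=True)

# Save images and labels as .npy arrays (hypothetical file names).
np.save("data/pneumoniaMNIST_X_train.npy", train.imgs)
np.save("data/pneumoniaMNIST_y_train.npy", train.labels)
```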
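A 3-layer CNN for binary classification could look like the Keras sketch below; the filter counts, kernel sizes, and optimizer are assumptions of this sketch, not the repository's exact settings (see `DeepSC_NNtraining.ipynb` for those):

```python
import tensorflow as tf

# Minimal 3-layer CNN for 28x28 grayscale inputs with a single
# sigmoid output for binary classification.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```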
DPS was implemented for a research paper presented at COPA 2024, building on the concept paper:

Carlevaro, Alberto, et al. "Probabilistic Safety Regions via Finite Families of Scalable Classifiers." arXiv preprint arXiv:2309.04627 (2023).

The full paper will be available in the conference proceedings.