DataSystemsGroupUT/explainability-comparison-for-images
A Comparative Evaluation of Explainability Techniques for Image Data

This repository includes the Jupyter notebooks used to evaluate different explainability techniques across five metrics representing different aspects of quality: fidelity, stability, identity, separability, and time.
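As a hedged illustration of one of these metrics, the identity property requires that identical inputs receive identical explanations. A minimal NumPy sketch (the function and the toy explainer are illustrative, not taken from the notebooks):

```python
import numpy as np

def identity_score(explainer, images, atol=1e-6):
    """Fraction of images whose explanation is reproduced
    when the same image is explained twice (identity metric sketch)."""
    hits = 0
    for img in images:
        e1 = explainer(img)
        e2 = explainer(img)
        hits += np.allclose(e1, e2, atol=atol)
    return hits / len(images)

# Toy deterministic stand-in for a real saliency technique.
toy_explainer = lambda img: np.abs(img - img.mean())

images = [np.random.rand(8, 8) for _ in range(5)]
print(identity_score(toy_explainer, images))  # deterministic explainer -> 1.0
```

A deterministic explainer scores 1.0 here; techniques with stochastic components (e.g. LIME's random sampling) can score lower unless seeded.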

The techniques covered are LIME (repo), SHAP (repo), GradCAM and GradCAM++ (repo), IntGrad and SmoothGrad (repo).

Three common benchmarking datasets were used in the experiments: CIFAR10, SVHN and Imagenette. All datasets were sourced from PyTorch.

For each dataset, three models with different architectures were used: VGG16BN, ResNet50, and Densenet151. Pretrained models for CIFAR10 and SVHN were sourced from the detectors library, while models for Imagenette were sourced from PyTorch.

All experiments were originally performed in a single notebook on Google Colab, using a T4 instance to ensure access to CUDA. Because the output visualisations are quite large, the original notebook was split for GitHub. Each split contains the same definitions for initializing datasets and models, for the metrics, and for the explainer adapters, and can be run independently, including in Google Colab. The contents of each file are explained below:
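The explainer adapters mentioned above can be pictured as a thin common interface that normalises each technique's output so the metrics can compare them directly. A minimal sketch; the class and method names are illustrative, not taken from the notebooks:

```python
import numpy as np

class ExplainerAdapter:
    """Wrap any explanation technique behind one interface
    (sketch; names are illustrative, not the repo's API)."""
    def __init__(self, name, explain_fn):
        self.name = name
        self._explain = explain_fn

    def explain(self, image):
        # Normalise every technique's output to a [0, 1] saliency map
        # so fidelity/stability/identity metrics see a common format.
        attr = np.asarray(self._explain(image), dtype=float)
        rng = attr.max() - attr.min()
        return (attr - attr.min()) / rng if rng else np.zeros_like(attr)

# Stand-in for a real technique (e.g. a gradient-based saliency).
adapter = ExplainerAdapter("toy", lambda img: img ** 2)
saliency = adapter.explain(np.linspace(0.0, 1.0, 16).reshape(4, 4))
print(saliency.min(), saliency.max())  # 0.0 1.0
```

With such a wrapper, the same metric code can loop over LIME, SHAP, GradCAM, and the gradient-based methods without per-technique special cases.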
