Public-facing deeplift repo
PyTorch implementation of various neural network interpretability methods
SyReNN: Symbolic Representations for Neural Networks
Introduces and experiments with methods for interpreting and evaluating image models (PyTorch)
A small repository to test Captum Explainable AI with a trained Flair transformers-based text classifier.
Integrated Gradients attribution method implemented in PyTorch (see the sketch after this list)
PyTorch implementation of 'Vanilla' Gradient, Grad-CAM, Guided Backpropagation, Integrated Gradients, and their SmoothGrad variants.
A simple PyTorch implementation of Expected Gradients and Integrated Gradients
Attribution methods that explain image classification models, implemented in PyTorch with support for batched inputs and GPU execution.
Code and data for the ACL 2023 NLReasoning Workshop paper "Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods" (Feldhus et al., 2023)
Neural network visualization tool, with optional model compression via parameter pruning: (integrated) gradients, guided/visual backpropagation, and activation maps for the cao model on the IndianPines dataset
Scripts to reproduce the results in the following manuscript: Perez, I., Skalski, P., Barns-Graham, A., Wong, J. and Sutton, D. (2022) Attribution of Predictive Uncertainties in Classification Models, 38th Conference on Uncertainty in Artificial Intelligence (UAI), Eindhoven, Netherlands, 2022.
Suite of methods that create attribution maps from image classification models.
Reproducible code for our paper "Explainable Learning with Gaussian Processes"
Implementation of two XAI methods that visualize the regions a network relies on to make a prediction
Source code for the IJCKG2021 paper "Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction".
Integrated Gradients implemented in PyTorch.
Code for the paper "Balancing Privacy and Explainability in Federated Learning"
Exercise on interpretability with Integrated Gradients.
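The recurring method in this list, Integrated Gradients, attributes a prediction to input features by averaging the model's gradients along a straight-line path from a baseline to the input, then scaling by the input-baseline difference. As a point of reference, here is a minimal PyTorch sketch, not the implementation of any repo above; `model`, `x`, `baseline`, and `target` are hypothetical stand-ins, and `model` is assumed to map a batch of inputs to class logits:

```python
import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    """Riemann-sum approximation of Integrated Gradients along the
    straight-line path from `baseline` to the (unbatched) input `x`."""
    # Interpolation coefficients in (0, 1], shaped to broadcast over x.
    alphas = torch.linspace(0.0, 1.0, steps + 1)[1:].view(-1, *([1] * x.dim()))
    # All points along the path, treated as one batch.
    path = baseline.unsqueeze(0) + alphas * (x - baseline).unsqueeze(0)
    path.requires_grad_(True)
    # One backward pass gives the target-logit gradient at every path point.
    model(path)[:, target].sum().backward()
    avg_grad = path.grad.mean(dim=0)
    # Completeness: attributions sum approximately to f(x) - f(baseline).
    return (x - baseline) * avg_grad

# Hypothetical usage: attribute class 3 for one image with a zero baseline,
# assuming `model` is in eval mode and `image` has shape (C, H, W):
# attr = integrated_gradients(model, image, torch.zeros_like(image), target=3)
```

Repositories above that build on Captum call its bundled implementation instead of rolling their own; a typical usage, again with hypothetical `model` and `inputs`, looks like:

```python
from captum.attr import IntegratedGradients

ig = IntegratedGradients(model)  # model: callable mapping a batch to logits
# `inputs` must be batched; `baselines` defaults to zeros when omitted.
attributions = ig.attribute(inputs, target=3, n_steps=50)
```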