
Explainable AI (XAI) Examples Repository

Overview

This repository offers practical examples and educational resources to help you understand Explainable AI (XAI). It includes Jupyter notebooks and Python scripts that demonstrate the use of various XAI frameworks, such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). The aim is to provide a hands-on approach to interpreting machine learning model predictions and to shed light on the decision-making processes of complex algorithms.

Contents

  • Notebooks/ - Contains Jupyter notebooks demonstrating the usage of LIME and SHAP for different types of data (tabular, text, and images).
  • Tools/ - Includes Python scripts and utilities that support the implementation of XAI techniques.

Getting Started

To explore the examples in this repository, follow these steps:

  1. Clone the repository:

    git clone https://github.com/Naviden/Introduction-to-XAI.git
  2. Install the required dependencies:

    pip install -r requirements.txt
  3. Navigate to the Notebooks directory and open the Jupyter notebooks:

    cd Notebooks/
    jupyter notebook

Contributing

Contributions are welcome! If you'd like to add new examples, enhance existing ones, or suggest additional XAI frameworks to include, please submit a pull request or open an issue.
