This repository contains the official implementation for the article "GraphXAIN: Narratives to Explain Graph Neural Networks". Our method integrates Graph Neural Networks (GNNs), graph explainers, and Large Language Models (LLMs) to generate GraphXAINs — explainable AI (XAI) narratives that provide enhanced interpretability of GNN predictions.
Graph Neural Networks (GNNs) are a powerful technique for machine learning on graph-structured data, yet they pose interpretability challenges, especially for non-expert users. Existing GNN explanation methods often yield technical outputs such as subgraphs and feature importance scores, which are not easily understood. Building on recent insights from social science and other Explainable AI (XAI) methods, we propose GraphXAIN, a natural language narrative that explains individual predictions made by GNNs. We present a model-agnostic and explainer-agnostic XAI approach that complements graph explainers by generating GraphXAINs—coherent narratives explaining the GNN's prediction process—using Large Language Models (LLMs) and integrating graph data, individual predictions from GNNs, explanatory subgraphs, and feature importances. We define XAI Narratives and XAI Descriptions, highlighting their distinctions and emphasizing the importance of narrative principles in effective explanations. By incorporating natural language narratives, our approach supports graph practitioners and non-expert users, aligning with social science research on explainability and enhancing user understanding and trust in complex GNN models. We demonstrate GraphXAIN's capabilities on a real-world graph dataset, illustrating how its generated narratives can aid understanding compared to traditional graph explainer outputs or other descriptive explanation methods.
To generate GraphXAINs for a given GNN model:
- Prepare Data: Ensure the input graph is available, either as ready-to-use graph data or as an adjacency matrix together with a feature matrix.
- Run the Graph Explainer: Use the `notebooks/GraphXAIN_tutorial.ipynb` notebook to extract subgraphs and feature importance values.
- Generate GraphXAINs: Use the `notebooks/GraphXAIN_tutorial.ipynb` notebook to generate GraphXAINs based on the extracted data.
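The pipeline above can be sketched in a few lines. This is a minimal illustration, not the repository's actual code: the function name, the placeholder node, subgraph edges, and feature-importance values are all hypothetical, and the resulting prompt would be sent to an LLM of your choice to produce the narrative.

```python
# Hypothetical sketch: turning a graph explainer's outputs (explanatory
# subgraph + feature importances) into an LLM prompt for a GraphXAIN-style
# narrative. All inputs below are illustrative placeholders.

def build_graphxain_prompt(node_id, prediction, subgraph_edges, feature_importances):
    """Format GNN prediction and explainer outputs as a natural-language prompt."""
    edges = "; ".join(f"{u} -> {v}" for u, v in subgraph_edges)
    feats = "; ".join(
        f"{name}: {score:.2f}"
        for name, score in sorted(
            feature_importances.items(), key=lambda kv: -kv[1]
        )
    )
    return (
        f"The GNN predicted class '{prediction}' for node {node_id}.\n"
        f"Influential subgraph edges: {edges}.\n"
        f"Feature importances: {feats}.\n"
        "Write a coherent narrative explaining this prediction "
        "for a non-expert reader."
    )

# Example with placeholder explainer outputs; in practice these come from
# the graph explainer run in the tutorial notebook.
prompt = build_graphxain_prompt(
    node_id=42,
    prediction="high scorer",
    subgraph_edges=[(42, 7), (7, 13)],
    feature_importances={"age": 0.61, "position": 0.25},
)
print(prompt)
```

The prompt string would then be passed to an LLM API call to generate the final GraphXAIN narrative.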
- `dataset/`: Contains sample datasets used in the paper.
- `explanations/`: Contains outputs from the graph explainer.
- `images/`: Contains images used in the publication.
- `notebooks/`: Jupyter notebook for generating GraphXAINs.
- `utils/`: Contains `model.py` with the GNN model and `utils.py` with utility functions.
If you find this work useful, please cite our paper:
```bibtex
@article{cedro2024graphxain,
  title={GraphXAIN: Narratives to Explain Graph Neural Networks},
  author={Cedro, Mateusz and Martens, David},
  journal={arXiv preprint arXiv:2411.02540},
  year={2024}
}
```
This project is licensed under the MIT License.
For questions or collaborations, feel free to contact:
- Mateusz Cedro: mateusz.cedro@uantwerpen.be
- Affiliation: University of Antwerp, Belgium
We appreciate any feedback or contributions to the project!