
Excel file used in the published SLR paper: Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review


SafwanAlselwi/LIME_SLR



Abstract

Background: The interpretability and explainability of machine learning (ML) and artificial intelligence (AI) systems are critical for generating trust in their outcomes in fields such as medicine and healthcare. Errors generated by these systems, such as inaccurate diagnoses or treatments, can have serious and even life-threatening consequences for patients. Explainable Artificial Intelligence (XAI) is an increasingly significant area of research that addresses the black-box nature of sophisticated, difficult-to-interpret ML algorithms. XAI techniques such as Local Interpretable Model-Agnostic Explanations (LIME) can provide explanations for these models, raising confidence in the systems and improving trust in their predictions. Numerous published works address medical problems using ML models in conjunction with XAI algorithms to provide interpretability and explainability. The primary objective of this study is to evaluate the performance of newly emerging LIME techniques within healthcare domains that require more attention in XAI research.

Method: A systematic search across six databases (Scopus, Web of Science, IEEE Xplore, ScienceDirect, MDPI, and PubMed) identified 1614 peer-reviewed articles published between 2019 and 2023.

Results: 52 articles were selected for detailed analysis. The analysis showed a growing trend in the application of LIME techniques in healthcare, with significant improvements in the interpretability of ML models used for diagnostic and prognostic purposes.

Conclusion: The findings suggest that the integration of XAI techniques, particularly LIME, enhances the transparency and trustworthiness of AI systems in healthcare, thereby potentially improving patient outcomes and fostering greater acceptance of AI-driven solutions among medical professionals.

Figures from the paper:

- Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)
- Steps involved in mapping the literature study
- Commonly used machine learning/deep learning techniques in the context of LIME interpretability
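LIME, as reviewed in the paper, explains a single prediction by fitting a simple interpretable surrogate model to the black box in the neighbourhood of one instance. The sketch below is a minimal, self-contained illustration of that core idea on tabular data, with a hypothetical `predict_fn` standing in for a trained model; it is not the implementation used in any of the reviewed studies (imaging work typically uses the `lime` package's image explainer over superpixel perturbations).

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_fn(X):
    # Hypothetical black-box classifier: probability increases with
    # feature 0 and decreases with feature 1.
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - X[:, 1])))

def lime_explain(x, predict_fn, n_samples=1000, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around instance x."""
    d = x.shape[0]
    # 1. Perturb the instance of interest with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))
    # 2. Query the black box on the perturbed samples.
    y = predict_fn(Z)
    # 3. Weight samples by proximity to x (exponential kernel).
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 4. Fit a weighted least-squares linear model; its coefficients
    #    are the local explanation.
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[:d]  # per-feature local importances

x = np.array([0.0, 0.0])
importances = lime_explain(x, predict_fn)
print(importances)  # feature 0 pulls the prediction up, feature 1 down
```

Near `x`, the surrogate recovers the sign and relative magnitude of each feature's local effect, which is exactly the kind of per-prediction attribution the reviewed studies overlay on medical images.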

Please consider citing our work:

Hassan, Shahab Ul, Said Jadid Abdulkadir, M. Soperi Mohd Zahid, and Safwan Mahmood Al-Selwi. "Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review." Computers in Biology and Medicine 185 (2025): 109569.

BibTeX

@article{Hassan2025LimeSLR,
  title    = {Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review},
  journal  = {Computers in Biology and Medicine},
  volume   = {185},
  pages    = {109569},
  year     = {2025},
  issn     = {0010-4825},
  doi      = {10.1016/j.compbiomed.2024.109569},
  author   = {Shahab Ul Hassan and Said Jadid Abdulkadir and M. Soperi Mohd Zahid and Safwan Mahmood Al-Selwi},
  keywords = {XAI, Explainability, Machine learning, LIME, Medical health, Classification, Systematic literature review}
}

Data Availability
The Microsoft Excel file utilized for this systematic literature review will be made publicly accessible in this repository soon.

Thank you
