AdaCBM: An Adaptive Concept Bottleneck Model for Explainable and Accurate Diagnosis, MICCAI 2024
Created by Townim Faisal Chowdhury, Vu Minh Hieu Phan, Kewen Liao, Minh-Son To, Yutong Xie, Anton van den Hengel, Johan W. Verjans, and Zhibin Liao
This research addresses limitations of Label-Free Concept Bottleneck Models (CBMs) for medical diagnosis by re-examining the CBM framework as a simple linear classification system. Our analysis shows that existing fine-tuning modules mainly rescale and shift the classification outcomes, underutilizing the system's learning capacity. We propose inserting an adaptive module between CLIP and the CBM to bridge the gap between the source and downstream domains, improving diagnostic performance in medical applications.
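For intuition, below is a minimal PyTorch sketch of this idea: a small trainable adapter sits between frozen CLIP image features and the concept bottleneck, whose concept scores feed a linear classifier. The class, argument, and dimension names (AdaptiveCBMSketch, clip_dim, n_concepts, n_classes) are illustrative assumptions, not the exact implementation in this repository.

```python
import torch
import torch.nn as nn


class AdaptiveCBMSketch(nn.Module):
    """Sketch of an adaptive module placed between CLIP features and a CBM."""

    def __init__(self, clip_dim=512, n_concepts=70, n_classes=7):
        super().__init__()
        # Trainable adapter that maps CLIP image features toward the downstream domain.
        self.adapter = nn.Linear(clip_dim, clip_dim)
        # Concept embeddings, e.g. obtained by encoding GPT-4 concept texts with CLIP (kept frozen here).
        self.register_buffer("concept_embeddings", torch.randn(n_concepts, clip_dim))
        # Linear classifier on top of the concept-score bottleneck.
        self.classifier = nn.Linear(n_concepts, n_classes)

    def forward(self, image_features):
        adapted = self.adapter(image_features)                  # adapt source features to the target domain
        concept_scores = adapted @ self.concept_embeddings.t()  # image-concept similarity scores (bottleneck)
        return self.classifier(concept_scores)                  # class logits


# Example: a batch of 4 CLIP image features of dimension 512.
logits = AdaptiveCBMSketch()(torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 7])
```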
We generate the concepts for each dataset using GPT-4. The GPT-4-generated concepts are available in the gpt_concepts folder.
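As a rough illustration of how such concept texts can be turned into CLIP text embeddings for the bottleneck, a sketch using the OpenAI CLIP package is shown below; the file name and JSON format are assumptions, so check the gpt_concepts folder for the actual layout.

```python
import json
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Hypothetical file: a JSON list of concept strings for one dataset.
with open("gpt_concepts/ham10000_concepts.json") as f:
    concepts = json.load(f)

with torch.no_grad():
    tokens = clip.tokenize(concepts).to(device)
    concept_embeddings = model.encode_text(tokens)  # (n_concepts, embed_dim)
    # L2-normalize so that dot products with image features act as cosine similarities.
    concept_embeddings = concept_embeddings / concept_embeddings.norm(dim=-1, keepdim=True)
```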
The code was tested on a single Nvidia RTX A6000 GPU with Python 3.9.13 on Ubuntu 22.04.4 LTS. To install the required packages, use the following command:
pip install -r requirements.txt
Following the structure of LaBo, the cfg folder contains the configuration files, while the datasets folder stores dataset-specific data, including images, splits, and concepts.
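An assumed layout, based only on the files and folders mentioned in this README (actual contents may differ):

```
AdaCBM/
├── cfg/              # configuration files (LaBo-style)
├── datasets/         # dataset-specific images, splits, and concepts
├── gpt_concepts/     # GPT-4 generated concepts per dataset
├── saved_models/     # e.g. pre-trained HAM10000 model (10 concepts per class)
├── train.sh          # training script
└── requirements.txt
```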
We provide the train.sh script to train the model. A model pre-trained with 10 concepts per class on the HAM10000 dataset is available in the saved_models folder.
This project uses code from the following repositories:
LaBo
CLIP: https://github.com/openai/CLIP
@inproceedings{adacbm,
title = {{AdaCBM}: An Adaptive Concept Bottleneck Model for Explainable and Accurate Diagnosis},
author = {Faisal Chowdhury, Townim and Liao, Kewen and Minh Hieu Phan, Vu and To, Minh-Son and Xie, Yutong and Hengel, Anton van den and W. Verjans, Johan and Liao, Zhibin},
year = 2024,
booktitle = {27th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI)}
}