- Introduction
- Where to find the module?
- Train new models
- Experiments & Results
- Explainability
- Contribute
- License
VolumeAXI aims to develop interpretable deep learning models for the automated classification of impacted maxillary canines and assessment of dental root resorption in adjacent teeth using Cone-Beam Computed Tomography (CBCT). We propose a 3D Slicer module, called Volume Analysis, eXplainability and Interpretability (Volume-AXI), with the goal of providing users an explainable approach to the classification of bone and teeth structural defects in gray-level CBCT images. Visualization through Gradient-weighted Class Activation Mapping (Grad-CAM) has been integrated to generate explanations of the CNN predictions, enhancing interpretability and trustworthiness for clinical adoption.
The VolumeAXI model has been deployed in the open-source software 3D Slicer.
It is available in the Automated Dental Tools extension. Installation steps:
- Install the latest stable or nightly version of 3D Slicer (the module is available from version 5.6.2).
- Use the Extension Manager to search for Automated Dental Tools (see How to install extensions in 3D Slicer).
- Restart the software when requested.
Bravo! You can now find it in the Module Selection toolbar.
Python version 3.12.2
Main packages and their versions (a YAML file is available to recreate the environment with Conda):
pytorch-lightning==1.9.5
torch==2.2.2
torchaudio==2.2.2
torchmetrics==1.3.2
torchvision==0.17.2
numpy==1.26.4
nibabel==5.2.1
matplotlib==3.8.3
scikit-learn==1.4.2
simpleitk==2.3.1
To train a new model, you need a CSV file with the paths to the images and a column with the labels.
There are several network options. Two of them, 'CV_2pred' and 'CV_2fclayer', require two label columns. In the folder 'Preprocess', you will find most of the scripts that were used to preprocess the data.
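For illustration, a minimal input CSV might look like the following (the paths and label values are hypothetical; the Path and Label headers match the --img_column and --class_column arguments used below):

```csv
Path,Label
/data/scans/patient001.nii.gz,0
/data/scans/patient002.nii.gz,2
/data/scans/patient003.nii.gz,1
```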
Our method follows this pipeline:
- Apply the mask to the CBCT
python3 create_CBCTmask.py --dir Path/To/The/Scans/Folder-or-File --mask Path/to/MasksToApply --label 1 2 3 --output Output/path
You can add --dilatation_radius if you need to dilate the mask with a box-shaped structuring element.
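As a rough sketch of what this step does (assuming NIfTI inputs readable by SimpleITK; the file names, label set, and radius are illustrative, and create_CBCTmask.py remains the reference implementation):

```python
import SimpleITK as sitk

scan = sitk.ReadImage("scan.nii.gz")
seg = sitk.ReadImage("mask.nii.gz")

# Keep only the labels of interest (here 1, 2 and 3) as a binary mask.
binary = sitk.BinaryThreshold(seg, lowerThreshold=1, upperThreshold=3,
                              insideValue=1, outsideValue=0)

# Optional box-shaped dilation, mirroring --dilatation_radius.
binary = sitk.BinaryDilate(binary, kernelRadius=(5, 5, 5), kernelType=sitk.sitkBox)

# Zero out every voxel outside the (dilated) mask and save the result.
masked = sitk.Mask(scan, binary, outsideValue=0)
sitk.WriteImage(masked, "scan_masked.nii.gz")
```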
- Resample to the desired size/spacing
python3 resample.py --dir --size 224 224 224 --spacing 0.3 0.3 0.3 --out Output/path
Transformation parameters (a resampling sketch follows the list):
--linear True/False to use linear interpolation
--center True/False to center the image in the space
--fit_spacing True/False to recompute the spacing according to the new image size
--iso_spacing True/False to keep the same spacing for all the images
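A minimal resampling sketch with SimpleITK, assuming linear interpolation and ignoring the centering and spacing-fitting options, which resample.py handles:

```python
import SimpleITK as sitk

img = sitk.ReadImage("scan_masked.nii.gz")
new_size = (224, 224, 224)       # --size
new_spacing = (0.3, 0.3, 0.3)    # --spacing

resampled = sitk.Resample(
    img,
    new_size,
    sitk.Transform(),            # identity transform
    sitk.sitkLinear,             # --linear True
    img.GetOrigin(),
    new_spacing,
    img.GetDirection(),
    0,                           # value for voxels outside the input
    img.GetPixelID(),
)
sitk.WriteImage(resampled, "scan_resampled.nii.gz")
```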
- Create the CSV file
python3 create_CSV_input.py --input_folder --output Path/to/The/CSV/File --label_file Path/to/xlsx-or-csv --patient_column <column with the names to match the files names> --label_column Label
In our case, the file names contained side markers that we matched using --words_list ['_L','_R'] and --side.
- Change the label IDs, or simply count the number of samples per label.
python3 dataset_info.py --input Path/CSV/File --class_column Label
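In spirit, the counting part of this step boils down to the following (file and column names taken from the examples above):

```python
import pandas as pd

df = pd.read_csv("dataset.csv")
print(df["Label"].value_counts().sort_index())  # number of samples per label
```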
- Split the dataset into training and testing (optional)
python3 split_dataset.py --input Csv/File --out_dir Path/to/folder --test_size 0.2 --val_size 0.15 --class_column Label
Two options to split the dataset are available with --split_option <'TT' or 'TTV'> (see the sketch after this list):
- 'TT' stands for Train/Test, if you only need a split between those two sets (used for our training).
- 'TTV' splits into training, testing, and validation sets; it is simply another option, unused here.
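A sketch of a stratified 'TT' split, with the extra 'TTV' validation split shown as well (split_dataset.py is the reference implementation; the random seed is illustrative):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("dataset.csv")

# 'TT': train/test only, stratified on the label column.
train_df, test_df = train_test_split(
    df, test_size=0.2, stratify=df["Label"], random_state=42
)

# 'TTV': additionally carve a validation set out of the training set.
train_df, val_df = train_test_split(
    train_df, test_size=0.15, stratify=train_df["Label"], random_state=42
)
```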
There are three modes to train a model.

| Mode | Description |
|---|---|
| 'CV' | CSV file has a single label column. Uses MONAI architectures. |
| 'CV_2pred' | CSV file has two label columns (one for each side, Left and Right). Uses MONAI architectures. |
| 'CV_2fclayer' | CSV file has two label columns. Uses MONAI architectures + two fully connected layers (one for each side, Left and Right). |
For 'CV_2pred' and 'CV_2fclayer', whichever column you pass to the --class_column parameter will be used to split the dataset during cross-validation.
Some MONAI architectures are already implemented (base_encoder): DenseNet, DenseNet169, DenseNet201, DenseNet264, SEResNet50, ResNet18, and EfficientNetBN.
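For reference, instantiating one of these encoders for 3D single-channel CBCT volumes looks like this (the class count and input size are the ones used in the command below):

```python
import torch
from monai.networks.nets import DenseNet201

model = DenseNet201(spatial_dims=3, in_channels=1, out_channels=8)
logits = model(torch.randn(1, 1, 224, 224, 224))  # (batch, channel, D, H, W)
print(logits.shape)                               # torch.Size([1, 8])
```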
python3 classification_train_v2.py --csv <csv file path> --img_column Path --class_column Label --nb_classes 8
--base_encoder DenseNet201 --lr 1e-4 --epochs 400 --out <output folder> --patience 50 --img_size 224 --mode CV_2pred
Cross-Validation parameters:
--split 5
--test_size 0.15
--val_size 0.15
Optional parameter:
--csv_special path/to/specialDataset to add a dataset that needs different transformations during training
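The cross-validation itself is handled inside the training script; conceptually, --split 5 corresponds to a stratified K-fold such as the following (the file name is hypothetical):

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold

df = pd.read_csv("train.csv")
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)  # --split 5
for fold, (train_idx, val_idx) in enumerate(skf.split(df, df["Label"])):
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val samples")
```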
python3 classification_predict.py --csv <csv file to predict> --csv_train <csv training file>
--img_column Path --class_column Label --out <output directory> --pred_column Predictions --base_encoder DenseNet201 --mode 'CV'
Parameters when there are two label columns:
--nb_classes 8
--class_column1 Label_R
--class_column2 Label_L
--diff ['_R','_L'] to create the prediction columns with the same differentiators as initially used.
If you choose the mode 'CV_2pred', the per-class AUC is computed at this step.
python3 classification_eval_VAXI.py --csv <csv prediction file> --csv_true_column Label --csv_prediction_column Predictions
--out <path to the plot file> --mode CV
Parameters when there are two label columns:
--csv_true_column and --csv_prediction_column must be the common stem of the column names.
For example, if the CSV has Label_R, Label_L, Predictions_R, and Predictions_L, the common stems are Label and Predictions.
--diff ['_R','_L']
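The kind of metrics produced at this step can be reproduced with scikit-learn, assuming a prediction CSV following the column naming above:

```python
import pandas as pd
from sklearn.metrics import classification_report, confusion_matrix

df = pd.read_csv("predictions.csv")
for diff in ["_R", "_L"]:  # single-label mode: use diff = ""
    y_true = df["Label" + diff]
    y_pred = df["Predictions" + diff]
    print(classification_report(y_true, y_pred))
    print(confusion_matrix(y_true, y_pred, normalize="true"))  # row-normalized
```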
To use the Grad-CAM script, you need to know the layer names of your model. You can use retrieve_modelsData.py to do so.
The script must be run once per class by changing --class_index, and if you have two prediction columns, it must be run for each of them.
The output includes the grey-level scan and a grey-level heatmap. To see the results, use 3D Slicer to superimpose both images and change the color of the heatmap to ColdToHotRainbow (see How to Overlay).
python3 gradcam3D_monai.py --csv_test <path to the prediction file> --img_column Path --class_column Label_R --pred_column Predictions_R
--model_path <path to the .ckpt file> --out <output directory> --img_size 224 --nb_class 8 --class_index 1 --base_encoder DenseNet201
--layer_name model.features.denseblock4
If your model has two fully connected layers, you need to specify --side,
because MONAI's GradCAM does not work with multiple-output models.
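Internally, the script relies on MONAI's GradCAM class; a minimal sketch, assuming the bare DenseNet201 from above (when the network is wrapped, as in the training scripts, the layer name becomes model.features.denseblock4, matching --layer_name):

```python
import torch
from monai.visualize import GradCAM

model.eval()
cam = GradCAM(nn_module=model, target_layers="features.denseblock4")
volume = torch.randn(1, 1, 224, 224, 224)  # one preprocessed scan
heatmap = cam(x=volume, class_idx=1)       # --class_index 1
print(heatmap.shape)                       # upsampled to the input's spatial size
```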
Different architectures have been tested. The mode 'CV_2fclayer' with DenseNet201 gives the best Grad-CAM maps and metrics so far.
Fig1: DenseNet201 architecture with the two fully connected layers
Fig2: DenseNet201 architecture
- Classes: Non-impacted, Buccal, Bicortical, and Palatal
- Accuracy: 78%
- Weighted average F1-score: 77%
Fig3: Normalized confusion matrix on the external testing fold
Fig4: Heatmaps overlaid in 3D Slicer
- Load your grey-level CBCT scan and the grey-level heatmap. First, you need to change the color of the heatmap:
- Select the Volumes module.
- Select the heatmap in Active Volume and the ColdToHotRainbow option in the Lookup Table (see image below).
Time to superimpose!
- Click on the pins and then the small arrows in one of the views (Axial, Coronal, or Sagittal).
- Synchronize all views by clicking on the link icon.
- Select the scan file (named _original) and the heatmap, and adjust the opacity of one over the other (see image below).
Now, enjoy the visualization :)
We welcome community contributions to VolumeAXI. If you would like to enhance this tool, please follow the steps below:
- Fork the repository.
- Create your feature branch (git checkout -b feature/YourFeature).
- Commit your changes (git commit -am 'Add some feature').
- Push to the branch (git push origin feature/YourFeature).
- Open a pull request.
For a comprehensive understanding of our contribution process, consult our Contribution Guidelines.
VolumeAXI is released under the Apache 2.0 License.
VolumeAXI Team: For further details, inquiries, or suggestions, feel free to contact us.