
Commit d19b869

Merge pull request #46 from ivadomed/plb/cleaning_repo
Cleaning repo with new SCT command and removed the previous inference script
2 parents 1a24a70 + e1f5bdf commit d19b869

10 files changed (+70 -229 lines)

README.md (+21 -47)
@@ -13,7 +13,7 @@ Publication linked to this model: see [CITATION.cff](./CITATION.cff)
 
 ## Project description
 
-In this project, we trained a 3D nnU-Net for spinal cord white and grey matter segmentation. The data contains 22 mice with different number of chunks, for a total of 72 MRI 3D images. Each MRI image is T2-weighted, has a size of 200x200x500, with the following resolution: 0.05x0.05x0.05 mm.
+In this project, we trained a 3D nnU-Net for spinal cord white and grey matter segmentation. The data contains 22 mice with different numbers of chunks, for a total of 72 3D MRI images. Each MRI image is T1-weighted, has a size of 200x200x500 voxels and a resolution of 0.05x0.05x0.05 mm.
 
 <details>
 <summary>Expand this for more information on how we trained the model</summary>
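A quick check of the geometry stated in the changed line: 200x200x500 voxels at 0.05 mm isotropic corresponds to a 10 x 10 x 25 mm field of view. The one-liner below is ours, just the arithmetic:

```shell
# Physical field of view implied by the matrix size and resolution in the diff:
# 200 x 200 x 500 voxels at 0.05 x 0.05 x 0.05 mm isotropic.
awk 'BEGIN { printf "FOV: %g x %g x %g mm\n", 200*0.05, 200*0.05, 500*0.05 }'
# prints: FOV: 10 x 10 x 25 mm
```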
@@ -29,63 +29,37 @@ For the packaging we decided to keep only fold 4 as it has the best dice score a
 
 </details>
 
-For information on how to retrain the same model, refer to this file [README_training_model.md](https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1/blob/main/utils/README_training_model.md).
+For information on how to retrain the same model, refer to this file [README.md](./utils/README.md).
 
-If you wish to try the model on your own data, follow the instructions at [Installation](#installation) and [Perform predictions](#perform-predictions).
+## How to use the model
 
-## Installation
+This is the recommended method to use our model.
 
-This section explains how to install and use the model on new images.
+### Install dependencies
 
-Clone the repository:
-~~~
-git clone https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1.git
-cd model_seg_mouse-sc_wm-gm_t1
-~~~
+- [Spinal Cord Toolbox (SCT) v6.2](https://github.com/spinalcordtoolbox/spinalcordtoolbox/releases/tag/6.2) or higher -- follow the installation instructions [here](https://github.com/spinalcordtoolbox/spinalcordtoolbox?tab=readme-ov-file#installation)
+- [conda](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)
+- Python
 
-We recommend to use a virtual environment with python 3.9 to use nnUNet:
-~~~
-conda create -n venv_nnunet python=3.9
-~~~
+Once the dependencies are installed, download the latest model:
 
-We activate the environment:
-~~~
-conda activate venv_nnunet
-~~~
+```bash
+sct_deepseg -install-task seg_mouse_gm_wm_t1w
+```
 
-Then install the required libraries:
-~~~
-pip install -r requirements.txt
-~~~
+### Getting the WM and GM segmentation
 
-## Perform predictions
+To segment a single image, run the following command:
 
-To run an inference and obtain a segmentation, we advise using the following method (refer to [utils/README_training_model.md](https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1/blob/main/utils/README_training_model.md) for alternatives).
+```bash
+sct_deepseg -i <INPUT> -o <OUTPUT> -task seg_mouse_gm_wm_t1w
+```
 
-Download the [model.zip](https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1/releases/tag/v0.3) from the release and unzip it.
+For example:
 
-To perform predictions on a Nifti image (".nii.gz" or ".nii")
-~~~
-python test.py --path-image /path/to/image --path-out /path/to/output --path-model /path/to/nnUNetTrainer__nnUNetPlans__3d_fullres
-~~~
-
-> [!NOTE]
-> The `nnUNetTrainer__nnUNetPlans__3d_fullres` folder is inside the `Dataset500_zurich_mouse` folder. <br>
-> To use GPU, add the flag `--use-gpu` in the previous command.<br>
-> To use mirroring (test-time) augmentation, add flag `--use-mirroring`. NOTE: Inference takes a long time when this is enabled. Default: False.<br>
-> To speed up inference, add flag `--step-size XX` with X being a value above 0.5 and below 1 (0.9 is advised).<br>
-> If inference fails : refer to the following [issue 44](https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1/issues/44) for image pre-processing.
-
-## Apply post-processing
-
-nnU-Net v2 comes with the possiblity of performing post-processing on the segmentation images. This was not included in the run inference script as it doesn't bring notable change to the result. To run post-processing run the following script.
-
-~~~
-CUDA_VISIBLE_DEVICES=XX nnUNetv2_apply_postprocessing -i /seg/folder -o /output/folder -pp_pkl_file /path/to/postprocessing.pkl -np 8 -plans_json /path/to/post-processing/plans.json
-~~~
-> [!NOTE]
-> The file `postprocessing.pkl` is stored in `Dataset500_zurich_mouse/nnUNetTrainer__nnUNetPlans__3d_fullres/crossval_results_folds_0_1_2_3_4/postprocessing.pkl`.<br>
-> The file `plans.json` is stored in `Dataset500_zurich_mouse/nnUNetTrainer__nnUNetPlans__3d_fullres/crossval_results_folds_0_1_2_3_4/plans.json`.
+```bash
+sct_deepseg -i sub-001_T2w.nii.gz -o sub-001_T2w_wm-gm-seg.nii.gz -task seg_mouse_gm_wm_t1w
+```
 
 ## Notes
 
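The single-image command added to the README extends naturally to a folder of scans. A small sketch (the `seg_cmd` helper, the `data/` folder, and the filenames are ours, not part of the repo; it only prints each `sct_deepseg` call so the commands can be inspected before running them for real):

```shell
# Build the sct_deepseg command for one image, following the naming used in
# the README example (input.nii.gz -> input_wm-gm-seg.nii.gz).
seg_cmd() {
    local img="$1"
    local out="${img%.nii.gz}_wm-gm-seg.nii.gz"
    printf 'sct_deepseg -i %s -o %s -task seg_mouse_gm_wm_t1w\n' "$img" "$out"
}

# Print (or pipe to `sh` to actually run) the command for every image in data/.
for img in data/*.nii.gz; do
    [ -e "$img" ] || continue   # skip if the glob matched nothing
    seg_cmd "$img"
done
```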

test.py (-176)

This file was deleted.

utils/README_training_model.md → training_scripts/README.md (+49 -6)
@@ -1,6 +1,36 @@
 # Training of a nnUNet model for SC WM and GM segmentation
 
-First, you need to perform the installation instructions from the [README.md](https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1/blob/main/README.md).
+Here, we detail all the steps necessary to train and use an nnUNet model for the segmentation of mouse SC WM and GM.
+The steps detail how to:
+- set up the environment
+- preprocess the data
+- train the model
+- perform inference
+
+## Installation
+
+This section explains how to set up the environment needed to train the model.
+
+Clone the repository:
+~~~
+git clone https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1.git
+cd model_seg_mouse-sc_wm-gm_t1
+~~~
+
+We recommend using a virtual environment with Python 3.9 to use nnUNet:
+~~~
+conda create -n venv_nnunet python=3.9
+~~~
+
+Activate the environment:
+~~~
+conda activate venv_nnunet
+~~~
+
+Then install the required libraries:
+~~~
+pip install -r utils/requirements.txt
+~~~
 
 ## Data
 

@@ -55,7 +85,7 @@ python ./utils/convert_nnunet_to_bids.py --path-conversion-dict /PATH/TO/DICT --
 
 This will output a dataset and add a segmentation `mask_name` in the dataset derivatives.
 
-## Data preprocessing
+### nnUNet data preprocessing
 
 Before training the model, nnU-Net performs data preprocessing and checks the integrity of the dataset:
 
@@ -86,17 +116,19 @@ You can track the progress of the model with:
 nnUNet_results/DatasetDATASET-ID_TASK-NAME/nnUNetTrainer__nnUNetPlans__CONFIG/fold_FOLD/progress.png
 ~~~
 
-## Run inference
-
-Here are the alernatives method from the one given in [README.md](https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1/blob/main/README.md) to perform inference.
+## Running inference
 
-To run an inference and obtain a segmentation, there are multiple ways to do so.
+To run inference using our trained model, we recommend using the instructions in [README.md](../README.md). However, if you want to perform inference with your own model, there are multiple ways to do so.
 
 ### Method 1 - Using your previous training
 
 Format the image data to the nnU-Net file structure.
 Use a terminal command line:
 ~~~
+export nnUNet_raw="/path/to/nnUNet_raw"
+export nnUNet_preprocessed="/path/to/nnUNet_preprocessed"
+export nnUNet_results="/path/to/nnUNet_results"
+
 CUDA_VISIBLE_DEVICES=XXX nnUNetv2_predict -i /path/to/image/folder -o /path/to/predictions -d DATASET_ID -c CONFIG --save_probabilities -chk checkpoint_best.pth -f FOLD
 ~~~
 
@@ -112,3 +144,14 @@ CUDA_VISIBLE_DEVICES=XXX nnUNetv2_predict -i /path/to/image/folder -o /path/to/p
 ~~~
 
 You can now access the predictions in the folder `/path/to/predictions`.
+
+## Apply post-processing
+
+nnU-Net v2 comes with the possibility of performing post-processing on the segmentation images. This was not included in the inference instructions as it does not bring a notable change to the result. To run post-processing, run the following command:
+
+~~~
+CUDA_VISIBLE_DEVICES=XX nnUNetv2_apply_postprocessing -i /seg/folder -o /output/folder -pp_pkl_file /path/to/postprocessing.pkl -np 8 -plans_json /path/to/post-processing/plans.json
+~~~
+> [!NOTE]
+> The file `postprocessing.pkl` is stored in `Dataset500_zurich_mouse/nnUNetTrainer__nnUNetPlans__3d_fullres/crossval_results_folds_0_1_2_3_4/postprocessing.pkl`.<br>
+> The file `plans.json` is stored in `Dataset500_zurich_mouse/nnUNetTrainer__nnUNetPlans__3d_fullres/crossval_results_folds_0_1_2_3_4/plans.json`.
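The three nnU-Net environment variables introduced in the inference hunk must point at existing folders before `nnUNetv2_predict` or `nnUNetv2_apply_postprocessing` will run. A minimal setup sketch (the base path under the system temp directory is a placeholder; point it at your own storage):

```shell
# nnU-Net v2 locates raw data, preprocessed data, and results through these
# three environment variables. The base path here is a placeholder.
base="${TMPDIR:-/tmp}/nnunet"
export nnUNet_raw="$base/nnUNet_raw"
export nnUNet_preprocessed="$base/nnUNet_preprocessed"
export nnUNet_results="$base/nnUNet_results"

# Create the folders so nnU-Net can find them.
mkdir -p "$nnUNet_raw" "$nnUNet_preprocessed" "$nnUNet_results"
```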
7 files renamed without changes.

0 commit comments
