`README.md` (21 additions, 47 deletions)

@@ -13,7 +13,7 @@ Publication linked to this model: see [CITATION.cff](./CITATION.cff)
## Project description
-In this project, we trained a 3D nnU-Net for spinal cord white and grey matter segmentation. The dataset contains 22 mice, with a different number of chunks per mouse, for a total of 72 3D MRI images. Each image is T2-weighted, has a matrix size of 200x200x500 voxels, and an isotropic resolution of 0.05x0.05x0.05 mm.
+In this project, we trained a 3D nnU-Net for spinal cord white and grey matter segmentation. The dataset contains 22 mice, with a different number of chunks per mouse, for a total of 72 3D MRI images. Each image is T1-weighted, has a matrix size of 200x200x500 voxels, and an isotropic resolution of 0.05x0.05x0.05 mm.
<details>
<summary>Expand this for more information on how we trained the model</summary>
@@ -29,63 +29,37 @@ For the packaging we decided to keep only fold 4 as it has the best dice score a
</details>
-For information on how to retrain the same model, refer to [README_training_model.md](https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1/blob/main/utils/README_training_model.md).
+For information on how to retrain the same model, refer to [README.md](./utils/README.md).
-If you wish to try the model on your own data, follow the instructions at [Installation](#installation) and [Perform predictions](#perform-predictions).
+## How to use the model
-## Installation
+This is the recommended method to use our model.
-This section explains how to install and use the model on new images.
-We recommend using a virtual environment with Python 3.9 to use nnUNet:
-~~~
-conda create -n venv_nnunet python=3.9
-~~~
+Once the dependencies are installed, download the latest model:
-We activate the environment:
-~~~
-conda activate venv_nnunet
-~~~
+```bash
+sct_deepseg -install-task seg_mouse_gm_wm_t1w
+```
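To double-check that the task was installed, the tool's built-in help lists the available tasks. This is just a quick sanity check and assumes `sct_deepseg` (from the Spinal Cord Toolbox) is on your `PATH`:

```bash
# List sct_deepseg's options and available tasks; the newly installed
# seg_mouse_gm_wm_t1w task should show up among them.
sct_deepseg -h
```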
-Then install the required libraries:
-~~~
-pip install -r requirements.txt
-~~~
+### Getting the WM and GM segmentation
-## Perform predictions
+To segment a single image, run the following command:
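The exact command does not appear in the lines shown here, so the following is only a rough sketch: the input and output file names are placeholders, and only the task name comes from the installation step above (check `sct_deepseg -h` for the options available in your SCT version).

```bash
# Hypothetical example: segment one T1-weighted mouse image with the task
# installed above. File names are placeholders, not files from this repository.
sct_deepseg -i mouse_t1.nii.gz -task seg_mouse_gm_wm_t1w -o mouse_t1_gmwm_seg.nii.gz
```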
-To run an inference and obtain a segmentation, we advise using the following method (refer to [utils/README_training_model.md](https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1/blob/main/utils/README_training_model.md) for alternatives).
-> The `nnUNetTrainer__nnUNetPlans__3d_fullres` folder is inside the `Dataset500_zurich_mouse` folder.<br>
-> To use GPU, add the flag `--use-gpu` to the previous command.<br>
-> To use mirroring (test-time augmentation), add the flag `--use-mirroring`. NOTE: inference takes a long time when this is enabled. Default: False.<br>
-> To speed up inference, add the flag `--step-size XX`, with XX being a value above 0.5 and below 1 (0.9 is advised).<br>
-> If inference fails, refer to [issue 44](https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1/issues/44) for image pre-processing.
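Put together, an invocation of that inference script would look roughly like the sketch below. The script name `run_inference.py` and the `--path-image` argument are placeholders for illustration (the actual entry point is not shown in this excerpt); only `--use-gpu`, `--use-mirroring`, and `--step-size` come from the notes above.

```bash
# Hypothetical invocation combining the flags documented above.
# "run_inference.py" and "--path-image" are placeholder names, not the
# repository's actual script or arguments.
python run_inference.py --path-image mouse_t1.nii.gz --use-gpu --step-size 0.9
```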
-## Apply post-processing
-nnU-Net v2 comes with the possibility of performing post-processing on the segmentation images. This was not included in the run-inference script, as it does not bring a notable change to the result. To run post-processing, run the following script.
-> The file `postprocessing.pkl` is stored in `Dataset500_zurich_mouse/nnUNetTrainer__nnUNetPlans__3d_fullres/crossval_results_folds_0_1_2_3_4/postprocessing.pkl`.<br>
-> The file `plans.json` is stored in `Dataset500_zurich_mouse/nnUNetTrainer__nnUNetPlans__3d_fullres/crossval_results_folds_0_1_2_3_4/plans.json`.

`training_scripts/README.md` (49 additions, 6 deletions)

@@ -1,6 +1,36 @@
# Training of a nnUNet model for SC WM and GM segmentation
-First, you need to follow the installation instructions from the [README.md](https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1/blob/main/README.md).
+Here, we detail all the steps necessary to train and use an nnUNet model for the segmentation of mouse SC WM and GM.
+The steps cover how to:
+- set up the environment
+- preprocess the data
+- train the model
+- perform inference
+## Installation
+This section explains how to install and use the model on new images.
+Here are alternative methods to the one given in the [README.md](https://github.com/ivadomed/model_seg_mouse-sc_wm-gm_t1/blob/main/README.md) for performing inference.
+## Running inference
-To run an inference and obtain a segmentation, there are multiple ways to do so.
+To run inference using our trained model, we recommend following the instructions in [README.md](../README.md). However, if you want to perform inference with your own model, there are multiple ways to do so.
### Method 1 - Using your previous training
Format the image data to the nnU-Net file structure.
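As a minimal sketch of what that looks like for inference with nnU-Net v2 (the paths and case name below are placeholders; only the `_0000` channel suffix, the `imagesTs` folder, and the `Dataset500_zurich_mouse` name follow the conventions already used in this project):

```bash
# Hypothetical example of placing one test image in the nnU-Net v2 raw-data layout.
# nnU-Net expects file names of the form <case>_<channel>.nii.gz, e.g. _0000 for the
# single T1w channel used here.
export nnUNet_raw=/path/to/nnUNet_raw
mkdir -p "$nnUNet_raw/Dataset500_zurich_mouse/imagesTs"
cp mouse_t1.nii.gz "$nnUNet_raw/Dataset500_zurich_mouse/imagesTs/mouse22_chunk1_0000.nii.gz"
```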
You can now access the predictions in the folder `/path/to/predictions`.
+## Apply post-processing
+nnU-Net v2 comes with the possibility of performing post-processing on the segmentation images. This was not included in the run-inference script, as it does not bring a notable change to the result. To run post-processing, run the following script.
+> The file `postprocessing.pkl` is stored in `Dataset500_zurich_mouse/nnUNetTrainer__nnUNetPlans__3d_fullres/crossval_results_folds_0_1_2_3_4/postprocessing.pkl`.<br>
+> The file `plans.json` is stored in `Dataset500_zurich_mouse/nnUNetTrainer__nnUNetPlans__3d_fullres/crossval_results_folds_0_1_2_3_4/plans.json`.
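The post-processing script itself is not reproduced here. With nnU-Net v2 installed, such a call typically goes through `nnUNetv2_apply_postprocessing`, pointing it at the two files above; the input and output folders below are placeholders, and the exact argument names should be checked against `nnUNetv2_apply_postprocessing -h` for your nnU-Net version.

```bash
# Hedged sketch: apply the cross-validation post-processing to a folder of predictions.
# Folder paths are placeholders; the .pkl and .json paths are the ones listed above.
nnUNetv2_apply_postprocessing \
  -i /path/to/predictions \
  -o /path/to/predictions_postprocessed \
  -pp_pkl_file Dataset500_zurich_mouse/nnUNetTrainer__nnUNetPlans__3d_fullres/crossval_results_folds_0_1_2_3_4/postprocessing.pkl \
  -plans_json Dataset500_zurich_mouse/nnUNetTrainer__nnUNetPlans__3d_fullres/crossval_results_folds_0_1_2_3_4/plans.json
```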