Merge pull request #29 from BrainLesion/polishing
Polishing
neuronflow authored Feb 3, 2024
2 parents 02e74bc + af3c1b3 commit 6a81e29
Showing 12 changed files with 82 additions and 93 deletions.
67 changes: 20 additions & 47 deletions README.md
@@ -1,16 +1,14 @@
[![PyPI version panoptica](https://badge.fury.io/py/brainles-aurora.svg)](https://pypi.python.org/pypi/brainles-aurora/)
[![PyPI version AURORA](https://badge.fury.io/py/brainles-aurora.svg)](https://pypi.python.org/pypi/brainles-aurora/)
[![Documentation Status](https://readthedocs.org/projects/brainles-aurora/badge/?version=latest)](http://brainles-aurora.readthedocs.io/?badge=latest)
[![tests](https://github.com/BrainLesion/AURORA/actions/workflows/tests.yml/badge.svg)](https://github.com/BrainLesion/AURORA/actions/workflows/tests.yml)

# AURORA

Deep learning models for the manuscript

[Identifying core MRI sequences for reliable automatic brain metastasis segmentation](https://www.medrxiv.org/content/10.1101/2023.05.02.23289342v1)
Deep learning models for brain cancer metastasis segmentation based on the manuscripts:
* [Identifying core MRI sequences for reliable automatic brain metastasis segmentation](https://www.medrxiv.org/content/10.1101/2023.05.02.23289342v1)
* [Development and external validation of an MRI-based neural network for brain metastasis segmentation in the AURORA multicenter study](https://www.sciencedirect.com/science/article/pii/S0167814022045625)

## Installation

With a Python 3.10+ environment you can install directly from [pypi.org](https://pypi.org/project/brainles-aurora/):
With a Python 3.10+ environment, you can install directly from [pypi.org](https://pypi.org/project/brainles-aurora/):

```
pip install brainles-aurora
@@ -20,52 +18,31 @@ pip install brainles-aurora

- CUDA 11.4+ (https://developer.nvidia.com/cuda-toolkit)
- Python 3.10+
- GPU with at least 8GB of VRAM

further details in requirements.txt
- GPU with CUDA support and at least 8GB of VRAM
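
The requirements above can be verified from Python before installing; a minimal check, offered as a sketch and assuming PyTorch is available in the environment (the test suite in this commit already imports `torch`):

```
# Environment-check sketch; assumes PyTorch is installed.
import sys

import torch

assert sys.version_info >= (3, 10), "AURORA requires Python 3.10+"

if torch.cuda.is_available():
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"CUDA build {torch.version.cuda}, GPU VRAM {vram_gb:.1f} GB (8 GB or more recommended)")
else:
    print("No CUDA-capable GPU detected; GPU inference will not be available.")
```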

## Usage

**Tutorial.ipynb**: Step-by-step example of project setup and segmentation of supplied example data ([BraTS-METS](https://doi.org/10.48550/arXiv.2306.00838))

**run_inference_cli.py**: Simple command-line implementation:

This command will list all available options:

python3 /run_inference_cli.py --help

**run_inference.py**: Example script for single inference. More customization possible.

**_Input: t1_file, t1c_file, t2_file, fla_file_**

All 4 input files must be nifti (nii.gz) files containing 3D MRIs. Please ensure that all input images are correctly preprocessed (skullstripped, co-registered, registered on SRI-24). You can use [BraTS Toolkit](https://github.com/neuronflow/BraTS-Toolkit) for preprocessing (please follow the instructions [here](https://github.com/neuronflow/BraTS-Toolkit)).

**_Output: segmentation_file_**

Add path to your desired output folder.

**_optional Output: whole_network_outputs_file, enhancing_network_outputs_file_**
BrainLes features Jupyter Notebook [tutorials](https://github.com/BrainLesion/tutorials/tree/main/AURORA) with usage instructions.
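
For orientation, a minimal inference sketch built only from the names visible in this diff (`AuroraInfererConfig`, `AuroraInferer`, and `infer` with `t1`/`t1c`/`t2`/`fla` keyword arguments); the import path and the output argument are assumptions, so treat the linked tutorials as the authoritative reference:

```
# Sketch only: class and method names are taken from this diff; the import
# path and the segmentation_file argument are assumptions, not confirmed API.
from brainles_aurora.inferer import AuroraInferer, AuroraInfererConfig

config = AuroraInfererConfig()
inferer = AuroraInferer(config=config)

inferer.infer(
    t1="t1.nii.gz",
    t1c="t1c.nii.gz",
    t2="t2.nii.gz",
    fla="fla.nii.gz",
    segmentation_file="segmentation.nii.gz",  # assumed output argument
)
```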

## Citation
Please support our development by citing the following manuscripts:

when using the software please cite TODO
[Identifying core MRI sequences for reliable automatic brain metastasis segmentation](https://www.sciencedirect.com/science/article/pii/S016781402389795X)

```
@article {Buchner2023.05.02.23289342,
author = {Josef A Buchner and Jan C Peeken and Lucas Etzel and Ivan Ezhov and Michael Mayinger and Sebastian M Christ and Thomas B Brunner and Andrea Wittig and Bj{\"o}rn Menze and Claus Zimmer and Bernhard Meyer and Matthias Guckenberger and Nicolaus Andratschke and Rami A El Shafie and J{\"u}rgen Debus and Susanne Rogers and Oliver Riesterer and Katrin Schulze and Horst J Feldmann and Oliver Blanck and Constantinos Zamboglou and Konstantinos Ferentinos and Angelika Bilger and Anca L Grosu and Robert Wolff and Jan S Kirschke and Kerstin A Eitz and Stephanie E Combs and Denise Bernhardt and Daniel R{\"u}ckert and Marie Piraud and Benedikt Wiestler and Florian Kofler},
title = {Identifying core MRI sequences for reliable automatic brain metastasis segmentation},
elocation-id = {2023.05.02.23289342},
year = {2023},
doi = {10.1101/2023.05.02.23289342},
publisher = {Cold Spring Harbor Laboratory Press},
abstract = {Background: Many automatic approaches to brain tumor segmentation employ multiple magnetic resonance imaging (MRI) sequences. The goal of this project was to compare different combinations of input sequences to determine which MRI sequences are needed for effective automated brain metastasis (BM) segmentation. Methods: We analyzed preoperative imaging (T1-weighted sequence without and with contrast- enhancement (T1/T1-CE), T2-weighted sequence (T2), and T2 fluid-attenuated inversion recovery (T2-FLAIR) sequence) from 333 patients with BMs from six centers. A baseline 3D U-Net with all four sequences and six U-Nets with plausible sequence combinations (T1-CE, T1, T2-FLAIR, T1-CE+T2-FLAIR, T1-CE+T1+T2-FLAIR, T1- CE+T1) were trained on 239 patients from two centers and subsequently tested on an external cohort of 94 patients from four centers. Results: The model based on T1-CE alone achieved the best segmentation performance for BM segmentation with a median Dice similarity coefficient (DSC) of 0.96. Models trained without T1-CE performed worse (T1-only: DSC = 0.70 and T2- FLAIR-only: DSC = 0.72). For edema segmentation, models that included both T1-CE and T2-FLAIR performed best (DSC = 0.93), while the remaining four models without simultaneous inclusion of these both sequences reached a median DSC of 0.81-0.89. Conclusions: A T1-CE-only protocol suffices for the segmentation of BMs. The combination of T1-CE and T2-FLAIR is important for edema segmentation. Missing either T1-CE or T2-FLAIR decreases performance. These findings may improve imaging routines by omitting unnecessary sequences, thus allowing for faster procedures in daily clinical practice while enabling optimal neural network-based target definitions.Competing Interest StatementThe authors have declared no competing interest.Funding StatementThis work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation - PE 3303/1-1 (JCP), WI 4936/4-1 (BW)).Author DeclarationsI confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.YesThe details of the IRB/oversight body that provided approval or exemption for the research described are given below:The ethics committee of Technical University of Munich gave ethical approval for this work (119/19 S-SR; 466/16 S)I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.YesI understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).Yes I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.YesThe datasets generated and analyzed during the current study are not available.},
URL = {https://www.medrxiv.org/content/early/2023/05/02/2023.05.02.23289342},
eprint = {https://www.medrxiv.org/content/early/2023/05/02/2023.05.02.23289342.full.pdf},
journal = {medRxiv}
@article{buchner2023identifying,
title={Identifying core MRI sequences for reliable automatic brain metastasis segmentation},
author={Buchner, Josef A and Peeken, Jan C and Etzel, Lucas and Ezhov, Ivan and Mayinger, Michael and Christ, Sebastian M and Brunner, Thomas B and Wittig, Andrea and Menze, Bjoern H and Zimmer, Claus and others},
journal={Radiotherapy and Oncology},
volume={188},
pages={109901},
year={2023},
publisher={Elsevier}
}
```

also consider citing the original AURORA manuscript: [Development and external validation of an MRI-based neural network for brain metastasis segmentation in the AURORA multicenter study](https://www.sciencedirect.com/science/article/pii/S0167814022045625)
also consider citing the original AURORA manuscript, especially when using the `vanilla` model:

[Development and external validation of an MRI-based neural network for brain metastasis segmentation in the AURORA multicenter study](https://www.sciencedirect.com/science/article/pii/S0167814022045625)

```
@article{buchner2022development,
@@ -77,10 +54,6 @@ also consider citing the original AURORA manuscript: [Development and external v
}
```

## Four Sequences

If you have all four MR sequences (T1, T1c, T2, FLAIR) consider using:
https://github.com/neuronflow/AURORA

## Licensing

5 changes: 3 additions & 2 deletions brainles_aurora/inferer/inferer.py
@@ -35,11 +35,12 @@
BaseConfig,
)
from brainles_aurora.utils import (
turbo_path,
download_model_weights,
remove_path_suffixes,
)

from auxiliary.turbopath import turbopath

logger = logging.getLogger(__name__)


@@ -185,7 +186,7 @@ def _validate_image(
f"File {data} must be a nifti file with extension .nii or .nii.gz"
)
self.input_mode = DataMode.NIFTI_FILE
return turbo_path(data)
return Path(turbopath(data))

images = [
_validate_image(img)
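The hunk above replaces the in-house `turbo_path` helper (deleted from `utils.py` below) with `turbopath` from the `auxiliary` package. A small sketch of the swap, assuming `turbopath` performs the same absolute-path normalization as the deleted helper (an assumption based on this diff, not verified behavior):

```
# Sketch comparing the deleted helper with the new auxiliary-based call.
import os
from pathlib import Path

from auxiliary.turbopath import turbopath


def legacy_turbo_path(path: str | Path) -> Path:
    """Logic of the helper removed in this commit: absolute, normalized path."""
    return Path(os.path.normpath(os.path.abspath(path)))


p = "example/data/../data/BraTS-MET-00110-000-t1n.nii.gz"
print(legacy_turbo_path(p))
print(Path(turbopath(p)))  # expected to match under the assumption above
```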
2 changes: 1 addition & 1 deletion brainles_aurora/utils/__init__.py
@@ -1,2 +1,2 @@
from .utils import turbo_path, remove_path_suffixes
from .utils import remove_path_suffixes
from .download import download_model_weights
21 changes: 0 additions & 21 deletions brainles_aurora/utils/utils.py
@@ -1,25 +1,4 @@
from pathlib import Path
import os
from typing import IO
import sys


def turbo_path(path: str | Path) -> Path:
"""Make path absolute and normed
Args:
path (str | Path): input path
Returns:
Path: absolute and normed path
"""
return Path(
os.path.normpath(
os.path.abspath(
path,
)
)
)


def remove_path_suffixes(path: Path | str) -> Path:
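The body of `remove_path_suffixes` is collapsed in this view; a helper with this signature typically strips stacked extensions such as `.nii.gz`. The following is a sketch of that behavior, not the repository's actual implementation:

```
# Sketch of typical behavior for this signature; the real body is collapsed above.
from pathlib import Path


def remove_path_suffixes(path: Path | str) -> Path:
    """Strip all suffixes, e.g. 'scan.nii.gz' -> 'scan'."""
    p = Path(path)
    while p.suffix:
        p = p.with_suffix("")
    return p


print(remove_path_suffixes("BraTS-MET-00110-000-t1n.nii.gz"))  # BraTS-MET-00110-000-t1n
```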
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
9 changes: 4 additions & 5 deletions segmentation_test.py → example/segmentation_example.py
@@ -2,18 +2,17 @@
AuroraInferer,
AuroraGPUInferer,
AuroraInfererConfig,
DataMode,
)
import os
from path import Path
import nibabel as nib

BASE_PATH = Path(os.path.abspath(__file__)).parent

t1 = BASE_PATH / "example_data/BraTS-MET-00110-000-t1n.nii.gz"
t1c = BASE_PATH / "example_data/BraTS-MET-00110-000-t1c.nii.gz"
t2 = BASE_PATH / "example_data/BraTS-MET-00110-000-t2w.nii.gz"
fla = BASE_PATH / "example_data/BraTS-MET-00110-000-t2f.nii.gz"
t1 = BASE_PATH / "example/data/BraTS-MET-00110-000-t1n.nii.gz"
t1c = BASE_PATH / "example/data/BraTS-MET-00110-000-t1c.nii.gz"
t2 = BASE_PATH / "example/data/BraTS-MET-00110-000-t2w.nii.gz"
fla = BASE_PATH / "example/data/BraTS-MET-00110-000-t2f.nii.gz"


def load_np_from_nifti(path):
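The body of `load_np_from_nifti` is collapsed here as well; a typical nibabel-based implementation, offered as a sketch rather than the repository's actual code:

```
# Sketch of a typical implementation; the real function body is collapsed above.
import nibabel as nib
import numpy as np


def load_np_from_nifti(path: str) -> np.ndarray:
    """Load a NIfTI file and return its voxel data as a float32 array."""
    return np.asarray(nib.load(path).get_fdata(), dtype=np.float32)
```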
3 changes: 2 additions & 1 deletion pyproject.toml
@@ -48,6 +48,7 @@ numpy = ">=1.23.0"
PyGithub = "^1.57"
tqdm = "^4.64.1"
path = "^16.2.0"
auxiliary ="^0.0.40"

[tool.poetry.dev-dependencies]
pytest = "^6.2"
@@ -59,4 +60,4 @@ optional = true
Sphinx = ">=7.0.0"
sphinx-copybutton = ">=0.5.2"
sphinx-rtd-theme = ">=1.3.0"
myst-parser = ">=2.0.0"
myst-parser = ">=2.0.0"
67 changes: 52 additions & 15 deletions tests/test_inferer.py
@@ -1,6 +1,5 @@
from pathlib import Path
from unittest.mock import patch
import logging

import nibabel as nib
import pytest
@@ -17,19 +16,19 @@
class TestAuroraInferer:
@pytest.fixture
def t1_path(self):
return "example_data/BraTS-MET-00110-000-t1n.nii.gz"
return "example/data/BraTS-MET-00110-000-t1n.nii.gz"

@pytest.fixture
def t1c_path(self):
return "example_data/BraTS-MET-00110-000-t1c.nii.gz"
return "example/data/BraTS-MET-00110-000-t1c.nii.gz"

@pytest.fixture
def t2_path(self):
return "example_data/BraTS-MET-00110-000-t2w.nii.gz"
return "example/data/BraTS-MET-00110-000-t2w.nii.gz"

@pytest.fixture
def fla_path(self):
return "example_data/BraTS-MET-00110-000-t2f.nii.gz"
return "example/data/BraTS-MET-00110-000-t2f.nii.gz"

@pytest.fixture
def mock_config(self):
@@ -46,44 +45,79 @@ def _load_np_from_nifti(path):

return _load_np_from_nifti

def test_validate_images(self, t1_path, t1c_path, t2_path, fla_path, mock_inferer):
def test_validate_images(
self,
t1_path,
t1c_path,
t2_path,
fla_path,
mock_inferer,
):
images = mock_inferer._validate_images(
t1=t1_path, t1c=t1c_path, t2=t2_path, fla=fla_path
t1=t1_path,
t1c=t1c_path,
t2=t2_path,
fla=fla_path,
)
assert len(images) == 4
assert all(isinstance(img, Path) for img in images)

def test_validate_images_file_not_found(self, mock_inferer):
def test_validate_images_file_not_found(
self,
mock_inferer,
):
with pytest.raises(FileNotFoundError):
_ = mock_inferer._validate_images(t1="invalid_path.nii.gz")

def test_validate_images_different_types(
self, mock_inferer, t1_path, t1c_path, load_np_from_nifti
self,
mock_inferer,
t1_path,
t1c_path,
load_np_from_nifti,
):
with pytest.raises(AssertionError):
_ = mock_inferer._validate_images(
t1=t1_path, t1c=load_np_from_nifti(t1c_path)
)

def test_validate_images_no_inputs(self, mock_inferer):
def test_validate_images_no_inputs(
self,
mock_inferer,
):
with pytest.raises(AssertionError):
_ = mock_inferer._validate_images()

def test_determine_inference_mode(self, mock_inferer, t1_path):
def test_determine_inference_mode(
self,
mock_inferer,
t1_path,
):
validated_images = mock_inferer._validate_images(t1=t1_path)
mode = mock_inferer._determine_inference_mode(images=validated_images)
assert isinstance(mode, InferenceMode)

def test_determine_inference_mode_not_implemented(self, mock_inferer, t2_path):
def test_determine_inference_mode_not_implemented(
self,
mock_inferer,
t2_path,
):
images = mock_inferer._validate_images(t2=t2_path)
with pytest.raises(NotImplementedError):
mode = mock_inferer._determine_inference_mode(images=images)

def test_infer(self, mock_inferer, t1_path):
def test_infer(
self,
mock_inferer,
t1_path,
):
with patch.object(mock_inferer, "_sliding_window_inference", return_value=None):
mock_inferer.infer(t1=t1_path)

def test_configure_device(self, mock_config):
def test_configure_device(
self,
mock_config,
):
inferer = AuroraInferer(config=mock_config)
device = inferer._configure_device()
assert device == torch.device("cpu")
@@ -92,7 +126,10 @@ def test_configure_device(self, mock_config):
not torch.cuda.is_available(),
reason="Skipping GPU device test since cuda is not available",
)
def test_configure_device_gpu(self, mock_config):
def test_configure_device_gpu(
self,
mock_config,
):
inferer = AuroraGPUInferer(config=mock_config)
device = inferer._configure_device()
assert device == torch.device("cuda")
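
The GPU test above uses `pytest.mark.skipif` so machines without CUDA skip it instead of failing; a self-contained sketch of the same pattern (not repository code):

```
# Self-contained sketch of the skipif pattern used in the test above.
import pytest
import torch


@pytest.mark.skipif(
    not torch.cuda.is_available(),
    reason="Skipping GPU test since CUDA is not available",
)
def test_tensor_lands_on_gpu():
    x = torch.ones(2, 2, device="cuda")
    assert x.device.type == "cuda"
```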
1 change: 0 additions & 1 deletion tests/test_utils.py
@@ -1,4 +1,3 @@
import pytest
from pathlib import Path
from brainles_aurora.utils import remove_path_suffixes

