Semantic image segmentation assigns a class label to each pixel of an image. In medicine, it is widely used for medical image diagnostics, where machines can augment the analysis performed by radiologists with efficiency and scalability. One active area of research is brain tumor segmentation, where deep neural networks are trained to predict, for each pixel, the tumorous tissue class labeled by neuroradiologists. In this work, we used a data set comprising clinically acquired pre-operative multimodal MRI scans of glioblastoma and lower-grade glioma, two common types of brain tumors. The data set was preprocessed and labeled by neuroradiologists for the BraTS 2020 competition. We trained two state-of-the-art deep neural network architectures for image segmentation, U-Net and FC-DenseNets, and evaluated their performance.
We built CNN-based models for pixel-by-pixel segmentation of brain tumor subregions [0: Not tumor, 1: Necrotic/Core, 2: Edema, 3: Enhancing core]. Two models, with U-Net and FC-DenseNets architectures, were trained, and their performance was evaluated using various metrics (see below). The MRI data were collected and preprocessed by the BraTS community for the 2020 BraTS challenge.
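As one illustration of per-subregion evaluation, the sketch below computes the Dice coefficient, a metric commonly used in the BraTS challenge, for each tumor label on a pair of toy label maps. This is a minimal NumPy example, not the project's actual evaluation code; the toy arrays and the `dice_score` helper are made up for demonstration.

```python
import numpy as np

def dice_score(pred, target, label):
    """Dice coefficient for one tumor subregion label.

    Dice = 2 * |P ∩ T| / (|P| + |T|), where P and T are the sets of
    pixels predicted / labeled as `label`.
    """
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # label absent in both maps: count as perfect agreement
    return 2.0 * np.logical_and(p, t).sum() / denom

# Toy 4x4 label maps using the subregion encoding
# [0: Not tumor, 1: Necrotic/Core, 2: Edema, 3: Enhancing core]
pred   = np.array([[0, 0, 2, 2],
                   [0, 1, 1, 2],
                   [0, 1, 3, 3],
                   [0, 0, 0, 3]])
target = np.array([[0, 0, 2, 2],
                   [0, 1, 2, 2],
                   [0, 1, 3, 3],
                   [0, 0, 3, 3]])

for lbl, name in [(1, "Necrotic/Core"), (2, "Edema"), (3, "Enhancing core")]:
    print(f"{name}: Dice = {dice_score(pred, target, lbl):.3f}")
```

In practice such per-label scores would be averaged over all scans in the validation set; the same boolean-mask pattern extends directly to 3D MRI volumes.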
📄 Please find the final report of this project here: Final report
💻 Please find the final presentation of this project here: Final presentation