Auto Segmentation error in 'brats_segmentation_v0.3.3' model #239

Open
ulphypro opened this issue Nov 22, 2022 · 25 comments

@ulphypro

Describe the bug
[2022-11-22 10:24:37,281] [19036] [MainThread] [ERROR] (uvicorn.error:119) - Traceback (most recent call last):
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 635, in lifespan
async with self.lifespan_context(app):
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 530, in __aenter__
await self.router.startup()
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 612, in startup
await handler()
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\app.py", line 106, in startup_event
instance = app_instance()
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\interfaces\utils\app.py", line 51, in app_instance
app = c(app_dir=app_dir, studies=studies, conf=conf)
File "C:\Users\AA\apps\monaibundle\main.py", line 90, in __init__
super().__init__(
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\interfaces\app.py", line 96, in __init__
self.trainers = self.init_trainers() if settings.MONAI_LABEL_TASKS_TRAIN else {}
File "C:\Users\AA\apps\monaibundle\main.py", line 116, in init_trainers
t = BundleTrainTask(b, self.conf)
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\tasks\train\bundle.py", line 83, in __init__
self.bundle_config.read_config(self.bundle_config_path)
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\bundle\config_parser.py", line 300, in read_config
content.update(self.load_config_files(f, **kwargs))
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\bundle\config_parser.py", line 403, in load_config_files
for k, v in (cls.load_config_file(i, **kwargs)).items():
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\bundle\config_parser.py", line 382, in load_config_file
return json.load(f, **kwargs)
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 293, in load
return loads(fp.read(),
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 63 column 43 (char 1947)

[2022-11-22 10:24:37,282] [19036] [MainThread] [ERROR] (uvicorn.error:56) - Application startup failed. Exiting.

To Reproduce
Steps to reproduce the behavior:

  1. Changed the 'in_channels' value from 4 to 1 and added "ensure_channel_first": true in inference.json

    • edited inference.json code
      "target": "SegResNet",
      "blocks_down": [
      1,
      2,
      2,
      4
      ],
      "blocks_up": [
      1,
      1,
      1
      ],
      "init_filters": 16,
      "in_channels": 1,
      "out_channels": 3,
      "dropout_prob": 0.2
      },
      "network": "$@network_def.to(@device)",
      "preprocessing": {
      "target": "Compose",
      "transforms": [
      {
      "target": "LoadImaged",
      "keys": "image",
      "ensure_channel_first": true
      },
  2. Changed the 'in_channels' value from 4 to 1 and added "ensure_channel_first": true in train.json

    • edited train.json code
      "target": "SegResNet",
      "blocks_down": [
      1,
      2,
      2,
      4
      ],
      "blocks_up": [
      1,
      1,
      1
      ],
      "init_filters": 16,
      "in_channels": 1,
      "out_channels": 3,
      "dropout_prob": 0.2
      },
      .
      .
      .
      "train": {
      "preprocessing_transforms": [
      {
      "target": "LoadImaged",
      "keys": [
      "image",
      "label",
      "ensure_channel_first": true
      ]
      },
  3. Changed the 'num_channels' value (under "inputs") from 4 to 1 in metadata.json

    • edited metadata.json code
      "intended_use": "This is an example, not to be used for diagnostic purposes",
      "references": [
      "Myronenko, Andriy. '3D MRI brain tumor segmentation using autoencoder regularization.' International MICCAI Brainlesion Workshop. Springer, Cham, 2018. https://arxiv.org/abs/1810.11654"
      ],
      "network_data_format": {
      "inputs": {
      "image": {
      "type": "image",
      "format": "magnitude",
      "modality": "MR",
      "num_channels": 1,
      "spatial_shape": [
      "8n",
      "8
      n",
      "8*n"
      ],

Expected behavior
I expect a segmentation result when I press the 'Run' button in the Auto Segmentation option, after editing inference.json, train.json and metadata.json.
[screenshot]

[screenshot]

Screenshots
But I can't extract the brain tumor, as shown in the following figure.

[screenshot]

Environment

Printing MONAI config...

MONAI version: 1.0.1
Numpy version: 1.23.4
Pytorch version: 1.12.1+cu113
MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False
MONAI rev id: 8271a193229fe4437026185e218d5b06f7c8ce69
MONAI file: C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\__init__.py

Optional dependencies:
Pytorch Ignite version: 0.4.10
Nibabel version: 4.0.2
scikit-image version: 0.19.3
Pillow version: 9.3.0
Tensorboard version: 2.10.1
gdown version: 4.5.3
TorchVision version: 0.13.1+cu113
tqdm version: 4.64.1
lmdb version: 1.3.0
psutil version: 5.9.4
pandas version: 1.5.1
einops version: 0.6.0
transformers version: 4.24.0
mlflow version: 2.0.1
pynrrd version: 0.4.3

For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies

================================
Printing system config...

System: Windows
Win32 version: ('10', '10.0.22000', 'SP0', 'Multiprocessor Free')
Win32 edition: Core
Platform: Windows-10-10.0.22000-SP0
Processor: AMD64 Family 25 Model 80 Stepping 0, AuthenticAMD
Machine: AMD64
Python version: 3.9.13
Process name: python.exe
Command: ['C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\python.exe', '-c', 'import monai; monai.config.print_debug_info()']
Open files: [popenfile(path='C:\Program Files\WindowsApps\Microsoft.LanguageExperiencePackko-KR_22000.29.134.0_neutral__8wekyb3d8bbwe\Windows\System32\ko-KR\39386f74d1967f5c37a5b4171f81c8f3\kernel32.dll.mui', fd=-1), popenfile(path='C:\Program Files\WindowsApps\Microsoft.LanguageExperiencePackko-KR_22000.29.134.0_neutral__8wekyb3d8bbwe\Windows\System32\ko-KR\fe441ef3ed396a241e46f9f354057863\tzres.dll.mui', fd=-1), popenfile(path='C:\Program Files\WindowsApps\Microsoft.LanguageExperiencePackko-KR_22000.29.134.0_neutral__8wekyb3d8bbwe\Windows\System32\ko-KR\a7c1941e6709c10ab525083b61805316\KernelBase.dll.mui', fd=-1)]
Num physical CPUs: 8
Num logical CPUs: 16
Num usable CPUs: 16
CPU usage (%): [10.3, 9.4, 6.9, 3.8, 4.4, 0.6, 1.9, 2.2, 6.6, 15.2, 7.8, 3.2, 2.2, 0.9, 6.0, 41.1]
CPU freq. (MHz): 3301
Load avg. in last 1, 5, 15 mins (%): [0.0, 0.0, 0.0]
Disk usage (%): 60.3
Avg. sensor temp. (Celsius): UNKNOWN for given OS
Total physical memory (GB): 15.4
Available memory (GB): 7.1
Used memory (GB): 8.3

================================
Printing GPU config...

Num GPUs: 1
Has CUDA: True
CUDA version: 11.3
cuDNN enabled: True
cuDNN version: 8302
Current device: 0
Library compiled for CUDA architectures: ['sm_37', 'sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'compute_37']
GPU 0 Name: NVIDIA GeForce RTX 3070 Laptop GPU
GPU 0 Is integrated: False
GPU 0 Is multi GPU board: False
GPU 0 Multi processor count: 40
GPU 0 Total memory (GB): 8.0
GPU 0 CUDA capability (maj.min): 8.6

@yiheng-wang-nv
Collaborator

Hi, the error is json.decoder.JSONDecodeError: Expecting ',' delimiter: line 63 column 43 (char 1947). Could you please double-check whether line 63 is missing a comma?
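A quick way to pin down the reported position is to parse the edited config directly with Python's json module, outside the server (a minimal sketch; the path is hypothetical and should point at your edited bundle config):

    import json

    # Hypothetical path; adjust to wherever the edited bundle config lives on your machine.
    config_path = r"C:\Users\AA\apps\monaibundle\model\brats_mri_segmentation_v0.3.3\configs\inference.json"

    try:
        with open(config_path, encoding="utf-8") as f:
            json.load(f)
        print("inference.json is valid JSON")
    except json.JSONDecodeError as e:
        # e.lineno / e.colno refer to the json file being parsed, not to decoder.py
        print(f"Syntax error at line {e.lineno}, column {e.colno}: {e.msg}")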

@ulphypro
Author

Hello, @yiheng-wang-nv
Thank you for answering my question
[screenshot]

There is no problem at line 63 of decoder.py, which the error pointed to.

Is it correct to modify it as follows?
In the inference.json code
"in_channels": 1 ----> edit
"ensure_channel_first": true -------------> add

In the train.json code
"in_channels": 1 ----> edit
"ensure_channel_first": true -------------> add

In metadata.json
"num_channels": 1 ------> edit

@yiheng-wang-nv
Collaborator

Hi @ulphypro , I suspect "line 63" here refers to the json file that produced the error, not to decoder.py. I cannot find an error in the code you attached, but I may need to see the whole json files to double-check.

@ulphypro
Author

ulphypro commented Nov 22, 2022

Thank you. @yiheng-wang-nv

I edited 3 files in total: inference.json, train.json and metadata.json, included in 'brats_mri_segmentation_v0.3.3'.

There is no problem in the corresponding code.

I don't know why the error '[2022-11-22 15:37:19,967] [5628] [MainThread] [ERROR] (uvicorn.error:56) - Application startup failed. Exiting.' occurs.

The corresponding code is below: the edited inference.json, train.json and metadata.json.

@ulphypro
Author

Edited inference.json code included in brats_mri_segmentation_v0.3.3
{
"imports": [
"$import glob",
"$import os"
],
"bundle_root": "/workspace/brats_mri_segmentation",
"ckpt_dir": "$@bundle_root + '/models'",
"output_dir": "$@bundle_root + '/eval'",
"data_list_file_path": "$@bundle_root + '/configs/datalist.json'",
"data_file_base_dir": "/workspace/data/medical/brats2018challenge",
"test_datalist": "$monai.data.load_decathlon_datalist(@data_list_file_path, data_list_key='testing', base_dir=@data_file_base_dir)",
"device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
"amp": true,
"network_def": {
"target": "SegResNet",
"blocks_down": [
1,
2,
2,
4
],
"blocks_up": [
1,
1,
1
],
"init_filters": 16,
"in_channels": 1,
"out_channels": 3,
"dropout_prob": 0.2
},
"network": "$@network_def.to(@device)",
"preprocessing": {
"target": "Compose",
"transforms": [
{
"target": "LoadImaged",
"keys": "image",
"ensure_channel_first": true
},
{
"target": "NormalizeIntensityd",
"keys": "image",
"nonzero": true,
"channel_wise": true
}
]
},
"dataset": {
"target": "Dataset",
"data": "@test_datalist",
"transform": "@preprocessing"
},
"dataloader": {
"target": "DataLoader",
"dataset": "@dataset",
"batch_size": 1,
"shuffle": true,
"num_workers": 4
},
"inferer": {
"target": "SlidingWindowInferer",
"roi_size": [
240,
240,
160
],
"sw_batch_size": 1,
"overlap": 0.5
},
"postprocessing": {
"target": "Compose",
"transforms": [
{
"target": "Activationsd",
"keys": "pred",
"sigmoid": true
},
{
"target": "Invertd",
"keys": "pred",
"transform": "@preprocessing",
"orig_keys": "image",
"meta_keys": "pred_meta_dict",
"nearest_interp": false,
"to_tensor": true
},
{
"target": "AsDiscreted",
"keys": "pred",
"threshold": 0.5
},
{
"target": "SaveImaged",
"keys": "pred",
"meta_keys": "pred_meta_dict",
"output_dir": "@output_dir"
}
]
},
"handlers": [
{
"target": "CheckpointLoader",
"load_path": "$@bundle_root + '/models/model.pt'",
"load_dict": {
"model": "@network"
}
},
{
"target": "StatsHandler",
"iteration_log": false
}
],
"evaluator": {
"target": "SupervisedEvaluator",
"device": "@device",
"val_data_loader": "@dataloader",
"network": "@network",
"inferer": "@inferer",
"postprocessing": "@Postprocessing",
"val_handlers": "@handlers",
"amp": true
},
"evaluating": [
"$setattr(torch.backends.cudnn, 'benchmark', True)",
"$@evaluator.run()"
]
}
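As an aside, once the files are edited the bundle config can be checked outside MONAI Label with MONAI's ConfigParser, to confirm both that it parses and that the network builds with the intended number of input channels (a hedged sketch; it assumes the script is run from the bundle folder that contains configs/inference.json):

    from monai.bundle import ConfigParser

    parser = ConfigParser()
    parser.read_config("configs/inference.json")  # raises json.JSONDecodeError on a syntax error

    # Build the network described by "network_def" ("_target_": "SegResNet")
    network = parser.get_parsed_content("network_def")
    print(network.convInit.conv.weight.shape)  # (init_filters, in_channels, 3, 3, 3)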

@ulphypro
Author

Edited train.json code included in brats_mri_segmentation_v0.3.3

{
"imports": [
"$import glob",
"$import os"
],
"bundle_root": "/workspace/brats_mri_segmentation",
"ckpt_dir": "$@bundle_root + '/models'",
"output_dir": "$@bundle_root + '/eval'",
"data_list_file_path": "$@bundle_root + '/configs/datalist.json'",
"data_file_base_dir": "/workspace/data/medical/brats2018challenge",
"train_datalist": "$monai.data.load_decathlon_datalist(@data_list_file_path, data_list_key='training', base_dir=@data_file_base_dir)",
"val_datalist": "$monai.data.load_decathlon_datalist(@data_list_file_path, data_list_key='validation', base_dir=@data_file_base_dir)",
"device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
"epochs": 300,
"val_interval": 1,
"learning_rate": 0.0001,
"amp": true,
"network_def": {
"target": "SegResNet",
"blocks_down": [
1,
2,
2,
4
],
"blocks_up": [
1,
1,
1
],
"init_filters": 16,
"in_channels": 1,
"out_channels": 3,
"dropout_prob": 0.2
},
"network": "$@network_def.to(@device)",
"loss": {
"target": "DiceLoss",
"smooth_nr": 0,
"smooth_dr": 1e-05,
"squared_pred": true,
"to_onehot_y": false,
"sigmoid": true
},
"optimizer": {
"target": "torch.optim.Adam",
"params": "$@network.parameters()",
"lr": "@learning_rate",
"weight_decay": 1e-05
},
"lr_scheduler": {
"target": "torch.optim.lr_scheduler.CosineAnnealingLR",
"optimizer": "@optimizer",
"T_max": "@epochs"
},
"train": {
"preprocessing_transforms": [
{
"target": "LoadImaged",
"keys": [
"image",
"label",
"ensure_channel_first": true
]
},
{
"target": "ConvertToMultiChannelBasedOnBratsClassesd",
"keys": "label"
},
{
"target": "NormalizeIntensityd",
"keys": "image",
"nonzero": true,
"channel_wise": true
}
],
"random_transforms": [
{
"target": "RandSpatialCropd",
"keys": [
"image",
"label"
],
"roi_size": [
224,
224,
144
],
"random_size": false
},
{
"target": "RandFlipd",
"keys": [
"image",
"label"
],
"prob": 0.5,
"spatial_axis": 0
},
{
"target": "RandFlipd",
"keys": [
"image",
"label"
],
"prob": 0.5,
"spatial_axis": 1
},
{
"target": "RandFlipd",
"keys": [
"image",
"label"
],
"prob": 0.5,
"spatial_axis": 2
},
{
"target": "RandScaleIntensityd",
"keys": "image",
"factors": 0.1,
"prob": 1.0
},
{
"target": "RandShiftIntensityd",
"keys": "image",
"offsets": 0.1,
"prob": 1.0
}
],
"preprocessing": {
"target": "Compose",
"transforms": "$@train#preprocessing_transforms + @train#random_transforms"
},
"dataset": {
"target": "Dataset",
"data": "@train_datalist",
"transform": "@train#preprocessing"
},
"dataloader": {
"target": "DataLoader",
"dataset": "@train#dataset",
"batch_size": 1,
"shuffle": true,
"num_workers": 4
},
"inferer": {
"target": "SimpleInferer"
},
"postprocessing": {
"target": "Compose",
"transforms": [
{
"target": "Activationsd",
"keys": "pred",
"sigmoid": true
},
{
"target": "AsDiscreted",
"keys": "pred",
"threshold": 0.5
}
]
},
"handlers": [
{
"target": "LrScheduleHandler",
"lr_scheduler": "@lr_scheduler",
"print_lr": true
},
{
"target": "ValidationHandler",
"validator": "@Validate#evaluator",
"epoch_level": true,
"interval": "@val_interval"
},
{
"target": "StatsHandler",
"tag_name": "train_loss",
"output_transform": "$monai.handlers.from_engine(['loss'], first=True)"
},
{
"target": "TensorBoardStatsHandler",
"log_dir": "@output_dir",
"tag_name": "train_loss",
"output_transform": "$monai.handlers.from_engine(['loss'], first=True)"
}
],
"key_metric": {
"train_mean_dice": {
"target": "MeanDice",
"include_background": true,
"output_transform": "$monai.handlers.from_engine(['pred', 'label'])"
}
},
"trainer": {
"target": "SupervisedTrainer",
"max_epochs": "@epochs",
"device": "@device",
"train_data_loader": "@train#dataloader",
"network": "@network",
"loss_function": "@loss",
"optimizer": "@optimizer",
"inferer": "@train#inferer",
"postprocessing": "@train#postprocessing",
"key_train_metric": "@train#key_metric",
"train_handlers": "@train#handlers",
"amp": "@amp"
}
},
"validate": {
"preprocessing": {
"target": "Compose",
"transforms": "$@train#preprocessing_transforms"
},
"dataset": {
"target": "Dataset",
"data": "@val_datalist",
"transform": "@Validate#preprocessing"
},
"dataloader": {
"target": "DataLoader",
"dataset": "@Validate#dataset",
"batch_size": 1,
"shuffle": false,
"num_workers": 4
},
"inferer": {
"target": "SlidingWindowInferer",
"roi_size": [
240,
240,
160
],
"sw_batch_size": 1,
"overlap": 0.5
},
"postprocessing": {
"target": "Compose",
"transforms": [
{
"target": "Activationsd",
"keys": "pred",
"sigmoid": true
},
{
"target": "AsDiscreted",
"keys": "pred",
"threshold": 0.5
},
{
"target": "SplitDimd",
"keys": [
"pred",
"label"
],
"output_postfixes": [
"tc",
"wt",
"et"
]
}
]
},
"handlers": [
{
"target": "StatsHandler",
"iteration_log": false
},
{
"target": "TensorBoardStatsHandler",
"log_dir": "@output_dir",
"iteration_log": false
},
{
"target": "CheckpointSaver",
"save_dir": "@ckpt_dir",
"save_dict": {
"model": "@network"
},
"save_key_metric": true,
"key_metric_filename": "model.pt"
}
],
"key_metric": {
"val_mean_dice": {
"target": "MeanDice",
"include_background": true,
"output_transform": "$monai.handlers.from_engine(['pred', 'label'])"
}
},
"additional_metrics": {
"val_mean_dice_tc": {
"target": "MeanDice",
"include_background": true,
"output_transform": "$monai.handlers.from_engine(['pred_tc', 'label_tc'])"
},
"val_mean_dice_wt": {
"target": "MeanDice",
"include_background": true,
"output_transform": "$monai.handlers.from_engine(['pred_wt', 'label_wt'])"
},
"val_mean_dice_et": {
"target": "MeanDice",
"include_background": true,
"output_transform": "$monai.handlers.from_engine(['pred_et', 'label_et'])"
}
},
"evaluator": {
"target": "SupervisedEvaluator",
"device": "@device",
"val_data_loader": "@Validate#dataloader",
"network": "@network",
"inferer": "@Validate#inferer",
"postprocessing": "@Validate#postprocessing",
"key_val_metric": "@Validate#key_metric",
"additional_metrics": "@Validate#additional_metrics",
"val_handlers": "@Validate#handlers",
"amp": "@amp"
}
},
"training": [
"$monai.utils.set_determinism(seed=123)",
"$setattr(torch.backends.cudnn, 'benchmark', True)",
"$@train#trainer.run()"
]
}

@ulphypro
Author

Edited metadata.json code included in brats_mri_segmentation_v0.3.3

{
"schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
"version": "0.3.3",
"changelog": {
"0.3.3": "update to use monai 1.0.1",
"0.3.2": "enhance readme on commands example",
"0.3.1": "fix license Copyright error",
"0.3.0": "update license files",
"0.2.1": "fix network_data_format error",
"0.2.0": "unify naming",
"0.1.1": "update for MetaTensor",
"0.1.0": "complete the model package"
},
"monai_version": "1.0.1",
"pytorch_version": "1.13.0",
"numpy_version": "1.22.4",
"optional_packages_version": {
"nibabel": "4.0.1",
"pytorch-ignite": "0.4.9",
"scikit-learn": "1.1.3",
"tensorboard": "2.10.1"
},
"task": "Multimodal Brain Tumor segmentation",
"description": "A pre-trained model for volumetric (3D) segmentation of brain tumor subregions from multimodal MRIs based on BraTS 2018 data",
"authors": "MONAI team",
"copyright": "Copyright (c) MONAI Consortium",
"data_source": "https://www.med.upenn.edu/sbia/brats2018/data.html",
"data_type": "nibabel",
"image_classes": "4 channel data, T1c, T1, T2, FLAIR at 1x1x1 mm",
"label_classes": "3 channel data, channel 0 for Tumor core, channel 1 for Whole tumor, channel 2 for Enhancing tumor",
"pred_classes": "3 channels data, same as label_classes",
"eval_metrics": {
"val_mean_dice": 0.8518,
"val_mean_dice_tc": 0.8559,
"val_mean_dice_wt": 0.9026,
"val_mean_dice_et": 0.7905
},
"intended_use": "This is an example, not to be used for diagnostic purposes",
"references": [
"Myronenko, Andriy. '3D MRI brain tumor segmentation using autoencoder regularization.' International MICCAI Brainlesion Workshop. Springer, Cham, 2018. https://arxiv.org/abs/1810.11654"
],
"network_data_format": {
"inputs": {
"image": {
"type": "image",
"format": "magnitude",
"modality": "MR",
"num_channels": 1,
"spatial_shape": [
"8n",
"8
n",
"8n"
],
"dtype": "float32",
"value_range": [],
"is_patch_data": true,
"channel_def": {
"0": "T1c",
"1": "T1",
"2": "T2",
"3": "FLAIR"
}
}
},
"outputs": {
"pred": {
"type": "image",
"format": "segmentation",
"num_channels": 3,
"spatial_shape": [
"8
n",
"8n",
"8
n"
],
"dtype": "float32",
"value_range": [
0,
1
],
"is_patch_data": true,
"channel_def": {
"0": "Tumor core",
"1": "Whole tumor",
"2": "Enhancing tumor"
}
}
}
}
}

@ulphypro
Author

ulphypro commented Nov 22, 2022

@yiheng-wang-nv I edited only 6 lines in total.
inference.json
- "in_channels": 1
- "ensure_channel_first": true

train.json
- "in_channels": 1
- "ensure_channel_first": true

metadata.json
- In the "inputs" section, "num_channels": 1

@yiheng-wang-nv
Collaborator

this place is wrong:

"keys": [
"image",
"label",
"ensure_channel_first": true
]

"ensure_channel_first" should not be inside the list of "keys", it is another arg.

@ulphypro
Author

@yiheng-wang-nv

Someone uploaded a demo video showing how to edit the json files.

Project-MONAI/MONAILabel#1051
https://user-images.githubusercontent.com/11991079/194732741-6d55c171-0eb6-4661-97fc-8fa0004897be.mp4

That person put "ensure_channel_first" inside "keys".

Then, where should I put "ensure_channel_first"?

@yiheng-wang-nv
Collaborator

            {
                "_target_": "LoadImaged",
                "keys": [
                    "image",
                    "label"
                ],
                "ensure_channel_first": true
            },
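In other words, "ensure_channel_first" is a constructor argument of LoadImaged at the same level as "keys". The equivalent in plain MONAI Python would look like this (a minimal sketch):

    from monai.transforms import LoadImaged

    # "keys" only lists which dictionary entries to load;
    # ensure_channel_first is a separate constructor argument.
    load = LoadImaged(keys=["image", "label"], ensure_channel_first=True)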

@ulphypro
Author

Thank you for helping me @yiheng-wang-nv

I edited the inference.json and train.json code as you suggested.

  1. Edited inference.json code
    "preprocessing": {
    "target": "Compose",
    "transforms": [
    {
    "target": "LoadImaged",
    "keys": [
    "image",
    ],
    "ensure_channel_first":true
    },
    {
    "target": "NormalizeIntensityd",
    "keys": "image",
    "nonzero": true,
    "channel_wise": true
    }
    ]
    },

  2. Edited train.json code
    "preprocessing_transforms": [
    {
    "target": "LoadImaged",
    "keys": [
    "image",
    "label"
    ],
    "ensure_channel_firts": true
    },
    With this input, the error below occurs.

[2022-11-22 16:24:42,505] [12688] [MainThread] [ERROR] (uvicorn.error:119) - Traceback (most recent call last):
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 635, in lifespan
async with self.lifespan_context(app):
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 530, in __aenter__
await self.router.startup()
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 612, in startup
await handler()
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\app.py", line 106, in startup_event
instance = app_instance()
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\interfaces\utils\app.py", line 51, in app_instance
app = c(app_dir=app_dir, studies=studies, conf=conf)
File "C:\Users\TRL 3D\apps\monaibundle\main.py", line 90, in __init__
super().__init__(
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\interfaces\app.py", line 95, in __init__
self.infers = self.init_infers()
File "C:\Users\TRL 3D\apps\monaibundle\main.py", line 105, in init_infers
i = BundleInferTask(b, self.conf)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\tasks\infer\bundle.py", line 95, in __init__
self.bundle_config.read_config(os.path.join(path, "configs", config_paths[0]))
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\bundle\config_parser.py", line 300, in read_config
content.update(self.load_config_files(f, **kwargs))
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\bundle\config_parser.py", line 403, in load_config_files
for k, v in (cls.load_config_file(i, **kwargs)).items():
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\bundle\config_parser.py", line 382, in load_config_file
return json.load(f, **kwargs)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 293, in load
return loads(fp.read(),
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 40 column 16 (char 1182)

[2022-11-22 16:24:42,506] [12688] [MainThread] [ERROR] (uvicorn.error:56) - Application startup failed. Exiting.

@ulphypro
Author

Dear @yiheng-wang-nv
Sorry to bother you, but could you let me know how to correctly run 'Auto Segmentation' using 'brats_mri_segmentation_v0.3.3'?

@yiheng-wang-nv
Collaborator

This place in inference config is wrong:

"preprocessing": {
"target": "Compose",
"transforms": [
{
"target": "LoadImaged",
"keys": [
"image",
],
"ensure_channel_first":true
},

The comma after "image" raises the error. Also, since only one key named "image" is used, there is no need to put it into a list.
You can do:

"keys": "image",

or

"keys": ["image"],

@ulphypro
Author

ulphypro commented Nov 22, 2022

@yiheng-wang-nv

  1. edited inference.json code
    "preprocessing": {
    "target": "Compose",
    "transforms": [
    {
    "target": "LoadImaged",
    "keys": ["image"],
    "ensure_channel_first": true
    },

  2. edited train.json code
    "train": {
    "preprocessing_transforms": [
    {
    "target": "LoadImaged",
    "keys": [
    "image",
    "label"
    ],
    "ensure_channel_firt": true
    },

I'm sorry.

The same error still occurs even though I edited the json files:

[2022-11-22 17:52:02,288] [956] [MainThread] [ERROR] (uvicorn.error:119) - Traceback (most recent call last):
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 635, in lifespan
async with self.lifespan_context(app):
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 530, in __aenter__
await self.router.startup()
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 612, in startup
await handler()
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\app.py", line 106, in startup_event
instance = app_instance()
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\interfaces\utils\app.py", line 51, in app_instance
app = c(app_dir=app_dir, studies=studies, conf=conf)
File "C:\Users\TRL 3D\apps\monaibundle\main.py", line 90, in __init__
super().__init__(
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\interfaces\app.py", line 95, in __init__
self.infers = self.init_infers()
File "C:\Users\TRL 3D\apps\monaibundle\main.py", line 105, in init_infers
i = BundleInferTask(b, self.conf)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\tasks\infer\bundle.py", line 95, in __init__
self.bundle_config.read_config(os.path.join(path, "configs", config_paths[0]))
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\bundle\config_parser.py", line 300, in read_config
content.update(self.load_config_files(f, **kwargs))
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\bundle\config_parser.py", line 403, in load_config_files
for k, v in (cls.load_config_file(i, **kwargs)).items():
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\bundle\config_parser.py", line 382, in load_config_file
return json.load(f, **kwargs)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 293, in load
return loads(fp.read(),
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 40 column 16 (char 1182)

[2022-11-22 17:52:02,288] [956] [MainThread] [ERROR] (uvicorn.error:56) - Application startup failed. Exiting.

@diazandr3s
Collaborator

Hi,

We've tried to explain what's happening with this issue in this reply: Project-MONAI/MONAILabel#1051 (comment)

I'll try to explain here again:

  • Unfortunately, the monaibundle for brats (brats_mri_segmentation_v0.2.1) needs more work to properly manage the 4 modalities in the BRATS dataset. The solution isn't only to add "ensure_channel_first": true; the model might need to be re-trained in MONAI core.
  • If you want to use 3D Slicer, you can't use 4 modalities in a single nifti file. 3D Slicer can't interpret a nifti file with 4 modalities in it. I'd recommend using files that contain a single modality.

The issue is not only with the code, but it is also with the way the dataset was created. Each nifti file should only have one modality, not 4.

Hope this helps,
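For reference, a 4D Decathlon-style NIfTI with the four modalities stacked can be split into single-modality files with nibabel. This is a hedged sketch, not an official recipe; the path and the assumed channel order (flair, t1, t1ce, t2) should be checked against the dataset's dataset.json:

    import nibabel as nib
    import numpy as np

    src = "datasets/Task01_BrainTumour/imagesTr/BRATS_001.nii.gz"  # hypothetical path
    modalities = ["flair", "t1", "t1ce", "t2"]                     # assumed channel order

    img = nib.load(src)
    data = np.asanyarray(img.dataobj)  # expected shape: (H, W, D, 4)

    # Write one single-modality 3D NIfTI per channel, reusing the original affine.
    for idx, name in enumerate(modalities):
        nib.save(nib.Nifti1Image(data[..., idx], img.affine), f"BRATS_001_{name}.nii.gz")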

@yiheng-wang-nv
Collaborator

yiheng-wang-nv commented Nov 23, 2022

Hi @ulphypro

I modified inference.json as follows:

    "preprocessing": {
        "_target_": "Compose",
        "transforms": [
            {
                "_target_": "LoadImaged",
                "keys": ["image"],
                "ensure_channel_first": false
            },

and ran the bundle inference command, and no error happened. Since you did not post the actual json files here, I'm not able to investigate further. If possible, please push your changes to your forked repo and attach the link, thanks!

@diazandr3s
Collaborator

> I modified inference.json as follows ("keys": ["image"], "ensure_channel_first": false) and ran the bundle inference command, and no error happened. [...]

By "ran the bundle inference command, and no error happened", you mean using the command line interface, right?

@ulphypro
Author

ulphypro commented Nov 25, 2022

Dear @yiheng-wang-nv @diazandr3s

The error associated with inference.json and train.json doesn't occur any more, though it still couldn't detect the segment (shown in green).

Thank you for helping me.

bandicam.2022-11-25.10-00-04-016.mp4

@ulphypro
Author

ulphypro commented Nov 25, 2022

> We've tried to explain what's happening with this issue in this reply: Project-MONAI/MONAILabel#1051 (comment) [...] Each nifti file should only have one modality, not 4.

Dear @diazandr3s @yiheng-wang-nv

As @diazandr3s mentioned in Project-MONAI/MONAILabel#1051 (comment) and here, I downloaded a single dataset (BraTS2021, which includes flair, t1, t1ce and t2) and ran it in 3D Slicer again.

The running process is as follows.

  1. edited three files in 'brats_mri_segmentation_v0.3.3'
    (1) inference.json
    --> "in_channels": 1
    (2) train.json
    -->"in_channels": 1
    (3) metadata.json
    --> ("inputs" part) "num_channels": 1

  2. started MONAI Label Server after running command 'monailabel start_server --app apps/monaibundle --studies datasets/Task01_BrainTumour/BraTS2021 --conf models brats_mri_segmentation_v0.3.3' in Windows PowerShell.

(in 3D Slicer)
3. ran the MONAI Label module in 3D Slicer.

  4. clicked the 'refresh' button next to the 'MONAI Label Server' option in 3D Slicer.

  5. opened a dataset by clicking the 'Next Sample' button.

But an error occurs, as shown in the figure below, when I open a dataset by clicking the 'Next Sample' button.
[screenshot]

And the following error also occurs in Windows PowerShell:

Traceback (most recent call last):
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 366, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 75, in __call__
return await self.app(scope, receive, send)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\applications.py", line 269, in __call__
await super().__call__(scope, receive, send)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
raise exc
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\middleware\cors.py", line 84, in __call__
await self.app(scope, receive, send)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\exceptions.py", line 93, in __call__
raise exc
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\exceptions.py", line 82, in __call__
await self.app(scope, receive, sender)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
raise e
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 670, in __call__
await route.handle(scope, receive, send)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 266, in handle
await self.app(scope, receive, send)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 65, in app
response = await func(request)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\routing.py", line 227, in app
raw_response = await run_endpoint_function(
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\routing.py", line 160, in run_endpoint_function
return await dependant.call(**values)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\endpoints\infer.py", line 179, in api_run_inference
return run_inference(background_tasks, model, image, session_id, params, file, label, output)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\endpoints\infer.py", line 161, in run_inference
result = instance.infer(request)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\interfaces\app.py", line 299, in infer
result_file_name, result_json = task(request)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\tasks\infer\basic_infer.py", line 271, in __call__
data = self.run_inferer(data, device=device)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\tasks\infer\basic_infer.py", line 428, in run_inferer
network = self._get_network(device)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\tasks\infer\basic_infer.py", line 402, in _get_network
network.load_state_dict(model_state_dict, strict=self.load_strict)
File "C:\Users\TRL 3D\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1604, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SegResNet:
size mismatch for convInit.conv.weight: copying a param with shape torch.Size([16, 4, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 1, 3, 3, 3]).

Can you let me know what I should do to solve this error?
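For context, the size mismatch follows directly from the in_channels edit: the released checkpoint was trained with 4 input channels, so its first convolution weight has shape [16, 4, 3, 3, 3], while a model built with in_channels=1 expects [16, 1, 3, 3, 3]. A minimal sketch (assuming monai is installed) that reproduces the two shapes:

    from monai.networks.nets import SegResNet

    common = dict(blocks_down=[1, 2, 2, 4], blocks_up=[1, 1, 1],
                  init_filters=16, out_channels=3, dropout_prob=0.2)

    net_ckpt = SegResNet(in_channels=4, **common)  # what the released checkpoint was trained with
    net_edit = SegResNet(in_channels=1, **common)  # what the edited config builds

    print(net_ckpt.convInit.conv.weight.shape)  # torch.Size([16, 4, 3, 3, 3])
    print(net_edit.convInit.conv.weight.shape)  # torch.Size([16, 1, 3, 3, 3])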


@ulphypro
Author

ulphypro commented Nov 25, 2022

@diazandr3s

I downloaded the BraTS2021 dataset as you mentioned.

Should I run it using apps/radiology with the BraTS2021 dataset?

After starting the MONAI Label server with the command 'monailabel start_server --app apps/radiology --studies datasets/Task01_BrainTumour/imagesTr --conf models segmentation' in Windows PowerShell, I can't run 3D Slicer, because it doesn't support brain tumor segmentation.

The person who posted Project-MONAI/model-zoo#239 is also me.

@diazandr3s
Collaborator

Again, the BRATS dataset has 4 modalities (four image tensors) per patient/case. If you used the Task01 dataset from the Medical Segmentation Decathlon, you're using all four modalities in a single nifti file. This DOES NOT work as-is with Slicer: Project-MONAI/MONAILabel#1051 (comment)

This is what I recommend: Project-MONAI/MONAILabel#1051 (comment)

Hope that helps,

@yiheng-wang-nv
Collaborator

Hi @ulphypro , has the error been resolved? Should we keep this ticket open?

@EdenSehatAI

Hi,

I keep getting an error at the 9th epoch of training. Why is this the case?

Val 9/50 221/250 , dice_tc: 0.7409106 , dice_wt: 0.837617 , dice_et: 0.78997755 , time 4.82s
Val 9/50 222/250 , dice_tc: 0.74006385 , dice_wt: 0.8375687 , dice_et: 0.7895737 , time 4.82s

RuntimeError Traceback (most recent call last)
in <cell line: 11>()
9 loss_epochs,
10 trains_epoch,
---> 11 ) = trainer(
12 model=model,
13 train_loader=train_loader,

5 frames
/usr/local/lib/python3.10/dist-packages/torch/_utils.py in reraise(self)
692 # instantiate since we don't know how to
693 raise RuntimeError(msg) from None
--> 694 raise exception
695
696

RuntimeError: Caught RuntimeError in DataLoader worker process 1.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nibabel/loadsave.py", line 90, in load
stat_result = os.stat(filename)
FileNotFoundError: [Errno 2] No such file or directory: '/content/drive/MyDrive/EdenSehat/BraTS2021_Training_Data/TrainingData/TrainingData/BraTS2021_00390/BraTS2021_00390_flair.nii.gz'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/monai/transforms/transform.py", line 141, in apply_transform
return _apply_transform(transform, data, unpack_items, lazy, overrides, log_stats)
File "/usr/local/lib/python3.10/dist-packages/monai/transforms/transform.py", line 98, in _apply_transform
return transform(data, lazy=lazy) if isinstance(transform, LazyTrait) else transform(data)
File "/usr/local/lib/python3.10/dist-packages/monai/transforms/io/dictionary.py", line 162, in __call__
data = self.loader(d[key], reader)
File "/usr/local/lib/python3.10/dist-packages/monai/transforms/io/array.py", line 255, in __call__
img = reader.read(filename)
File "/usr/local/lib/python3.10/dist-packages/monai/data/image_reader.py", line 908, in read
img = nib.load(name, **kwargs_)
File "/usr/local/lib/python3.10/dist-packages/nibabel/loadsave.py", line 92, in load
raise FileNotFoundError(f"No such file or no access: '{filename}'")
FileNotFoundError: No such file or no access: '/content/drive/MyDrive/EdenSehat/BraTS2021_Training_Data/TrainingData/TrainingData/BraTS2021_00390/BraTS2021_00390_flair.nii.gz'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/monai/transforms/transform.py", line 141, in apply_transform
return _apply_transform(transform, data, unpack_items, lazy, overrides, log_stats)
File "/usr/local/lib/python3.10/dist-packages/monai/transforms/transform.py", line 98, in _apply_transform
return transform(data, lazy=lazy) if isinstance(transform, LazyTrait) else transform(data)
File "/usr/local/lib/python3.10/dist-packages/monai/transforms/compose.py", line 335, in __call__
result = execute_compose(
File "/usr/local/lib/python3.10/dist-packages/monai/transforms/compose.py", line 111, in execute_compose
data = apply_transform(
File "/usr/local/lib/python3.10/dist-packages/monai/transforms/transform.py", line 171, in apply_transform
raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.io.dictionary.LoadImaged object at 0x7c01153aa680>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.10/dist-packages/monai/data/dataset.py", line 112, in __getitem__
return self._transform(index)
File "/usr/local/lib/python3.10/dist-packages/monai/data/dataset.py", line 98, in _transform
return apply_transform(self.transform, data_i) if self.transform is not None else data_i
File "/usr/local/lib/python3.10/dist-packages/monai/transforms/transform.py", line 171, in apply_transform
raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.compose.Compose object at 0x7c01153abee0>

@KumoLiu
Collaborator

KumoLiu commented Dec 14, 2023

FileNotFoundError: No such file or no access: '/content/drive/MyDrive/EdenSehat/BraTS2021_Training_Data/TrainingData/TrainingData/BraTS2021_00390/BraTS2021_00390_flair.nii.gz'

Hi @EdenSehatAI, from the error message, it seems this file is missing.
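A small pre-flight check like the following can list every missing path before training starts (a hedged sketch; train_files stands for whatever datalist is fed to the Dataset):

    import os

    # Hypothetical datalist entries; in BraTS training scripts "image" usually maps
    # to one or more file paths and "label" to a single path.
    train_files = [
        {"image": ["/path/to/BraTS2021_00390/BraTS2021_00390_flair.nii.gz"],
         "label": "/path/to/BraTS2021_00390/BraTS2021_00390_seg.nii.gz"},
    ]

    def iter_paths(entry):
        for value in entry.values():
            if isinstance(value, str):
                yield value
            elif isinstance(value, (list, tuple)):
                yield from value

    missing = [p for entry in train_files for p in iter_paths(entry) if not os.path.exists(p)]
    print(f"{len(missing)} missing file(s)")
    for p in missing:
        print(p)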
