I encountered a problem in the preprocessing stage; how should I solve it? #18
Comments
Hey @Yuanyangkai, have you managed to solve this issue?
I didn't solve it.
This is the command I entered.
}
When I run my tests_mednext file, I also get an error.

D:\Anaconda3\envs\MeNeXt\python.exe "D:/Pycharm/PyCharm Community Edition 2024.1/plugins/python-ce/helpers/pycharm/_jb_pytest_runner.py" --target tests_mednext_miccai_architectures.py::Test_MedNeXt_archs
============================= test session starts =============================
tests_mednext_miccai_architectures.py::Test_MedNeXt_archs::test_init_and_forward[S-3]
============================== 8 failed in 8.25s ==============================
Expected: (1, 4, 128, 128, 128)
self = <tests_mednext_miccai_architectures.Test_MedNeXt_archs object at 0x0000020E4982E100>
E assert torch.Size([4, 128, 128, 128]) == (1, 4, 128, 128, 128)
tests_mednext_miccai_architectures.py:24: AssertionError
(the same "Expected: (1, 4, 128, 128, 128)" assertion failure is repeated for all 8 parametrized test cases)
Process finished with exit code 1
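For context, the assertion expects a 5D output of shape (batch, classes, depth, height, width) but receives a 4D tensor, i.e. the batch dimension is missing. A minimal, purely illustrative PyTorch sketch of that mismatch (the tensors here are placeholders, not the actual MedNeXt model output):

```python
import torch

# The failing assertion compares the model output to (1, 4, 128, 128, 128):
# (batch=1, classes=4, depth, height, width). An output of
# torch.Size([4, 128, 128, 128]) means the batch dimension was dropped
# somewhere, e.g. by an extra .squeeze() on the input or output tensor.
out = torch.zeros(4, 128, 128, 128)      # shape reported by the failing tests
expected = (1, 4, 128, 128, 128)

assert tuple(out.shape) != expected      # reproduces the mismatch
out_fixed = out.unsqueeze(0)             # restore the batch dimension
assert tuple(out_fixed.shape) == expected
```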
My dataset of .nii.gz files ran successfully with the official nnunet_v2 code, but after I modified the JSON file it failed to run with the MedNeXt code. I have not found the problem.
So based on this, the nnunetv1 experiment planner cannot find any files to preprocess. I would start by making sure that the underlying nnunetv1 code can find your files. The first thing I can see that deviates from the standard nnunetv1 recommendations is the paths in the dataset.json. Could you try relative paths instead of absolute paths in the dataset.json?
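For illustration, a small sketch that rewrites absolute entries into the relative `./imagesTr/...` and `./labelsTr/...` form. It assumes the task path posted later in this thread and the standard nnU-Net v1 dataset.json layout (a `training` list of `{"image": ..., "label": ...}` entries and a `test` list of paths); adjust both to your setup.

```python
import json
from pathlib import Path

# Assumed location, taken from the paths posted in this thread; adjust as needed.
task_dir = Path(r"E:\PythonProject\MedNeXt\mednext\DATASET\nnUNet_raw_data_base\nnUNet_raw_data\Task521_Lung")
ds_file = task_dir / "dataset.json"

ds = json.loads(ds_file.read_text())

def to_relative(path_str: str, subdir: str) -> str:
    # Keep only the file name and prefix it with the expected subfolder.
    return f"./{subdir}/{Path(path_str).name}"

for entry in ds.get("training", []):
    entry["image"] = to_relative(entry["image"], "imagesTr")
    entry["label"] = to_relative(entry["label"], "labelsTr")
ds["test"] = [to_relative(p, "imagesTs") for p in ds.get("test", [])]

ds_file.write_text(json.dumps(ds, indent=4))
print("rewrote", ds_file)
```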
Still reporting an error.

(MeNeXt) PS E:\PythonProject\MedNeXt> mednextv1_plan_and_preprocess -t 521 -pl3d ExperimentPlanner3D_v21_customTargetSpacing_1x1x1
Task521_Lung
qqq <nnunet_mednext.experiment_planning.alternative_experiment_planning.target_spacing.experiment_planner_v21_isotropic1mm.ExperimentPlanner3D_v21_customTargetSpacing_1x1x1 object at 0x00000184087D9340>
(MeNeXt) PS E:\PythonProject\MedNeXt> mednextv1_plan_and_preprocess -t 521 -pl3d ExperimentPlanner3D_v21_customTargetSpacing_1x1x1
E:\PythonProject\MedNeXt\mednext\DATASET\nnUNet_raw_data_base\nnUNet_raw_data\Task521_Lung\imagesTr\CTA_007
E:\PythonProject\MedNeXt\mednext\DATASET\nnUNet_raw_data_base\nnUNet_raw_data\Task521_Lung\imagesTr\CTA_001
E:\PythonProject\MedNeXt\mednext\DATASET\nnUNet_raw_data_base\nnUNet_raw_data\Task521_Lung\imagesTr\CTA_002
E:\PythonProject\MedNeXt\mednext\DATASET\nnUNet_raw_data_base\nnUNet_raw_data\Task521_Lung\imagesTr\CTA_006
E:\PythonProject\MedNeXt\mednext\DATASET\nnUNet_raw_data_base\nnUNet_raw_data\Task521_Lung\imagesTr\CTA_005
E:\PythonProject\MedNeXt\mednext\DATASET\nnUNet_raw_data_base\nnUNet_raw_data\Task521_Lung\imagesTr\CTA_004
E:\PythonProject\MedNeXt\mednext\DATASET\nnUNet_raw_data_base\nnUNet_raw_data\Task521_Lung\imagesTr\CTA_003
Task521_Lung
number of threads: (8, 8)
D:\Anaconda3\envs\MeNeXT\lib\site-packages\numpy\core\fromnumeric.py:3504: RuntimeWarning: Mean of empty slice.
return _methods._mean(a, axis=axis, dtype=dtype,
D:\Anaconda3\envs\MeNeXT\lib\site-packages\numpy\core\_methods.py:129: RuntimeWarning: invalid value encountered in scalar divide
ret = ret.dtype.type(ret / rcount)
not using nonzero mask for normalization
Are we using the nonzero mask for normalization? OrderedDict([(0, False)])
Traceback (most recent call last):
File "\?\D:\Anaconda3\envs\MeNeXt\Scripts\mednextv1_plan_and_preprocess-script.py", line 33, in
sys.exit(load_entry_point('mednextv1', 'console_scripts', 'mednextv1_plan_and_preprocess')())
File "e:\pythonproject\mednext\mednext\nnunet_mednext\experiment_planning\nnUNet_plan_and_preprocess.py", line 159, in main
exp_planner.plan_experiment()
File "e:\pythonproject\mednext\mednext\nnunet_mednext\experiment_planning\experiment_planner_baseline_3DUNet.py", line 266, in plan_experiment
median_shape = np.median(np.vstack(new_shapes), 0)
File "D:\Anaconda3\envs\MeNeXT\lib\site-packages\numpy\core\shape_base.py", line 289, in vstack
return _nx.concatenate(arrs, 0, dtype=dtype, casting=casting)
ValueError: need at least one array to concatenate
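This ValueError comes from `np.vstack(new_shapes)` being called on an empty list: the experiment planner found no cropped cases to compute statistics from, which matches the earlier observation that the nnunetv1 code cannot find any files to preprocess. Typical causes are wrong paths in dataset.json or image files that do not follow the `<case>_0000.nii.gz` modality-suffix naming. A hedged sanity-check sketch, using the paths from this thread and assuming the standard nnU-Net v1 naming conventions:

```python
from pathlib import Path

# Paths taken from this thread; adjust to your own nnUNet_raw_data_base.
task_dir = Path(r"E:\PythonProject\MedNeXt\mednext\DATASET\nnUNet_raw_data_base\nnUNet_raw_data\Task521_Lung")

images = sorted((task_dir / "imagesTr").glob("*.nii.gz"))
labels = sorted((task_dir / "labelsTr").glob("*.nii.gz"))
print(f"{len(images)} training images, {len(labels)} training labels")

# nnU-Net v1 expects images named <case>_XXXX.nii.gz (e.g. CTA_001_0000.nii.gz)
# and labels named <case>.nii.gz (e.g. CTA_001.nii.gz).
for img in images:
    stem = img.name[:-len(".nii.gz")]
    if not (len(stem) > 5 and stem[-5] == "_" and stem[-4:].isdigit()):
        print("image missing the modality suffix (_0000):", img.name)

for lab in labels:
    case = lab.name[:-len(".nii.gz")]
    if not any(i.name.startswith(case + "_") for i in images):
        print("label without a matching image:", lab.name)
```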