
GroupMorph: Medical Image Registration via Grouping Network with Contextual Fusion

This is the official PyTorch implementation of "GroupMorph: Medical Image Registration via Grouping Network with Contextual Fusion," IEEE Transactions on Medical Imaging (TMI), 2024.

Keywords: Deformable image registration, deformation decomposition, contextual feature fusion.

Prerequisites

  • Python 3.8
  • PyTorch 1.7.0
  • NumPy
  • NiBabel
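
All dependencies are available on PyPI. The command below is only a rough starting point; the exact PyTorch 1.7.0 wheel depends on your CUDA setup, so refer to the official PyTorch installation instructions if it does not work out of the box.

pip install torch==1.7.0 numpy nibabel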

Introduction

Framework: (figure)

Decoder: (figure)

We propose a novel registration model called GroupMorph. Unlike typical pyramid-based methods, we adopt a grouping-combination strategy to predict the deformation field at each resolution. Specifically, we perform group-wise correlation to measure the similarity of grouped features. After that, n groups of deformation subfields with different receptive fields are predicted in parallel. By composing these subfields, a deformation field covering multiple receptive-field ranges is formed, which can effectively capture both large and small deformations. Meanwhile, a contextual fusion module is designed to fuse contextual features and provide inter-group information to the field estimator of the next level. By leveraging the inter-group correspondence, the synergy among deformation subfields is enhanced.
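
As a rough illustration of the grouping-combination idea described above, the sketch below splits a pair of feature maps into groups, computes a simple per-group correlation, predicts one subfield per group with an increasingly dilated convolution (to vary the receptive field), and composes the subfields by summation. This is a minimal toy example under those simplifying assumptions, not the actual GroupMorph decoder.

import torch
import torch.nn as nn

class GroupedFieldEstimator(nn.Module):
    """Toy sketch of the grouping-combination idea: one deformation
    subfield per feature group, composed by summation. Not the paper's
    exact decoder."""
    def __init__(self, channels=16, groups=4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        # one head per group; growing dilation widens the receptive field
        self.heads = nn.ModuleList([
            nn.Conv3d(2 * channels // groups + 1, 3, kernel_size=3,
                      padding=d, dilation=d)
            for d in range(1, groups + 1)
        ])

    def forward(self, feat_moving, feat_fixed):
        # split both feature maps into `groups` chunks along the channel axis
        m_groups = feat_moving.chunk(self.groups, dim=1)
        f_groups = feat_fixed.chunk(self.groups, dim=1)
        subfields = []
        for m, f, head in zip(m_groups, f_groups, self.heads):
            corr = (m * f).mean(dim=1, keepdim=True)  # simple group-wise correlation map
            subfields.append(head(torch.cat([m, f, corr], dim=1)))
        # compose the n subfields into a single 3-channel deformation field
        return torch.stack(subfields).sum(dim=0)

# usage: two 16-channel feature volumes of size 32^3 -> field of shape (1, 3, 32, 32, 32)
flow = GroupedFieldEstimator()(torch.randn(1, 16, 32, 32, 32),
                               torch.randn(1, 16, 32, 32, 32))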

Training

Step 1: Replace ../neurite-oasis.v1.0/OASIS_OAS1_*_MR1 with the path to your training data. You may also need to implement your own dataset class, analogous to Dataset_OASIS in Functions.py.
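
If your scans are stored as NIfTI volumes, a replacement dataset could look roughly like the sketch below. The class name, file pattern, normalization, and random pairing are hypothetical; mirror the interface that Dataset_OASIS in Functions.py actually provides.

import glob
import numpy as np
import nibabel as nib
import torch
from torch.utils.data import Dataset

class MyVolumeDataset(Dataset):
    """Hypothetical example: yields (moving, fixed) volume pairs from a
    directory of NIfTI scans. Adapt it to match Dataset_OASIS."""
    def __init__(self, pattern='../my_data/*.nii.gz'):
        self.paths = sorted(glob.glob(pattern))

    def __len__(self):
        return len(self.paths)

    def _load(self, path):
        vol = nib.load(path).get_fdata().astype(np.float32)
        vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)  # scale to [0, 1]
        return torch.from_numpy(vol).unsqueeze(0)                 # add a channel axis

    def __getitem__(self, idx):
        moving = self._load(self.paths[idx])
        fixed = self._load(self.paths[np.random.randint(len(self.paths))])
        return moving, fixed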

Step 2: Set the groups variable in train.py to specify the number of groups at each level, and change imgshape to match the resolution of your data.
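
For example, the two settings in train.py might look like the lines below; the values shown, and whether groups is a per-level list in the actual code, are assumptions to adapt to your data.

groups = [4, 4, 2]           # number of deformation-field groups at each pyramid level (placeholder values)
imgshape = (160, 192, 224)   # spatial resolution of your input volumes (placeholder values)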

Step 3: You may adjust the model size via the --bs_ch argument, which defaults to 8.
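
A training run with the default model width could then be launched as follows; any flags other than --bs_ch are not documented here, so check the argument parser in train.py.

python train.py --bs_ch=8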

Testing

Use this command to obtain the quantitative results.

python test.py --modelpath=/xx/xx/

Dataset

We used four datasets to validate our method:

OASIS: We use the neurite-oasis version, which undergoes the same preprocessing as HyperMorph. This version of OASIS is available here.

IXI: We use the IXI dataset as preprocessed by TransMorph. A detailed introduction and the download link can be found here.

Hippocampus Dataset: The hippocampus dataset is available on Learn2Reg Task 2.

Abdomen Dataset: The abdomen dataset comes from the Abdomen MR-CT task of the Learn2Reg challenge.

Contact

If you have any questions, please feel free to contact us by e-mail (zuopengtan@mail.dlut.edu.cn).

Acknowledgements

Some code in this repository is modified from LapIRN and ULAE.

Many thanks for their great contributions!
