Cardiac motion estimation in 3D echocardiography provides critical insights into heart health but remains challenging due to the heart's complex geometry and the limited number of 3D deep learning image registration (DLIR) implementations. We propose a spatial feedback attention (FBA) module to enhance unsupervised 3D DLIR (as shown in the figure below). FBA leverages initial registration results to generate co-attention maps of residual registration errors, improving self-supervision by minimizing these errors.
FBA improves a variety of 3D DLIR designs, including transformer-enhanced networks, and performs well on both fetal and adult 3D echocardiography. Combining FBA with a spatial transformer and an attention-modified backbone achieves state-of-the-art results, highlighting the effectiveness of spatial attention in scaling DLIR from 2D to 3D.
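As a rough illustration of the idea, the PyTorch sketch below shows how a spatial feedback attention block might turn the residual between the fixed image and the initially warped moving image into a co-attention map that re-weights registration features. All layer sizes, the `warp` helper, and the names `FeedbackAttention3D`, `fixed`, `warped_moving`, and `features` are our own assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of spatial feedback attention (FBA) for 3D DLIR.
# Layer sizes and names are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(moving, flow):
    """Warp a 3D volume with a dense displacement field via a spatial transformer (grid_sample).
    Flow channels are assumed ordered (dx, dy, dz)."""
    _, _, d, h, w = moving.shape
    # Identity sampling grid in voxel coordinates, last dim ordered (x, y, z).
    zz, yy, xx = torch.meshgrid(
        torch.arange(d, device=moving.device),
        torch.arange(h, device=moving.device),
        torch.arange(w, device=moving.device),
        indexing="ij",
    )
    grid = torch.stack((xx, yy, zz), dim=-1).float()        # (D, H, W, 3)
    grid = grid.unsqueeze(0) + flow.permute(0, 2, 3, 4, 1)  # add displacements -> (B, D, H, W, 3)
    # Normalise sampling coordinates to [-1, 1] as required by grid_sample.
    norm = [2.0 * grid[..., i] / (s - 1) - 1.0 for i, s in enumerate((w, h, d))]
    return F.grid_sample(moving, torch.stack(norm, dim=-1), align_corners=True)


class FeedbackAttention3D(nn.Module):
    """Spatial feedback attention: turns the residual error of an initial registration into
    a co-attention map that re-weights registration features (single-channel images assumed)."""

    def __init__(self, feat_channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv3d(2, 8, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(8, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv3d(feat_channels, feat_channels, kernel_size=1)

    def forward(self, fixed, warped_moving, features):
        residual = fixed - warped_moving                            # error left by initial registration
        attn_map = self.attn(torch.cat([residual, fixed], dim=1))   # spatial co-attention map in [0, 1]
        attn_map = F.interpolate(attn_map, size=features.shape[2:],
                                 mode="trilinear", align_corners=False)
        return features + self.proj(features) * attn_map            # emphasise poorly registered regions
```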
Description:
Fig. 2 illustrates the optimal configuration identified for our FBA DLIR framework. This configuration integrates the FBA module (denoted as Block C) into an existing transformer-based DLIR network. The modular design allows individual components to be removed or replaced, enabling the exploration of alternative DLIR network architectures.
Key Points:
- Block C: Represents the FBA module.
- The figure demonstrates how the FBA module can be incorporated into different 3D CNN- and transformer-based DLIR networks.
- This modular approach allows us to systematically evaluate the impact of the FBA module on registration performance across diverse architectures.
Through this analysis, we assess whether including the FBA module yields consistent improvements in registration performance across DLIR networks.
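To make the modular evaluation concrete, here is a hypothetical sketch of how the FBA block (Block C) could be toggled on or off around a DLIR backbone for such an ablation. `TinyBackbone`, the flow head, and all hyperparameters are illustrative placeholders rather than the networks used in the paper; `warp` and `FeedbackAttention3D` are reused from the sketch above.

```python
# Hypothetical modular composition: Block C (FBA) can be enabled or disabled
# around any DLIR backbone. Placeholder modules only, not the authors' networks.
from typing import Optional

import torch
import torch.nn as nn


class TinyBackbone(nn.Module):
    """Placeholder encoder: any 3D CNN or transformer feature extractor could go here."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class ModularDLIR(nn.Module):
    def __init__(self, channels=16, fba: Optional[nn.Module] = None):
        super().__init__()
        self.backbone = TinyBackbone(channels)
        self.flow_head = nn.Conv3d(channels, 3, 3, padding=1)  # dense 3D displacement field
        self.fba = fba                                          # optional Block C

    def forward(self, fixed, moving):
        feats = self.backbone(torch.cat([fixed, moving], dim=1))
        flow = self.flow_head(feats)                 # initial registration
        warped = warp(moving, flow)                  # spatial transformer
        if self.fba is not None:
            feats = self.fba(fixed, warped, feats)   # feedback attention on residual errors
            flow = flow + self.flow_head(feats)      # refined displacement field
            warped = warp(moving, flow)
        return warped, flow


# Toy example: the same network can be trained with or without Block C.
fixed = torch.rand(1, 1, 32, 32, 32)
moving = torch.rand(1, 1, 32, 32, 32)
with_fba = ModularDLIR(fba=FeedbackAttention3D(feat_channels=16))
without_fba = ModularDLIR(fba=None)
warped, flow = with_fba(fixed, moving)
```

Training both variants under the same unsupervised loss would then isolate the contribution of Block C to registration performance.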
We used two distinct echocardiographic datasets in our study: the publicly available 3D MITEA dataset and a proprietary in-house 3D fetal dataset. The source code in this repository is designed so that our results can be reproduced using the 3D MITEA dataset, which can be accessed through the provided link.
We thank the authors of the following repository, which we used as a reference for our work: