Self supervised learning model for Neuroimaging #257
-
Hi @GuanghuiFU! Thanks a lot for opening this discussion :) Currently in ClinicaDL, data augmentation is managed by a single function, and the complete list of available transforms corresponds to a dictionary defined in that function.
The main problem is that these transforms are not very general, and we cannot control their hyperparameters. What do you think of this project? Would you like to contribute in this way? Or would you prefer to start with something else?
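To make this concrete, here is a minimal sketch of what a more configurable design could look like: each transform exposes its hyperparameters, and a dictionary maps names to parameterized factories so the values can come from a config file or the command line. All names here (`RandomNoising`, `TRANSFORM_FACTORIES`, `build_transforms`) are hypothetical illustrations, not the actual ClinicaDL API:

```python
import torch

class RandomNoising:
    """Hypothetical transform: adds Gaussian noise with a configurable sigma."""
    def __init__(self, sigma: float = 0.1):
        self.sigma = sigma

    def __call__(self, image: torch.Tensor) -> torch.Tensor:
        return image + self.sigma * torch.randn_like(image)

# Map transform names to factories so hyperparameters can be set
# externally instead of being hard-coded inside the function.
TRANSFORM_FACTORIES = {
    "Noise": RandomNoising,
}

def build_transforms(names, params=None):
    """Instantiate the requested transforms with per-transform kwargs."""
    params = params or {}
    return [TRANSFORM_FACTORIES[name](**params.get(name, {})) for name in names]

# Example: build_transforms(["Noise"], {"Noise": {"sigma": 0.05}})
```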
-
I think it can be divided into two parts. Currently, I am doing the first part.
-
Foundation models are trained on broad data at scale and can be adapted to a wide range of downstream tasks [1]. Self-supervised learning (SSL) is one way to achieve this: a good SSL feature extractor can avoid the cost of annotating large-scale datasets [2], which is the most crucial challenge in the medical image analysis domain [3]. It would be a good idea to combine some SSL models with the ClinicaDL framework.
It may contain three parts: (1) use data augmentation to create data for the pretext task, (2) train on the pretext task for feature learning, and (3) evaluate on a downstream task. A rough sketch of this pipeline is given below.
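As an illustration of these three parts, here is a minimal PyTorch sketch. The encoder, loaders, augmentation, and pretext loss are placeholders to be filled in; this is an assumed structure, not ClinicaDL code:

```python
import torch
from torch import nn

def two_views(volume: torch.Tensor, augment):
    """Part 1: data augmentation creates two views of the same scan (the pretext data)."""
    return augment(volume), augment(volume)

def pretrain(encoder: nn.Module, unlabeled_loader, augment, pretext_loss, epochs: int = 10):
    """Part 2: train the encoder on the pretext task; no labels are needed."""
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
    for _ in range(epochs):
        for volume in unlabeled_loader:
            v1, v2 = two_views(volume, augment)
            loss = pretext_loss(encoder(v1), encoder(v2))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder

def downstream(encoder: nn.Module, head: nn.Module, labeled_loader, epochs: int = 10):
    """Part 3: freeze the pretrained encoder and train a small head on the labeled task."""
    encoder.requires_grad_(False)
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for volume, label in labeled_loader:
            loss = criterion(head(encoder(volume)), label)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```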
Recently, I have been trying to apply the method of [3] to brain data, hoping to train a good self-supervised model and to build a better feature extractor based on the characteristics of brain images; the contrastive loss at the core of that method is sketched below.
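For reference, the global contrastive term in [3] follows an NT-Xent-style formulation. A minimal sketch, assuming `z1` and `z2` are the embeddings of the two augmented views of the same batch of scans:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """NT-Xent loss over a batch of paired embeddings of shape (N, D)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / tau                               # cosine similarities / temperature
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))          # exclude self-similarity
    # Positives: row i pairs with row i + n (the two views of the same scan).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```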
References:
[1] Bommasani, Rishi, et al. "On the opportunities and risks of foundation models." arXiv preprint arXiv:2108.07258 (2021).
[2] Jing, Longlong, and Yingli Tian. "Self-supervised visual feature learning with deep neural networks: A survey." IEEE Transactions on Pattern Analysis and Machine Intelligence (2020).
[3] Chaitanya, Krishna, et al. "Contrastive learning of global and local features for medical image segmentation with limited annotations." arXiv preprint arXiv:2006.10511 (2020).