[
{
"melba_id": "2022:018",
"doi": "10.59275/j.melba.2022-16cc",
"title": "Joint Frequency and Image Space Learning for MRI Reconstruction and Analysis",
"authors": [
"Nalini M., Singh#Massachusetts Institute of Technology#0000-0003-3584-2198",
"Juan Eugenio, Iglesias#Massachusetts General Hospital<br>Harvard Medical School<br>University College London<br>Massachusetts Institute of Technology#0000-0001-7569-173X",
"Elfar Adalsteinsson#Massachusetts Institute of Technology#0000-0002-7637-2914",
"Adrian V., Dalca#Massachusetts General Hospital<br>Harvard Medical School<br>Massachusetts Institute of Technology#0000-0002-8422-0136",
"Polina Golland#Massachusetts Institute of Technology#0000-0003-2516-731X"
],
"corresponding": "nmsingh@mit.edu",
"abstract": "We propose neural network layers that explicitly combine frequency and image feature representations and show that they can be used as a versatile building block for reconstruction from frequency space data. Our work is motivated by the challenges arising in MRI acquisition where the signal is a corrupted Fourier transform of the desired image. The proposed joint learning schemes enable both correction of artifacts native to the frequency space and manipulation of image space representations to reconstruct coherent image structures at every layer of the network. This is in contrast to most current deep learning approaches for image reconstruction that treat frequency and image space features separately and often operate exclusively in one of the two spaces. We demonstrate the advantages of joint convolutional learning for a variety of tasks, including motion correction, denoising, reconstruction from undersampled acquisitions, and combined undersampling and motion correction on simulated and real world multicoil MRI data. The joint models produce consistently high quality output images across all tasks and datasets. When integrated into a state of the art unrolled optimization network with physics-inspired data consistency constraints for undersampled reconstruction, the proposed architectures significantly improve the optimization landscape, which yields an order of magnitude reduction of training time. This result suggests that joint representations are particularly well suited for MRI signals in deep learning networks. Our code and pretrained models are publicly available at <a href='https://github.com/nalinimsingh/interlacer'>https://github.com/nalinimsingh/interlacer</a>.",
"keywords": [
"Magnetic Resonance Imaging",
"Deep Learning",
"Undersampled Reconstruction",
"Motion Correction",
"Denoising"
],
"sch_manuscript": 1589010,
"arxiv": "2007.01441v4",
"pdf_file": "pdf/2022:018.pdf",
"pages": [
1,
28
],
"cover_file": "cover/2022:018.jpg",
"volume": 1,
"issue": "June 2022 issue",
"public": true,
"pubdate": "2022/06/23",
"links": {
"Code": "https://github.com/nalinimsingh/interlacer",
"Video": "https://youtu.be/9dNspg8bIcM"
}
},
{
"melba_id": "2023:004",
"doi": "10.59275/j.melba.2023-5g54",
"title": "Deep Weakly-Supervised Learning Methods for Classification and Localization in Histology Images: A Survey",
"authors": [
"Jérôme Rony#LIVIA, Dept. of Systems Engineering, École de technologie supérieure, Montreal, Canada#0000-0002-6359-6142",
"Soufiane Belharbi#LIVIA, Dept. of Systems Engineering, École de technologie supérieure, Montreal, Canada#0000-0001-6326-380X",
"Jose Dolz#LIVIA, Dept. of Software and IT Engineering, École de technologie supérieure, Montreal, Canada#0000-0002-2436-7750",
"Ismail, Ben Ayed#LIVIA, Dept. of Systems Engineering, École de technologie supérieure, Montreal, Canada#",
"Luke McCaffrey#Goodman Cancer Research Centre, Dept. of Oncology, McGill University, Montreal, Canada#",
"Eric Granger#LIVIA, Dept. of Systems Engineering, École de technologie supérieure, Montreal, Canada#0000-0001-6116-7945"
],
"corresponding": "soufiane.belharbi@gmail.com",
"abstract": "Using state-of-the-art deep learning (DL) models to diagnose cancer from histology data presents several challenges related to the nature and availability of labeled histology images, including image size, stain variations, and label ambiguity. In addition, cancer grading and the localization of regions of interest (ROIs) in such images normally rely on both image- and pixel-level labels, with the latter requiring a costly annotation process. Deep weakly-supervised object localization (WSOL) methods provide different strategies for low-cost training of DL models. Given only image-class annotations, these methods can be trained to simultaneously classify an image, and yield class activation maps (CAMs) for ROI localization. This paper provides a review of deep WSOL methods to identify and locate diseases in histology images, without the need for pixel-level annotations. We propose a taxonomy in which these methods are divided into bottom-up and top-down methods according to the information flow in models. Although the latter have seen only limited progress, recent bottom-up methods are currently driving a lot of progress with the use of deep WSOL methods. Early works focused on designing different spatial pooling functions. However, those methods quickly peaked in term of localization accuracy and revealed a major limitation, namely, – the under-activation of CAMs, which leads to high false negative localization. Subsequent works aimed to alleviate this shortcoming and recover the complete object from the background, using different techniques such as perturbation, self-attention, shallow features, pseudo-annotation, and task decoupling.<br>In the present paper, representative deep WSOL methods from our taxonomy are also evaluated and compared in terms of classification and localization accuracy using two challenging public histology datasets – one for colon cancer (GlaS), and a second, for breast cancer (CAMELYON16). 
Overall, the results indicate poor localization performance, particularly for generic methods that were initially designed to process natural images. Methods designed to address the challenges posed by histology data often use priors such as ROI size, or additional pixel-wise supervision estimated from a pre-trained classifier, allowing them to achieve better results. However, all the methods suffer from high false positive/negative localization. Classification performance is mainly affected by the model selection process, which uses either the classification or the localization metric. Finally, four key challenges are identified in the application of deep WSOL methods in histology, namely, – under-/over-activation of CAMs, sensitivity to thresholding, and model selection – and research avenues are provided to mitigate them. Our code is publicly available at <a href='https://github.com/jeromerony/survey_wsl_histology'>https://github.com/jeromerony/survey_wsl_histology</a>",
"keywords": [
"Medical/Histology Image Analysis",
"Computer-Aided Diagnosis",
"Deep Learning",
"Weakly Supervised Object Localization",
"Weakly Supervised Learning",
"Image Classification"
],
"sch_manuscript": 1827231,
"arxiv": "1909.03354v7",
"pdf_file": "pdf/2023:004.pdf",
"pages": [
96,
150
],
"cover_file": "cover/2023:004.png",
"volume": 2,
"issue": "March 2023 issue",
"public": true,
"pubdate": "2023/03/06",
"links": {
"Code": "https://github.com/jeromerony/survey_wsl_histology"
}
}
]