
Commit b040e7b

Author: juliusge
Commit message: upd paper
1 parent b2b096d commit b040e7b

File tree

2 files changed: +13, -1 lines changed


paper/paper.bib

Lines changed: 12 additions & 0 deletions
@@ -146,3 +146,15 @@ @article{fire
   pages={16--28},
   year={2017}
 }
+
+@article{tummala2023,
+  title={EfficientNetV2 based ensemble model for quality estimation of diabetic retinopathy images from DeepDRiD},
+  author={Tummala, Sudhakar and Thadikemalla, Venkata Sainath Gupta and Kadry, Seifedine and Sharaf, Mohamed and Rauf, Hafiz Tayyab},
+  journal={Diagnostics},
+  volume={13},
+  number={4},
+  pages={622},
+  year={2023},
+  publisher={MDPI}
+}
+

paper/paper.md

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ The Fundus Image Toolbox was developed to address this need within the medical i
 # Tools
 The main functionalities of the Fundus Image Toolbox are:
 
-- Quality prediction (\autoref{fig:example}a). We trained an ensemble of ResNets and EfficientNets on the combined DeepDRiD and DrimDB datasets [@deepdrid;@drimdb] to predict the gradeability of fundus images. The datasets are publicly available and comprise images of retinas with diabetic retinopathy, healthy retinas and outliers such as outer eye images. The model ensemble achieved an accuracy of 0.78 and an area under the receiver operating characteristic curve of 0.84 on a DeepDRiD test split and 1.0 and 1.0 on a DrimDB test split.
+- Quality prediction (\autoref{fig:example}a). We trained an ensemble of ResNets and EfficientNets on the combined DeepDRiD and DrimDB datasets [@deepdrid;@drimdb] to predict the gradeability of fundus images. The datasets are publicly available and comprise images of retinas with diabetic retinopathy, healthy retinas and outliers such as outer eye images. The model ensemble achieved an accuracy of 0.78 and an area under the receiver operating characteristic curve (AUROC) of 0.84 on a DeepDRiD test split, surpassing the previous best model evaluated on DeepDRiD [@tummala2023]. Further, on a DrimDB test split, our model achieved both an accuracy and an AUROC of 1.0.
 - Fovea and optic disc localization (\autoref{fig:example}b). The center coordinates of the fovea and optic disc can be predicted using a multitask EfficientNet model. We trained the model on the combined ADAM, REFUGE and IDRID datasets which include images from eyes with age-related macular degeneration, glaucoma, diabetic retinopathy and healthy retinas [@adam;@refuge;@idrid]. All datasets are publicly available. On our test split, the model achieved an average distance to the fovea and optic disc targets of 0.88 % of the image size. This corresponds to a mean distance of 3.08 pixels in the 350 x 350 pixel images used for training and testing.
 - Vessel segmentation (\autoref{fig:example}c). The segmentation method produces a mask of blood vessels in a fundus image using an ensemble of FR-UNets. The ensemble achieved an average Dice score of 0.887 on the test split of the FIVES dataset [@koehler2024]. FIVES includes images with age-related macular degeneration, glaucoma, diabetic retinopathy and healthy retinas [@fives].
 - Registration (\autoref{fig:example}d). Two fundus images of the same eye can be aligned using SuperRetina [@liu2022]. The deep learning based model detects key points on the vessel trees of the two images and matches them. This results in a registered version of the second image that is aligned with the first. SuperRetina produced registrations of at least acceptable quality in 98.5 % of the cases on the test split of the FIRE dataset [@fire].
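
The bullets in the changed hunk quote evaluation metrics: accuracy and AUROC for quality prediction, mean localization distance as a fraction of image size, and a Dice score for vessel segmentation. The sketch below shows one way such metrics can be computed with numpy and scikit-learn on toy placeholder arrays; it is an illustrative assumption, not code from the Fundus Image Toolbox, and all array and function names are made up for the example.

```python
# Illustrative sketch (not Fundus Image Toolbox code): computing the kinds of
# metrics quoted in the paper excerpt, on toy placeholder data.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# --- Quality prediction: accuracy and AUROC over binary gradeability labels ---
y_true = np.array([1, 0, 1, 1, 0, 1])               # ground-truth gradeability
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.8])   # ensemble scores in [0, 1]
acc = accuracy_score(y_true, (y_prob >= 0.5).astype(int))
auroc = roc_auc_score(y_true, y_prob)

# --- Fovea / optic disc localization: mean distance as a fraction of image size ---
pred_xy = np.array([[170.0, 180.0], [60.0, 65.0]])  # predicted (x, y) centers
true_xy = np.array([[172.0, 177.0], [58.0, 68.0]])  # target centers
image_size = 350                                    # paper uses 350 x 350 px images
dist_px = np.linalg.norm(pred_xy - true_xy, axis=1).mean()
dist_pct = 100.0 * dist_px / image_size             # distance in % of image size

# --- Vessel segmentation: Dice score between binary masks ---
def dice(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Dice coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    return 2.0 * intersection / (pred_mask.sum() + true_mask.sum())

pred_mask = np.random.rand(350, 350) > 0.5          # placeholder predicted mask
true_mask = np.random.rand(350, 350) > 0.5          # placeholder ground-truth mask
print(f"acc={acc:.2f} auroc={auroc:.2f} dist={dist_pct:.2f}% dice={dice(pred_mask, true_mask):.3f}")
```

For example, a mean distance of 3.08 pixels in a 350 x 350 image gives dist_pct = 100 * 3.08 / 350 = 0.88 %, which matches the figure quoted in the localization bullet.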
