Lin Yang, Shawn Xu, Lu Yang, Abigail E Huang, Ilana Traynis, Siva Balasubramanian, Tayyeba K. Ali, Daniel Golden; Deep Learning Based Lesion Segmentation in Fundus Images. Invest. Ophthalmol. Vis. Sci. 2020;61(7):2046.
Fundus images acquired for Diabetic Retinopathy (DR) screening are used to identify various lesions associated with the disease. Automated segmentation of these lesions can provide valuable quantitative information about their roles in disease progression and pave the way for clinical applications like assisted read or automated triaging/diagnosis for DR. In this study, we explore the potential of a state-of-the-art deep learning (DL) model for segmenting DR-related lesions in fundus images.
13,647 images from DR patients, along with pixel-wise labels for microaneurysms (MA), hemorrhages (Heme), hard exudates (HE), cotton wool spots (CWS), neovascularization (NV), intraretinal microvascular abnormalities (IRMA), preretinal hemorrhage (PRH), vitreous hemorrhage (VH), and laser scars (LS), were collected to serve as our train/tune/test datasets. We split this dataset into training (80%), tuning (10%), and testing (10%) sets. We further supplemented the training set with ~300k healthy images and each of the tune and test sets with ~1000 healthy images. We used the SA-Net network (the state-of-the-art network for the GlaS Challenge) with standard image augmentations (rotation, flipping, and brightness/contrast/hue adjustments). Due to low inter-rater agreement on MA vs. Heme and on NV vs. IRMA, we merged MA/Heme and NV/IRMA into combined classes (Fig 1).
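The data-preparation steps above (an 80/10/10 split and the merging of low-agreement classes) can be sketched as follows; the class indices and function names are illustrative assumptions, not details from the abstract:

```python
import numpy as np

# Hypothetical integer class indices for the pixel-wise label maps
# (0 = background); the actual encoding is not given in the abstract.
MA, HEME, HE, CWS, NV, IRMA, PRH, VH, LS = range(1, 10)

def split_ids(ids, seed=0):
    """Shuffle image IDs and split them 80/10/10 into train/tune/test."""
    rng = np.random.default_rng(seed)
    ids = rng.permutation(np.asarray(ids))
    n = len(ids)
    n_train, n_tune = int(0.8 * n), int(0.1 * n)
    return ids[:n_train], ids[n_train:n_train + n_tune], ids[n_train + n_tune:]

def merge_classes(label_map):
    """Merge MA/Heme and NV/IRMA into combined classes,
    reflecting the low inter-rater agreement on those pairs."""
    merged = label_map.copy()
    merged[merged == HEME] = MA   # MA + Heme -> one combined class
    merged[merged == IRMA] = NV   # NV + IRMA -> one combined class
    return merged
```

A split on 13,647 IDs would yield roughly 10,917 training, 1,364 tuning, and 1,366 test images under this scheme.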
We evaluated segmentation accuracy using pixel-level positive predictive value (PPV), sensitivity, and F1 score. The results are shown in Table 1.
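As a minimal sketch of these pixel-level metrics, assuming binary prediction and ground-truth masks per lesion class (the evaluation code itself is not part of the abstract):

```python
import numpy as np

def pixel_metrics(pred, truth):
    """Pixel-level PPV (precision), sensitivity (recall), and F1 score
    for one lesion class, given boolean masks of the same shape."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()    # true-positive pixels
    fp = np.logical_and(pred, ~truth).sum()   # false-positive pixels
    fn = np.logical_and(~pred, truth).sum()   # false-negative pixels
    ppv = tp / (tp + fp) if tp + fp else 0.0
    sens = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * ppv * sens / (ppv + sens) if ppv + sens else 0.0
    return ppv, sens, f1
```

F1 here is the harmonic mean of PPV and sensitivity, so a single score penalizes both over- and under-segmentation.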
For lesions with many training examples (MA/Heme, HE) or distinct appearances (PRH/VH, LS), the DL model achieved F1 scores of ~80%. For lesions that were less represented in our training data (NV/IRMA) or that resemble certain camera artifacts (CWS), the model did not perform as well. These results suggest that DL-based segmentation networks have great potential for lesion segmentation in fundus images and can support quantitative analysis, assisted reads, and lesion-based triage. Future studies will assess whether additional training data further improves the model and will explore more sophisticated data augmentation techniques.
This is a 2020 ARVO Annual Meeting abstract.
Figure 1. Automated lesion segmentation
Table 1. Segmentation accuracy for each lesion. PPV, sensitivity, and F1 score are measured on the test set.