June 2020
Volume 61, Issue 7
ARVO Annual Meeting Abstract  |   June 2020
Deep Learning Based Lesion Segmentation in Fundus Images
Author Affiliations & Notes
  • Lin Yang
    Verily Life Sciences, California, United States
  • Shawn Xu
    Verily Life Sciences, California, United States
  • Lu Yang
    Google Health, Google LLC, California, United States
  • Abigail E Huang
    Verily Life Sciences, California, United States
  • Ilana Traynis
    Advanced Clinical, Illinois, United States
  • Siva Balasubramanian
    Advanced Clinical, Illinois, United States
  • Tayyeba K. Ali
    Advanced Clinical, Illinois, United States
  • Daniel Golden
    Verily Life Sciences, California, United States
  • Footnotes
    Commercial Relationships   Lin Yang, Verily Life Sciences (F), Verily Life Sciences (I), Verily Life Sciences (E); Shawn Xu, Verily Life Sciences (F), Verily Life Sciences (I), Verily Life Sciences (E); Lu Yang, Google (F), Google (I), Google (E); Abigail Huang, Verily Life Sciences (F), Verily Life Sciences (I), Verily Life Sciences (E); Ilana Traynis, Google (C); Siva Balasubramanian, Google (C); Tayyeba Ali, Google (C); Daniel Golden, Verily Life Sciences (F), Verily Life Sciences (I), Verily Life Sciences (E)
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2020, Vol.61, 2046. doi:
Citation: Lin Yang, Shawn Xu, Lu Yang, Abigail E Huang, Ilana Traynis, Siva Balasubramanian, Tayyeba K. Ali, Daniel Golden; Deep Learning Based Lesion Segmentation in Fundus Images. Invest. Ophthalmol. Vis. Sci. 2020;61(7):2046.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose : Fundus images acquired for diabetic retinopathy (DR) screening are used to identify various lesions associated with the disease. Automated segmentation of these lesions can provide valuable quantitative information about their roles in disease progression and pave the way for clinical applications such as assisted reading or automated triaging/diagnosis for DR. In this study, we explore the potential of a state-of-the-art deep learning (DL) model for segmenting DR-related lesions in fundus images.

Methods : 13,647 images from DR patients, along with pixel-wise labels for microaneurysms (MA), hemorrhages (Heme), hard exudates (HE), cotton wool spots (CWS), neovascularization (NV), intraretinal microvascular abnormalities (IRMA), preretinal hemorrhage (PRH), vitreous hemorrhage (VH), and laser scars (LS), were collected. We split this dataset into training (80%), tuning (10%), and testing (10%) sets. We further supplemented the training set with ~300k healthy images and each of the tune and test sets with ~1,000 healthy images.
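The abstract does not release code; as a rough illustration, the 80/10/10 partition it describes could be produced as follows. Only the fractions and the image count come from the abstract — the seeded image-level shuffle (rather than, say, patient-level splitting to avoid leakage) is an assumption for the sketch.

```python
import random

def split_dataset(image_ids, train_frac=0.8, tune_frac=0.1, seed=42):
    """Shuffle image IDs reproducibly and split into train/tune/test.

    Note: a real study would likely split at the patient level so that
    images from one patient never span two partitions; this sketch
    splits at the image level for simplicity.
    """
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * train_frac)
    n_tune = int(len(ids) * tune_frac)
    return {
        "train": ids[:n_train],
        "tune": ids[n_train:n_train + n_tune],
        "test": ids[n_train + n_tune:],
    }

# 13,647 labeled images, as in the abstract
splits = split_dataset(range(13647))
```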
We used SA-Net (the state-of-the-art network for the GlaS Challenge) with standard image augmentations (e.g., rotation, flipping, and brightness/contrast/hue adjustments). Due to low inter-rater agreement on MA vs. Heme and NV vs. IRMA, we merged MA with Heme and NV with IRMA into combined classes (Fig 1).
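The augmentation and class-merging steps above can be sketched in NumPy. The specific label indices and augmentation parameters below are assumptions (the abstract does not specify an encoding); the point is only that geometric augmentations must be applied identically to image and mask, while photometric ones touch the image alone.

```python
import numpy as np

# Hypothetical integer label indices (0 = background); the abstract
# does not specify how lesion classes are encoded.
MA, HEME, HE, CWS, NV, IRMA, PRH, VH, LS = range(1, 10)

def merge_classes(mask):
    """Merge MA with Heme and NV with IRMA into combined classes."""
    merged = mask.copy()
    merged[merged == HEME] = MA   # MA/Heme -> one class
    merged[merged == IRMA] = NV   # NV/IRMA -> one class
    return merged

def augment(image, mask, rng):
    """Random flip, 90-degree rotation, and brightness scaling.

    Geometric transforms are applied to both image and mask so the
    pixel-wise labels stay aligned; brightness only affects the image.
    """
    if rng.random() < 0.5:
        image, mask = image[:, ::-1], mask[:, ::-1]   # horizontal flip
    k = int(rng.integers(4))
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    image = np.clip(image * rng.uniform(0.8, 1.2), 0, 255)
    return image, mask
```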

Results : We evaluated the segmentation accuracy based on pixel-level PPV, sensitivity and F1 score. The results are shown in Table 1.
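A minimal NumPy sketch of the pixel-level metrics named above, assuming one binary prediction/ground-truth mask per lesion class:

```python
import numpy as np

def pixel_metrics(pred, truth):
    """Pixel-level PPV (precision), sensitivity (recall), and F1
    for a pair of binary masks of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    ppv = tp / (tp + fp) if tp + fp else 0.0
    sens = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * ppv * sens / (ppv + sens) if ppv + sens else 0.0
    return ppv, sens, f1
```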

Conclusions : For lesions with a large number of training images (MA/Heme, HE) or with distinct appearances (PRH/VH, LS), the DL model achieved an F1 score of ~80%. For lesions that were less represented in our training data (NV/IRMA) or that resemble certain camera artifacts (CWS), the DL model did not perform as well. These results suggest that DL-based segmentation networks have great potential for lesion segmentation in fundus images and can be used for quantitative analysis, assisted reading, or lesion-based triage. Future studies aim to assess whether more training data could further improve the model and to explore more sophisticated data augmentation techniques.

This is a 2020 ARVO Annual Meeting abstract.

 

Figure 1. Automated lesion segmentation

 

Table 1. Segmentation accuracy for each lesion. PPV, sensitivity, and F1 score are measured on the test set.
