Investigative Ophthalmology & Visual Science
August 2021
Volume 62, Issue 11
Open Access
ARVO Imaging in the Eye Conference Abstract | August 2021
LEAP: Lesion-aware prediction of diabetic macular edema grades from color fundus images using deep learning
Author Affiliations & Notes
  • Ravi Kamble
    AIRA Matrix Pvt. Ltd., Thane, India
  • Aman Shrivastava
    AIRA Matrix Pvt. Ltd., Thane, India
  • Jay Sheth
    Surya Eye Institute and Research Center, Mumbai, India
  • Taraprasad Das
    L V Prasad Eye Institute (LVPEI), Hyderabad, India
  • Footnotes
    Commercial Relationships: Ravi Kamble, None; Aman Shrivastava, None; Jay Sheth, None; Taraprasad Das, None
    Support: None
Investigative Ophthalmology & Visual Science August 2021, Vol.62, 57. doi:
Ravi Kamble, Aman Shrivastava, Jay Sheth, Taraprasad Das; LEAP: Lesion-aware prediction of diabetic macular edema grades from color fundus images using deep learning. Invest. Ophthalmol. Vis. Sci. 2021;62(11):57.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose : Diabetic macular edema (DME) is a leading cause of moderate to severe, potentially blinding vision loss in the working-age population. We propose a deep learning tool for early detection and grading of DME from color fundus images. Our tool performs LEsion-Aware Prediction (LEAP) by segmenting hard exudates (HE); DME severity is graded according to the presence of hard exudates near the fovea, following the definitions of the MESSIDOR database.
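
For illustration, the sketch below shows one way such a lesion-aware grading rule could be applied once hard exudates have been segmented. The helper name, coordinate convention, and distance threshold are assumptions for illustration and are not taken from the abstract.

```python
import numpy as np

def dme_grade_from_lesions(he_centroids, fovea_xy, foveal_radius_px):
    """Assign a DME grade from segmented hard-exudate (HE) centroids.

    he_centroids     : (N, 2) array of HE centroid coordinates in pixels
    fovea_xy         : (2,) estimated fovea centre in pixels
    foveal_radius_px : radius defining "involvement" of the fovea
                       (the threshold value is an assumption, not from the abstract)
    """
    if len(he_centroids) == 0:
        return 0  # grade 0: no hard exudates, no DME
    dists = np.linalg.norm(
        np.asarray(he_centroids, float) - np.asarray(fovea_xy, float), axis=1)
    if dists.min() <= foveal_radius_px:
        return 2  # grade 2: HE involving the fovea
    return 1      # grade 1: HE present but sparing the fovea
```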

Methods : We used 1716 images from two public datasets (IDRiD and MESSIDOR) to train, validate, and test our model. IDRiD images were acquired with a Kowa VX-10a fundus camera, and MESSIDOR images with a 3CCD camera mounted on a Topcon TRC NW6 non-mydriatic fundus camera. We used the following grading criteria: grade 0, no DME; grade 1, HE not involving the fovea; and grade 2, HE involving the fovea. The image distribution was 69.5% grade 0 (normal), 7.3% grade 1, and 23.2% grade 2. Macular patches were cropped from the original images and resized to 512 × 512 pixels. Our lesion-aware ensemble network uses two branches to capture DME grading: the first, a segmentation-guided network, learns HE lesion information from the macular region; the second, a baseline classification network built on EfficientNet-B7, provides global image-level DME classification and grading (Figure 1). We compared the model's predictions with ground-truth labels using sensitivity (SE), specificity (SP), and area under the curve (AUC) as evaluation metrics.
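
The following PyTorch sketch illustrates the kind of two-branch, lesion-aware ensemble described above: a segmentation-guided HE branch fused with an EfficientNet-B7 image-level classifier. The segmentation backbone, fusion strategy, and layer sizes are assumptions for illustration and do not reproduce the authors' implementation.

```python
# Minimal two-branch, lesion-aware ensemble sketch (assumptions noted above).
import torch
import torch.nn as nn
from torchvision import models

class LesionAwareDMENet(nn.Module):
    def __init__(self, num_grades: int = 3):
        super().__init__()
        # Branch 1: segmentation-guided network producing a hard-exudate (HE)
        # probability map from the 512x512 macular patch (a lightweight
        # stand-in for the segmentation branch described in the abstract).
        self.seg_branch = models.segmentation.deeplabv3_resnet50(
            weights=None, num_classes=1)
        # Branch 2: EfficientNet-B7 image-level classifier used as a
        # global feature extractor.
        self.cls_branch = models.efficientnet_b7(weights=None)
        in_feats = self.cls_branch.classifier[1].in_features
        self.cls_branch.classifier = nn.Identity()
        # Fusion head: combine pooled lesion evidence with global image features.
        self.head = nn.Linear(in_feats + 1, num_grades)

    def forward(self, x):
        he_map = torch.sigmoid(self.seg_branch(x)["out"])            # (B, 1, H, W)
        lesion_score = he_map.flatten(1).mean(dim=1, keepdim=True)   # pooled HE evidence
        feats = self.cls_branch(x)                                   # (B, in_feats)
        logits = self.head(torch.cat([feats, lesion_score], dim=1))  # DME grade logits
        return logits, he_map

# Example forward pass on a single 512 x 512 RGB macular patch.
model = LesionAwareDMENet()
logits, he_map = model(torch.randn(1, 3, 512, 512))
```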

Results : The proposed method achieved an AUC of 91.83% on the IDRiD and 93.89% on the MESSIDOR test data for DME classification. The sensitivity for grade 0/grade 1/grade 2 was 80%/70%/87.5% on IDRiD and 96.2%/54.31%/90% on MESSIDOR test data. The specificity for grade 0/grade 1/grade 2 was 92.45%/86.66%/82.21% on IDRiD and 77.77%/98.75%/93.42% on MESSIDOR test data, respectively. Performance on grade 1 was comparatively lower because of the limited availability of early-stage DME images.
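
As a sketch of how per-grade sensitivity and specificity of this kind could be computed (one-vs-rest per grade), assuming `y_true` and `y_pred` hold integer DME grades; the authors' exact evaluation protocol is not specified in the abstract.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def per_grade_se_sp(y_true, y_pred, grades=(0, 1, 2)):
    """One-vs-rest sensitivity and specificity per DME grade.

    Assumes every grade in `grades` appears at least once in `y_true`.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    out = {}
    for g in grades:
        t = (y_true == g).astype(int)
        p = (y_pred == g).astype(int)
        tn, fp, fn, tp = confusion_matrix(t, p, labels=[0, 1]).ravel()
        out[g] = {"sensitivity": tp / (tp + fn),
                  "specificity": tn / (tn + fp)}
    return out

# Detection AUC (any DME vs. none), assuming `dme_prob` holds the predicted
# probability of DME (grade 1 or 2) per image -- an illustrative variable name.
# auc = roc_auc_score((np.asarray(y_true) > 0).astype(int), dme_prob)
```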

Conclusions : Our novel deep learning algorithm, based on the LEAP model, effectively performs automated detection and grading of DME and captures the inter-class variability across DME grades. The proposed method can be highly impactful for DME screening programs worldwide.

This is a 2021 Imaging in the Eye Conference abstract.