Open Access
ARVO Annual Meeting Abstract  |   June 2017
A deep learning approach for automatic identification of referral-warranted diabetic retinopathy
Author Affiliations & Notes
  • Theodore Leng
    Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California, United States
  • Rishab Gargeya
    Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California, United States
  • Footnotes
    Commercial Relationships   Theodore Leng, None; Rishab Gargeya, None
    Support   None
Investigative Ophthalmology & Visual Science June 2017, Vol.58, 825. doi:
Citation: Theodore Leng, Rishab Gargeya; A deep learning approach for automatic identification of referral-warranted diabetic retinopathy. Invest. Ophthalmol. Vis. Sci. 2017;58(8):825.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose : Diabetic retinopathy (DR) is one of the leading causes of preventable blindness globally. Screening every diabetic patient with a retinal exam remains an unmet need, and many cases of DR go undiagnosed and untreated.

Methods : A fully automated deep learning algorithm was designed for streamlined DR detection. The algorithm processed color fundus images and classified them as healthy (no retinopathy) or as having DR. A total of 75,137 publicly available fundus images from diabetic patients were used to train and test the model. A panel of retinal specialists determined the ground truth for our dataset prior to experimentation. We also validated our model on the public MESSIDOR and E-Ophtha MA image databases. Information learned by the algorithmic pipeline was visualized as an abnormality heat-map, highlighting sub-regions within the input fundus image for further clinical review.
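The abnormality heat-map described above is consistent with a class-activation-mapping style visualization, in which the final convolutional feature maps are weighted by the output unit's connections to the global-average-pooled features (the figure mentions 1024 such deep features). A minimal numpy sketch under that assumption; all shapes and names here are illustrative, not the authors' code:

```python
import numpy as np

def class_activation_heatmap(feature_maps, class_weights):
    """Combine convolutional feature maps into one abnormality map.

    feature_maps : (H, W, K) activations from the final conv block
    class_weights: (K,) weights linking the global-average-pooled
                   features to the 'DR' output unit
    Returns an (H, W) map, min-max scaled to [0, 1], which can be
    upsampled and overlaid on the input fundus image.
    """
    # Weighted sum over the K feature channels -> (H, W) map.
    heatmap = np.tensordot(feature_maps, class_weights, axes=([2], [0]))
    # Min-max scale so the map is directly displayable as an overlay.
    heatmap -= heatmap.min()
    if heatmap.max() > 0:
        heatmap /= heatmap.max()
    return heatmap

# Toy example: a 7x7 spatial grid with 1024 deep features (as in the figure).
rng = np.random.default_rng(0)
maps = rng.random((7, 7, 1024))
weights = rng.standard_normal(1024)
heat = class_activation_heatmap(maps, weights)
print(heat.shape)  # (7, 7)
```

In practice the low-resolution map is bilinearly upsampled to the fundus image size before review.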

Results : Our model achieved an area under the curve (AUC) of 0.97, with 94% sensitivity and 98% specificity, on 5-fold cross-validation using our local dataset. On external validation against the independent MESSIDOR and E-Ophtha MA databases, the model achieved AUCs of 0.99 and 0.96, respectively. The complete algorithmic pipeline processed fundus images on commonly available computer processors with a mean runtime of 6 to 8 seconds per image, requiring no specialized hardware.
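The reported operating point (94% sensitivity, 98% specificity on an AUC-0.97 curve) is one threshold on the ROC curve. As an illustration of how such metrics are computed, a self-contained numpy sketch on toy data (not the study's data); the rank-based AUC formula assumes no tied scores:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic (rank formulation)."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def sens_spec(labels, scores, threshold):
    """Sensitivity and specificity at a given decision threshold."""
    pred = scores >= threshold
    pos = labels == 1
    sensitivity = (pred & pos).sum() / pos.sum()
    specificity = (~pred & ~pos).sum() / (~pos).sum()
    return sensitivity, specificity

# Toy example: 2 healthy (0) and 2 DR (1) images with model scores.
labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
auc = roc_auc(labels, scores)
sens, spec = sens_spec(labels, scores, threshold=0.3)
print(auc, sens, spec)  # 0.75 1.0 0.5
```

In a 5-fold cross-validation as described above, these metrics would be computed on each held-out fold and then averaged.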

Conclusions : We showed that a fully data-driven, artificial intelligence-based grading algorithm can screen fundus photos taken from diabetic patients and identify, with high reliability, which cases should be referred to an ophthalmologist for further evaluation and treatment. Implementing such an algorithm on a global basis could drastically reduce the rate of vision loss attributable to DR.

This is an abstract that was submitted for the 2017 ARVO Annual Meeting, held in Baltimore, MD, May 7-11, 2017.

 

(A) The integration of our algorithm in a real diagnostic workflow. (B) An abstraction of the Deep Neural Network. We extracted features from the Global Average Pool Layer for a total of 1024 deep features. (C) The mean ROC curve derived from five-fold cross-validation. The dotted line represents the tradeoff due to random chance. The blue curve represents the model’s tradeoff, with the blue dot marking the threshold point yielding a sensitivity and specificity of 94% and 98%, respectively.
