Investigative Ophthalmology & Visual Science
June 2020, Volume 61, Issue 7
ARVO Annual Meeting Abstract | June 2020
Automated Assessment of Stage in Retinopathy of Prematurity using Deep Learning
Author Affiliations & Notes
  • Jimmy Chen
    Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon, United States
  • J. Peter Campbell
    Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon, United States
  • Susan Ostmo
    Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon, United States
  • Michael F. Chiang
    Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon, United States
    Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, United States
  • Footnotes
    Commercial Relationships   Jimmy Chen, Research to Prevent Blindness (F); J. Peter Campbell, Research to Prevent Blindness (F); Susan Ostmo, None; Michael Chiang, Inteleretina (I), National Institutes of Health (F), National Science Foundation (F), Novartis (C), Research to Prevent Blindness (F)
  • Footnotes
    Support  Michael F. Chiang is a Consultant for Novartis (Basel, Switzerland) and an equity owner in Inteleretina (Honolulu, HI), and is supported by grant K12EY027720 from the National Institutes of Health (Bethesda, MD), by grants SCH-1622679, SCH-1622542, and SCH-1622536 from the National Science Foundation (Arlington, VA), and by unrestricted departmental funding from Research to Prevent Blindness (New York, NY). Jimmy Chen is supported by a Research to Prevent Blindness Medical Student Fellowship.
Investigative Ophthalmology & Visual Science June 2020, Vol.61, 2775.

Jimmy Chen, J. Peter Campbell, Susan Ostmo, Michael F. Chiang; Automated Assessment of Stage in Retinopathy of Prematurity using Deep Learning. Invest. Ophthalmol. Vis. Sci. 2020;61(7):2775.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : Retinopathy of Prematurity (ROP) diagnosis is made by subjective assessment of the extent of retinal vascularization (zone), the degree of abnormality at the avascular border (stage), and the degree of vascular severity (plus disease). Previous work using deep learning has led to automated diagnosis of plus disease. The purpose of this study was to implement a deep convolutional neural network (CNN) for automated assessment of the presence of stage in retinal fundus images.

Methods : Our training dataset included 4441 images from 9 academic institutions participating in the Imaging and Informatics in ROP (i-ROP) cohort study. Image sets were collected from preterm infants undergoing routine screening and were graded by 3 independent experts for the presence of stage in each patient (defined as stage > 0 based on all fields of view). Nasal, posterior, and temporal images for patients with visible stage were manually reviewed to identify the specific images in the image set with disease. Image preprocessing consisted of contrast enhancement, to improve the visibility of stage and vasculature, and a Wiener filter for denoising. A binary CNN classifier using ResNet-152 was trained using 5-fold cross validation and subsequently evaluated on a held-out test set of 1111 images. Each validation and test set was adjusted to remove patient overlap. Diagnostic accuracy was evaluated using the area under the receiver operating characteristic curve (AUC-ROC), sensitivity, and specificity.
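The abstract does not give implementation details, but a minimal sketch of the preprocessing and model setup might look like the following, assuming PyTorch/torchvision for the ResNet-152 classifier, OpenCV for contrast enhancement (CLAHE is used here as one common choice; the abstract does not name the method), and SciPy for the Wiener filter. The hyperparameters shown (clip limit, filter window, input size, learning rate) are illustrative assumptions, not values from the study.

```python
# Hypothetical sketch: contrast enhancement + Wiener denoising, followed by a
# binary ResNet-152 classifier (stage vs. no stage). Hyperparameters are illustrative.
import cv2
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import wiener
from torchvision import models, transforms

def preprocess(rgb_image: np.ndarray) -> np.ndarray:
    """Contrast enhancement (CLAHE on the luminance channel) plus Wiener denoising."""
    lab = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2RGB)
    # Wiener filter applied per channel for denoising.
    denoised = np.stack(
        [wiener(enhanced[..., c].astype(np.float64), mysize=5) for c in range(3)],
        axis=-1,
    )
    return np.clip(denoised, 0, 255).astype(np.uint8)

# ResNet-152 backbone (ImageNet weights, torchvision >= 0.13) with a single-logit head.
model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)
criterion = nn.BCEWithLogitsLoss()          # binary target: 1 = visible stage
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

to_tensor = transforms.Compose([
    transforms.ToTensor(),                  # HWC uint8 RGB -> CHW float in [0, 1]
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```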

Results : Of the 4441 images in the training set, 1402 were labeled by the 3 experts as having visible stage and 3038 were labeled as normal. For 5-fold cross validation, the mean AUC-ROC was 0.98 (SD 0.01). On the test set, the AUC-ROC was 0.98, and the algorithm achieved a sensitivity of 91.1% and a specificity of 97.6%.
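For reference, a sketch of how the reported metrics could be computed from per-image predictions is shown below; it assumes scikit-learn and a 0.5 decision threshold, neither of which is specified in the abstract, and the array names are illustrative.

```python
# Illustrative evaluation: y_true holds binary labels (1 = visible stage),
# y_score holds the model's predicted probabilities for each test image.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true: np.ndarray, y_score: np.ndarray, threshold: float = 0.5):
    auc = roc_auc_score(y_true, y_score)              # AUC-ROC
    y_pred = (y_score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)                      # true positive rate
    specificity = tn / (tn + fp)                      # true negative rate
    return auc, sensitivity, specificity
```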

Conclusions : This automated CNN shows strong predictive performance for detecting the presence of stage in ROP, suggesting the potential for automated ROP detection and disease monitoring in telemedicine programs.

This is a 2020 ARVO Annual Meeting abstract.

 

Each training and validation set for a cross-validation split constituted an approximately 80:20 split of the 4441 images. The final test split was held out from our overall dataset.
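One way splits like these could be built while removing patient overlap, as described in the Methods, is with group-aware splitters. The sketch below assumes scikit-learn; the patient_ids grouping variable and the 20% hold-out fraction are illustrative assumptions rather than details from the study.

```python
# Hypothetical patient-level splitting: the same infant never appears in both
# the training and validation/test portions of any split.
import numpy as np
from sklearn.model_selection import GroupKFold, GroupShuffleSplit

def make_splits(image_paths, labels, patient_ids, seed: int = 0):
    image_paths = np.asarray(image_paths)
    labels = np.asarray(labels)
    patient_ids = np.asarray(patient_ids)

    # Hold out roughly 20% of patients as the final test set.
    holdout = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    dev_idx, test_idx = next(holdout.split(image_paths, labels, groups=patient_ids))

    # 5-fold cross-validation on the remaining images, grouped by patient.
    cv = GroupKFold(n_splits=5)
    folds = list(cv.split(image_paths[dev_idx], labels[dev_idx],
                          groups=patient_ids[dev_idx]))
    return dev_idx, test_idx, folds
```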


Predicted and actual values for each image evaluated in the final test split. Overall, the model performed with an AUC-ROC of 0.98, a sensitivity of 91.1%, and a specificity of 97.6%.

