June 2021
Volume 62, Issue 8
Open Access
ARVO Annual Meeting Abstract
A deep learning-based automated tool for the identification of referable retinal pathology from multi-modal imaging sources
Author Affiliations & Notes
  • Qitong Gao
    Electrical and Computer Engineering, Duke University, Durham, North Carolina, United States
  • Joshua Amason
    Ophthalmology, Duke University, Durham, North Carolina, United States
  • Scott Cousins
    Ophthalmology, Duke University, Durham, North Carolina, United States
  • Miroslav Pajic
    Electrical and Computer Engineering, Duke University, Durham, North Carolina, United States
  • Majda Hadziahmetovic
    Ophthalmology, Duke University, Durham, North Carolina, United States
  • Footnotes
    Commercial Relationships   Qitong Gao, None; Joshua Amason, None; Scott Cousins, Clearside Biomedical (C), Merck Pharmaceuticals (C), NotalVision (I), PanOptica (C), Stealth (C); Miroslav Pajic, None; Majda Hadziahmetovic, None
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2021, Vol.62, 2128.
Abstract

Purpose : To develop a fully automated, learning-based system to identify referable retinal pathology across imaging modalities obtained remotely at the primary care clinic.

Methods : We used a dataset of 1148 optical coherence tomography (OCT) and color fundus photography (CFP) retinal images obtained from 647 diabetic patients using Topcon's Maestro care imaging unit. A convolutional neural network (CNN) with dual-modal inputs (OCT and CFP images) was developed to identify retinal pathology. We developed a novel alternate gradient descent algorithm to train the CNN that allows the use of uninterpretable images (i.e., images that do not contain sufficient image biomarkers to support a definitive diagnosis). The dataset was split 9:1 into training and testing sets for training and validating the CNN. The CFP/OCT inputs obtained from a patient's single eye were grouped as retinal pathology negative (RPN, 924 images) if neither imaging modality showed retinal pathology, or if one modality was uninterpretable and the other showed no retinal pathology. If either imaging modality exhibited referable retinal pathology, the corresponding inputs were deemed retinal pathology positive (RPP, 224 images). If both imaging modalities were uninterpretable, the inputs were labeled retinal pathology potentially present (RPPP).
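The grouping rule described above can be summarized as a small decision function. A minimal sketch, assuming per-modality grades named "normal", "pathology", and "uninterpretable" (these grade names are hypothetical, not from the abstract):

```python
def merge_labels(oct_grade, cfp_grade):
    """Merge two per-modality grades into the RPN/RPP/RPPP training label."""
    grades = {oct_grade, cfp_grade}
    assert grades <= {"normal", "pathology", "uninterpretable"}
    if "pathology" in grades:
        return "RPP"   # referable pathology in either modality
    if grades == {"uninterpretable"}:
        return "RPPP"  # both modalities uninterpretable
    return "RPN"       # both normal, or one normal and one uninterpretable
```

For example, an eye with an uninterpretable OCT but a normal CFP is still grouped as RPN under this rule.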

Results : We achieved 88.60±5.84% (95% CI) accuracy in identifying referable retinal pathology, with a false-negative rate of 12.28±6.02%, sensitivity of 87.72±6.03%, specificity of 89.47±6.13%, and an area under the receiver operating characteristic curve (AUC-ROC) of 92.74±4.76%.
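The reported rates are standard confusion-matrix statistics. A minimal sketch of how they relate, using illustrative counts only (the abstract does not report the underlying confusion matrix; these counts are chosen merely to land near the reported percentages):

```python
def screening_metrics(tp, fn, fp, tn):
    """Derive the reported summary statistics from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
        "sensitivity": tp / (tp + fn),          # true-positive rate
        "specificity": tn / (tn + fp),          # true-negative rate
        "false_negative_rate": fn / (fn + tp),  # equals 1 - sensitivity
    }

# Hypothetical counts: 50/57 positives caught, 51/57 negatives cleared.
m = screening_metrics(tp=50, fn=7, fp=6, tn=51)
```

Note that the false-negative rate is the complement of sensitivity, which is why the two reported values sum to 100%.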

Conclusions : Our newly developed learning-based approach can be successfully used in clinical practice to facilitate the identification of referable retinal pathology.

This is a 2021 ARVO Annual Meeting abstract.

 

The OCT and CFP images obtained from the automated screening system were first labeled by experts for each modality separately, and the individual diagnoses were merged to generate training labels. The two types of images were augmented and pre-processed to form the CNN inputs, then used, along with the merged labels, for CNN training.
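The abstract does not specify the augmentation and pre-processing steps. A minimal sketch under assumed choices (random horizontal flips and per-image intensity normalization, both common for retinal imaging but not confirmed here):

```python
import numpy as np

rng = np.random.default_rng(0)

def preprocess(img):
    """Scale an image to zero mean and unit variance (per-image normalization)."""
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + 1e-8)

def augment(img):
    """Randomly flip left-right; a stand-in for the unspecified augmentation step."""
    if rng.random() < 0.5:
        img = img[:, ::-1]
    return img

# Example: a hypothetical 8x8 grayscale patch standing in for an OCT or CFP image.
patch = rng.uniform(0, 255, size=(8, 8))
x = preprocess(augment(patch))
```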

 

The OCT and CFP modalities are first processed with two sets of convolutional filters respectively; the resulting features are then concatenated and processed by a fully connected layer for classification.

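The described fusion architecture can be sketched at toy scale: one convolutional branch per modality, feature concatenation, then a fully connected classification head. A minimal numpy forward pass, with all shapes, filter counts, and weights chosen purely for illustration (the actual network's architecture details are not given in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_relu(img, kernels):
    """Naive valid 2D convolution with ReLU: (H, W) image, (K, kh, kw) kernels."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return np.maximum(out, 0.0)

def branch(img, kernels):
    """One modality branch: convolve, then global-average-pool to a feature vector."""
    return conv2d_relu(img, kernels).mean(axis=(1, 2))

def dual_modal_forward(oct_img, cfp_img, oct_kernels, cfp_kernels, w, b):
    """Concatenate per-modality features, then a fully connected sigmoid head."""
    feats = np.concatenate([branch(oct_img, oct_kernels),
                            branch(cfp_img, cfp_kernels)])
    return 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # score for referable pathology

# Tiny illustrative inputs (real inputs would be full-resolution scans).
oct_img = rng.standard_normal((16, 16))
cfp_img = rng.standard_normal((16, 16))
p = dual_modal_forward(oct_img, cfp_img,
                       rng.standard_normal((4, 3, 3)),
                       rng.standard_normal((4, 3, 3)),
                       rng.standard_normal(8), 0.0)
```

The design choice sketched here is late fusion: each modality is encoded independently, so one uninterpretable modality degrades only its own feature vector rather than the whole input.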
