Abstract
Purpose:
To develop a fully automated, learning-based system to identify referable retinal pathology across different imaging modalities acquired remotely in the primary care clinic.
Methods:
We used a dataset of 1148 optical coherence tomography (OCT) and color fundus photography (CFP) retinal images obtained from 647 diabetic patients using Topcon's Maestro care imaging unit. A convolutional neural network (CNN) with dual-modal inputs (paired OCT and CFP images) was developed to identify retinal pathology. We developed a novel alternate gradient descent algorithm to train the CNN that allowed the use of uninterpretable images (i.e., images that do not contain sufficient image biomarkers to reach a definitive diagnosis). The dataset was split 9:1 into training and testing sets for training and validating the CNN. The CFP/OCT inputs obtained from a patient's single eye were grouped as retinal pathology negative (RPN, 924 images) if neither imaging modality showed retinal pathology, or if one modality was uninterpretable and the other showed no retinal pathology. If either imaging modality exhibited referable retinal pathology, the corresponding inputs were labeled retinal pathology positive (RPP, 224 images). If both imaging modalities were uninterpretable, the inputs were labeled retinal pathology potentially present (RPPP).
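For illustration only, the eye-level labeling rule described above can be expressed as the following Python sketch; the grade values and function name are hypothetical and are not part of the study's implementation.

from enum import Enum

class Grade(Enum):
    """Hypothetical per-modality reading for a single eye."""
    NO_PATHOLOGY = 0
    REFERABLE_PATHOLOGY = 1
    UNINTERPRETABLE = 2

def label_eye(cfp: Grade, oct_scan: Grade) -> str:
    """Assign the eye-level label from the CFP and OCT grades, per the rules above."""
    # RPP: referable pathology in either modality.
    if Grade.REFERABLE_PATHOLOGY in (cfp, oct_scan):
        return "RPP"
    # RPPP: both modalities uninterpretable.
    if cfp is Grade.UNINTERPRETABLE and oct_scan is Grade.UNINTERPRETABLE:
        return "RPPP"
    # RPN: no pathology in either modality, or one uninterpretable and the other clean.
    return "RPN"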
Results:
We achieved 88.60±5.84% (95% CI) accuracy in identifying referable retinal pathology, with a false-negative rate of 12.28±6.02%, a sensitivity of 87.72±6.03%, a specificity of 89.47±6.13%, and an area under the receiver operating characteristic curve (AUC-ROC) of 92.74±4.76%.
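As a sketch of how these metrics relate to one another (assuming scikit-learn; the function and variable names are illustrative and not the authors' code), note that the false-negative rate is the complement of sensitivity:

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true: np.ndarray, y_score: np.ndarray, threshold: float = 0.5) -> dict:
    """Compute the reported metrics from binary labels (1 = RPP) and model scores."""
    y_pred = (y_score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),            # recall on RPP eyes
        "false_negative_rate": fn / (tp + fn),    # equals 1 - sensitivity
        "specificity": tn / (tn + fp),
        "auc_roc": roc_auc_score(y_true, y_score),
    }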
Conclusions:
Our newly developed learning-based approach can be used in clinical practice to facilitate the identification of referable retinal pathology.
This is a 2021 ARVO Annual Meeting abstract.