Abstract
Purpose:
To develop a deep transfer learning model to detect and forecast glaucoma from retinal fundus photographs, and to validate the model on two independent datasets.
Methods:
We developed a deep transfer learning domain adaptation model that learns domain-specific and domain-invariant representations from fundus photographs (Fig. 1). We incorporated a progressive weighting approach to transfer source-domain knowledge to the target domain while mitigating negative knowledge transfer from the training fundus images, and we used low-rank coding to align the source and target distributions and maximize accuracy. We used 66,742 fundus images collected from 1,636 subjects who participated in the Ocular Hypertension Treatment Study (OHTS). We then trained the model under three scenarios, with eyes annotated as glaucoma on the basis of 1) optic disc abnormalities only (OD), 2) optic disc or visual field abnormalities (OD or VF), and 3) visual field abnormalities only (VF). We evaluated the generalizability of the model on two independent, publicly available datasets: ACRIMA and the Retinal IMage database for Optic Nerve Evaluation (RIM-ONE).
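The abstract does not specify the exact architecture or loss functions; the following is a minimal PyTorch sketch of how progressive source weighting and a low-rank (nuclear-norm) alignment term could be combined in a single training step. The module and function names (GlaucomaDAModel, low_rank_penalty, train_step), the ResNet-50 backbone, the linear weight schedule, and the penalty weight lam are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn as nn
from torchvision import models

class GlaucomaDAModel(nn.Module):
    """Shared (domain-invariant) encoder with a glaucoma classification head;
    domain-specific structure is handled here only through the loss terms."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = models.resnet50(weights="IMAGENET1K_V2")  # transfer-learning init
        backbone.fc = nn.Identity()  # expose the 2048-d pooled features
        self.encoder = backbone
        self.head = nn.Linear(2048, num_classes)

    def forward(self, x):
        feats = self.encoder(x)
        return self.head(feats), feats

def low_rank_penalty(src_feats, tgt_feats):
    """Nuclear norm of the stacked source/target features: a convex surrogate
    for rank that pushes both domains toward a shared low-rank subspace
    (one plausible realization of 'low-rank coding')."""
    stacked = torch.cat([src_feats, tgt_feats], dim=0)
    return torch.linalg.svdvals(stacked).sum()

def train_step(model, opt, src_x, src_y, tgt_x, tgt_y,
               step, total_steps, lam=1e-4):
    """One optimization step on a paired source/target batch."""
    model.train()
    src_logits, src_feats = model(src_x)
    tgt_logits, tgt_feats = model(tgt_x)
    # Progressive weighting (assumed linear schedule): source supervision is
    # phased out over training, limiting negative transfer from
    # source-only patterns.
    w_src = max(0.0, 1.0 - step / total_steps)
    loss = (w_src * nn.functional.cross_entropy(src_logits, src_y)
            + nn.functional.cross_entropy(tgt_logits, tgt_y)
            + lam * low_rank_penalty(src_feats, tgt_feats))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```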
Results:
Model performance was evaluated in terms of the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity (Fig. 2). The AUCs of the model for diagnosing glaucoma under the first, second, and third scenarios were 0.98, 0.96, and 0.93, respectively; detailed performance metrics are presented in Figure 2. On the ACRIMA and RIM-ONE datasets, the AUCs of the model for diagnosing glaucoma were 0.87 and 0.92, sensitivities were 0.85 and 0.89, and specificities were 0.65 and 0.75, respectively.
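The reported metrics are standard; for readers reproducing the evaluation, a brief scikit-learn sketch is shown below. Here y_true and y_score are illustrative stand-ins for held-out labels and predicted glaucoma probabilities, and the 0.5 decision threshold is an assumption, not a value stated in the abstract.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score

# Illustrative stand-ins for held-out labels and predicted P(glaucoma).
y_true = np.array([0, 0, 1, 1, 1, 0])
y_score = np.array([0.10, 0.40, 0.80, 0.90, 0.60, 0.30])
y_pred = (y_score >= 0.5).astype(int)  # 0.5 threshold is an assumption

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate
accuracy = accuracy_score(y_true, y_pred)
print(f"AUC={auc:.2f} Acc={accuracy:.2f} "
      f"Sens={sensitivity:.2f} Spec={specificity:.2f}")
```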
Conclusions:
The deep transfer learning model achieved state-of-the-art performance and has the potential to be used in glaucoma research and clinical practice to identify patients who may develop glaucoma in the future.
This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.