ARVO Annual Meeting Abstract  |   June 2023
Detecting and forecasting glaucoma through deep transfer learning
Author Affiliations & Notes
  • Yeganeh Madadi
    Ophthalmology, The University of Tennessee Health Science Center College of Medicine, Memphis, Tennessee, United States
  • Siamak Yousefi
    Ophthalmology, The University of Tennessee Health Science Center College of Medicine, Memphis, Tennessee, United States
    Genetics, Genomics and Informatics, The University of Tennessee Health Science Center College of Medicine, Memphis, Tennessee, United States
  • Footnotes
    Commercial Relationships   Yeganeh Madadi None; Siamak Yousefi Remidio: Research support (in the form of instruments), Code F (Financial Support), NIH/NEI: Research support (grants), Code F (Financial Support), RPB: Research support, Code F (Financial Support), Bright Focus Foundation, Code F (Financial Support)
  • Footnotes
    Support  NIH Grant EY031725, NIH Grant 033005, Research to Prevent Blindness (RPB)
Investigative Ophthalmology & Visual Science June 2023, Vol.64, 336. doi:
Abstract

Purpose : To develop a deep transfer learning model to detect and forecast glaucoma using retinal fundus photographs and to validate the model using two independent datasets.

Methods : We developed a deep transfer learning domain adaptation model that learns domain-specific and domain-invariant representations from fundus photographs (Fig. 1). We incorporated a progressive weighting approach to transfer source-domain knowledge to the target domain while reducing negative knowledge transfer from the training fundus images. We used low-rank coding to align the source and target distributions and maximize accuracy. We used 66,742 fundus images collected from 1,636 subjects who participated in the Ocular Hypertension Treatment Study (OHTS). We then trained the model under three scenarios in which eyes were annotated as glaucomatous based on 1) optic disc abnormalities only (OD), 2) optic disc or visual field abnormalities (OD or VF), and 3) visual field abnormalities only (VF). We evaluated the generalizability of the model on two independent, publicly available datasets: ACRIMA and the Retinal IMage database for Optic Nerve Evaluation (RIM-ONE).
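The two ideas named above can be sketched in a few lines. The linear decay schedule and the truncated-SVD projection below are illustrative assumptions, not the authors' exact formulation of progressive weighting or low-rank coding:

```python
import numpy as np

def progressive_weight(epoch, total_epochs):
    """Hypothetical linear decay of the source-domain loss weight:
    early epochs lean on source knowledge, later epochs shift toward
    the target domain, limiting negative transfer."""
    p = epoch / max(total_epochs - 1, 1)
    return 1.0 - p

def low_rank_align(features, rank):
    """Project a feature matrix onto its top-`rank` singular directions,
    a truncated-SVD stand-in for low-rank coding of source/target features."""
    U, s, Vt = np.linalg.svd(features, full_matrices=False)
    s[rank:] = 0.0
    return U @ np.diag(s) @ Vt
```

In training, `progressive_weight` would scale the source-domain loss term each epoch, and `low_rank_align` would be applied to pooled source and target feature matrices before computing an alignment loss.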

Results : Model performance was evaluated in terms of area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity (Fig. 2). The AUCs of the model for diagnosing glaucoma under the first, second, and third scenarios were 0.98, 0.96, and 0.93, respectively. Details of the performance metrics are presented in Figure 2. On the ACRIMA and RIM-ONE datasets, respectively, the AUC of the model for diagnosing glaucoma was 0.87 and 0.92, the sensitivity was 0.85 and 0.89, and the specificity was 0.65 and 0.75.
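As a minimal illustration of how such metrics are computed from a classifier's predicted scores, the sketch below derives AUC via the rank-based Mann-Whitney statistic, plus sensitivity and specificity at a binarization threshold; the 0.5 threshold is an assumption, not a value stated in the abstract:

```python
import numpy as np

def auc(labels, scores):
    """Rank-based AUC (Mann-Whitney U statistic); assumes no tied scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def sens_spec(labels, scores, threshold=0.5):
    """Sensitivity and specificity of the scores binarized at a threshold."""
    pred = scores >= threshold
    pos = labels == 1
    sensitivity = (pred & pos).sum() / pos.sum()
    specificity = (~pred & ~pos).sum() / (~pos).sum()
    return sensitivity, specificity
```

The trade-off visible in the external validation (high sensitivity, lower specificity) corresponds to the choice of operating threshold along this curve.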

Conclusions : The deep transfer learning model provided state-of-the-art performance and has the potential to be utilized in glaucoma research and clinical practice to identify patients who may develop glaucoma in the future.

This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.

 

Figure 1. Diagram of the proposed deep transfer learning model for detecting and forecasting glaucoma.

 

Figure 2. Experimental results of the proposed deep transfer learning model for detecting and forecasting glaucoma.
