June 2020 | Volume 61, Issue 7 | Open Access
ARVO Annual Meeting Abstract
Predicting Macular Progression Map Using Deep Learning
Author Affiliations & Notes
  • Zhiqi Chen
    Department of Ophthalmology, NYU Langone Health, New York, New York, United States
    Department of Electrical and Computer Engineering, New York University, New York, New York, United States
  • Yao Wang
    Department of Electrical and Computer Engineering, New York University, New York, New York, United States
  • María de los Angeles Ramos-Cadena
    Department of Ophthalmology, NYU Langone Health, New York, New York, United States
  • Gadi Wollstein
    Department of Ophthalmology, NYU Langone Health, New York, New York, United States
  • Joel S Schuman
    Department of Ophthalmology, NYU Langone Health, New York, New York, United States
  • Hiroshi Ishikawa
    Department of Ophthalmology, NYU Langone Health, New York, New York, United States
  • Footnotes
    Commercial Relationships: Zhiqi Chen, None; Yao Wang, None; María de los Angeles Ramos-Cadena, None; Gadi Wollstein, None; Joel Schuman, ZEISS (P); Hiroshi Ishikawa, None
    Support: NIH R01-EY013178 and an unrestricted grant from Research to Prevent Blindness
Investigative Ophthalmology & Visual Science June 2020, Vol. 61, 4532.

Zhiqi Chen, Yao Wang, María de los Angeles Ramos-Cadena, Gadi Wollstein, Joel S Schuman, Hiroshi Ishikawa; Predicting Macular Progression Map Using Deep Learning. Invest. Ophthalmol. Vis. Sci. 2020;61(7):4532.

Abstract

Purpose: Optical coherence tomography (OCT) two-dimensional (2D) ganglion cell inner plexiform layer (GCIPL) thickness maps often reveal subtle abnormalities that can be washed out in summary parameters (global or sectoral measurements). In addition, the spatial pattern of GCIPL provides useful information for understanding the extent and magnitude of localized damage. The purpose of this study was to predict the next-visit 2D GCIPL thickness map from the current and past GCIPL thickness maps.

Methods: 346 glaucomatous eyes (191 subjects) with at least 5 OCT visits were included in the study. GCIPL thickness maps were obtained using a clinical OCT (Cirrus HD-OCT, Zeiss, Dublin, CA; software version 9.5.1.13585; 200x200 macular cube scan). Because 83.2% of subjects were stable (average GCIPL change < 2 μm per year), we simulated progressing cases with a diffuse damage pattern and a hemifield damage pattern (superior vs. inferior hemifield damage, 50:50) (Figure 1 (c) and (d)). A deep-learning-based method, time-aware convolutional long short-term memory (TC-LSTM), was developed to handle the irregular time intervals of longitudinal GCIPL thickness maps and predict the 5th GCIPL thickness map from the previous 4 tests. The TC-LSTM model was compared with conventional linear regression (LR) analysis. Mean squared error (MSE, normalized to pixel intensity) and peak signal-to-noise ratio (PSNR) between predicted and ground truth maps were used to quantify prediction quality (lower MSE and higher PSNR indicate better results). The Wilcoxon signed-rank test was used to compare the TC-LSTM and LR results.
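The abstract does not give implementation details for TC-LSTM. The sketch below illustrates only the time-aware idea: before the usual LSTM gate update, the carried cell memory is discounted by a monotone function of the elapsed time between visits, so observations separated by long gaps contribute less. The decay form, gate layout, and all names are assumptions, and a fully connected cell on a flattened map stands in for the convolutional cell:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def time_decay(c_prev, dt, w=1.0):
    # Discount carried memory by elapsed time dt (e.g., months between visits):
    # g(dt) equals 1 at dt = 0 and decreases monotonically for longer gaps.
    g = 1.0 / np.log(np.e + w * dt)
    return c_prev * g

def tlstm_step(x, h_prev, c_prev, dt, p):
    # One time-aware LSTM step: decay the old memory, then apply standard gates.
    c_prev = time_decay(c_prev, dt)
    z = np.concatenate([x, h_prev])
    i = sigmoid(p["Wi"] @ z + p["bi"])   # input gate
    f = sigmoid(p["Wf"] @ z + p["bf"])   # forget gate
    o = sigmoid(p["Wo"] @ z + p["bo"])   # output gate
    g = np.tanh(p["Wg"] @ z + p["bg"])   # candidate memory
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

# Roll over 4 visits with irregular intervals to produce a 5th-visit state.
rng = np.random.default_rng(0)
nx, nh = 8, 8                            # toy sizes; real inputs are 2D maps
p = {k: rng.normal(scale=0.1, size=(nh, nx + nh)) for k in ("Wi", "Wf", "Wo", "Wg")}
p.update({k: np.zeros(nh) for k in ("bi", "bf", "bo", "bg")})
h, c = np.zeros(nh), np.zeros(nh)
for dt, x in zip([0.0, 3.0, 9.0, 4.0], rng.normal(size=(4, nx))):
    h, c = tlstm_step(x, h, c, dt, p)
```

A decoder head (omitted here) would map the final hidden state back to a predicted thickness map.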

Results: TC-LSTM achieved lower MSE and higher PSNR than the LR model (MSE 0.00049 vs. 0.00061, p<0.001, and PSNR 34.45 vs. 32.52 dB, p=0.035). Subjective evaluation by 3 expert ophthalmologists showed that the TC-LSTM model produced closer representations of the ground truth maps than the LR model (Table 1, Figure 1).
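For reference, the two metrics reported above can be computed from intensity-normalized maps as follows; the function name and the `max_val` convention (peak intensity of the normalized map) are illustrative assumptions:

```python
import numpy as np

def mse_psnr(pred, truth, max_val=1.0):
    # MSE between normalized maps, and PSNR in dB relative to max_val.
    mse = float(np.mean((pred - truth) ** 2))
    psnr = 10.0 * np.log10(max_val ** 2 / mse)
    return mse, psnr

# Toy check: a uniform 0.1 intensity error gives MSE 0.01 and PSNR 20 dB.
mse, psnr = mse_psnr(np.full((4, 4), 0.5), np.full((4, 4), 0.6))
```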

Conclusions: Next-visit GCIPL thickness maps were successfully generated using TC-LSTM, with higher accuracy than the LR model both quantitatively and subjectively.

This is a 2020 ARVO Annual Meeting abstract.
