July 2019
Volume 60, Issue 9
Open Access
ARVO Annual Meeting Abstract | July 2019
Deep Learning Based Features Improve Forecasting of OCT Measurements at the Future Visit
Author Affiliations & Notes
  • Hiroshi Ishikawa
    NYU Langone Health, NYU Eye Center, New York, New York, United States
  • Suman Sedai
    IBM Research Australia, Melbourne, Victoria, Australia
  • Bhavna Antony
    IBM Research Australia, Melbourne, Victoria, Australia
  • Gadi Wollstein
    NYU Langone Health, NYU Eye Center, New York, New York, United States
  • Joel S Schuman
    NYU Langone Health, NYU Eye Center, New York, New York, United States
  • Simon Wail
    IBM Research Australia, Melbourne, Victoria, Australia
  • Footnotes
    Commercial Relationships   Hiroshi Ishikawa, None; Suman Sedai, IBM Research (E); Bhavna Antony, IBM Research (E); Gadi Wollstein, None; Joel Schuman, Zeiss (P); Simon Wail, IBM Research (E)
  • Footnotes
    Support  NIH R01-EY013178
Investigative Ophthalmology & Visual Science July 2019, Vol.60, 1466. doi:https://doi.org/
Abstract

Purpose : To compare the retinal nerve fiber layer (RNFL) thickness measurement forecasting performance between the conventional trend-based approach (TBA) and a novel deep learning based approach.

Methods : Optical coherence tomography (OCT) scans were acquired from both eyes of 651 glaucoma patients, 404 glaucoma suspects, and 34 healthy controls using a commercial OCT device (Cirrus HD-OCT, ONH 200x200 scans; Zeiss, Dublin, CA) over at least 4 visits (visit interval: 6 months). All subjects had visual field (VF; Humphrey Field Analyzer; SITA 24-2; Zeiss) tests at each visit. A forecasting model was developed utilizing clinical test results, including circumpapillary RNFL thickness (both global and sectoral parameters), cup-to-disc ratio, cup volume, rim area, VF mean deviation, and intraocular pressure. In addition, the model utilized abstract image features extracted directly from the raw OCT data using a 3D convolutional neural network (3D-CNN), an architecture capable of generating feature representations at increasing levels of abstraction via multiple layers of processing. Information from three consecutive visits was used to forecast the global mean RNFL thickness at the 4th visit. The mean absolute error (MAE) was calculated to assess forecasting performance. For comparison, a TBA using a linear regression model was also evaluated, using information from 4 consecutive visits to forecast the RNFL thickness at the 5th visit.
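For readers unfamiliar with the baseline, the trend-based approach (TBA) described above amounts to per-eye linear extrapolation of RNFL thickness over time, scored with MAE. The following is a minimal sketch, not the study's code; the function names and the example measurements are illustrative, and only NumPy is assumed:

```python
import numpy as np

def tba_forecast(visit_months, rnfl_um, target_month):
    """Forecast global RNFL thickness (um) at a future visit by fitting a
    straight line (thickness vs. time) to prior visits and extrapolating."""
    coeffs = np.polyfit(np.asarray(visit_months, dtype=float),
                        np.asarray(rnfl_um, dtype=float), deg=1)
    return float(np.polyval(coeffs, target_month))

def mae_um(forecasts, actuals):
    """Mean absolute error (um) between forecast and measured thickness."""
    return float(np.mean(np.abs(np.asarray(forecasts, dtype=float)
                                - np.asarray(actuals, dtype=float))))

# Hypothetical eye measured every 6 months over 4 visits; the linear trend
# is extrapolated to month 24 (the 5th visit).
pred = tba_forecast([0, 6, 12, 18], [100.0, 98.0, 96.0, 94.0], 24)
```

The deep learning based model replaces this hand-fit trend with learned features (clinical parameters plus 3D-CNN representations of the raw OCT volume), but the same MAE metric is used to compare the two.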

Results : The proposed deep learning based model significantly outperformed the TBA on MAE regardless of subject group (1.65 vs 2.07 µm, 1.56 vs 2.27 µm, and 1.82 vs 2.70 µm for healthy, glaucoma suspect, and glaucoma subjects, respectively; all p<0.001). There was no significant difference in the MAE of the deep learning based model among the subgroups, while the TBA MAE differed significantly among them.

Conclusions : The deep learning based model outperformed the TBA in RNFL thickness forecasting accuracy at the future visit, even while using baseline information from fewer visits than the conventional TBA. Inclusion of features extracted directly from the 3D raw OCT data may have played an important role in boosting performance.

This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.
