ARVO Annual Meeting Abstract  |   June 2022
Pointwise visual field estimation from opposing macular optical coherence tomography B-scan pairings using a 2D convolutional neural network
Author Affiliations & Notes
  • Ruben Hemelings
    Katholieke Universiteit Leuven, Leuven, Flanders, Belgium
  • Damon Wong
    Singapore Eye Research Institute, Singapore, Singapore
    Nanyang Technological University, Singapore, Singapore
  • Jacqueline Chua
    Singapore Eye Research Institute, Singapore, Singapore
  • Jan Van Eijgen
    Universitaire Ziekenhuizen Leuven, Campus Gasthuisberg, Department of Ophthalmology, Leuven, Flanders, Belgium
    Katholieke Universiteit Leuven, Leuven, Flanders, Belgium
  • Eirini Christinaki
    Katholieke Universiteit Leuven, Leuven, Flanders, Belgium
  • Gerhard Garhofer
    Medizinische Universität Wien, Universitätsklinik für Klinische Pharmakologie, Wien, Austria
  • Leopold Schmetterer
    Singapore Eye Research Institute, Singapore, Singapore
    Nanyang Technological University, Singapore, Singapore
  • Ingeborg Stalmans
    Universitaire Ziekenhuizen Leuven, Campus Gasthuisberg, Department of Ophthalmology, Leuven, Flanders, Belgium
    Katholieke Universiteit Leuven, Leuven, Flanders, Belgium
  • Footnotes
    Commercial Relationships   Ruben Hemelings None; Damon Wong None; Jacqueline Chua None; Jan Van Eijgen None; Eirini Christinaki None; Gerhard Garhofer None; Leopold Schmetterer None; Ingeborg Stalmans None
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2022, Vol.63, 2016 – A0457.
      Ruben Hemelings, Damon Wong, Jacqueline Chua, Jan Van Eijgen, Eirini Christinaki, Gerhard Garhofer, Leopold Schmetterer, Ingeborg Stalmans; Pointwise visual field estimation from opposing macular optical coherence tomography B-scan pairings using a 2D convolutional neural network. Invest. Ophthalmol. Vis. Sci. 2022;63(7):2016 – A0457.

Abstract

Purpose : Deep learning approaches have been successfully applied to estimate visual field (VF) information from optical coherence tomography (OCT) scans. We retrospectively assessed the potential of pointwise VF estimation from opposing macular OCT B-scan pairs using a 2D convolutional neural network (CNN).

Methods : Data were retrospectively collected from two European glaucoma clinics (C1 and C2). Inclusion criteria consisted of paired SPECTRALIS® macular OCT volume scans (61 B-scans) and Humphrey Field Analyzer (HFA) 24-2 SITA Standard VF exams. C1 consisted of 1516 OCT-VF pairs from 850 eyes of 485 individuals, while C2 encompassed 35 OCT-VF pairs from 35 eyes. The OCT-VF pairs of C1 were split into 60% training, 20% validation, and 20% testing, while the data from C2 were used as an external test set. The validation and test sets were filtered on HFA reliability indices.
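
A minimal sketch of the 60/20/20 split described above, assuming a simple random split over OCT-VF pairs (the abstract does not state whether splitting was performed per pair, per eye, or per patient; the seed and variable names are illustrative):

```python
# Illustrative 60/20/20 split of the C1 OCT-VF pairs (random split assumed;
# the abstract does not specify whether splitting was per pair, eye, or patient).
import numpy as np

rng = np.random.default_rng(seed=0)
n_pairs = 1516                          # OCT-VF pairs in clinic C1
order = rng.permutation(n_pairs)

n_train = int(0.6 * n_pairs)            # 60% training
n_val = int(0.2 * n_pairs)              # 20% validation

train_idx = order[:n_train]
val_idx = order[n_train:n_train + n_val]
test_idx = order[n_train + n_val:]      # remaining ~20% internal test set
# The 35 OCT-VF pairs of C2 are held out entirely as an external test set.
```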
Macular B-scans were indexed from 0 (most inferior scan) to 60 (most superior scan), with B-scan 30 intersecting the fovea (Figure 1). Four opposing B-scan pairings were used as model input to a custom ResNet-50 CNN adapted to predict 52 continuous values. The four experiments were named after their B-scan pairing: 00-60, 10-50, 20-40, and 30-30. The individual models were ensembled through prediction averaging. Model performance was evaluated using Pearson's r and the percentage decrease in mean absolute error from baseline (MAEdecr), where the baseline was defined as the MAE obtained when predicting the mean value per VF point.
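
As an illustration only (not the authors' implementation), adapting a ResNet-50 to this regression task could look like the following PyTorch sketch, in which an opposing B-scan pair is stacked as a 2-channel input and the classification head is replaced by a 52-output regression layer; the input size, channel layout, and loss function are assumptions:

```python
# Sketch of a ResNet-50 adapted to regress 52 VF threshold values from an
# opposing macular B-scan pair fed as a 2-channel image (assumptions noted).
import torch
import torch.nn as nn
from torchvision.models import resnet50

def build_vf_regressor(n_points: int = 52) -> nn.Module:
    model = resnet50(weights=None)
    # Accept 2 input channels: the inferior and superior B-scan of a pairing
    # (e.g. 10-50); the 30-30 setup would feed the foveal scan twice.
    model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
    # Replace the 1000-way classifier with a 52-output regression head.
    model.fc = nn.Linear(model.fc.in_features, n_points)
    return model

model = build_vf_regressor()
batch = torch.randn(4, 2, 224, 224)           # 4 B-scan pairs (image size assumed)
pred = model(batch)                           # shape: (4, 52) threshold estimates
loss = nn.L1Loss()(pred, torch.randn(4, 52))  # MAE-style training loss (assumed)
```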

Results : Baseline MAEs for the test sets of C1 and C2 were 7.57 dB and 9.38 dB, respectively. For C1, the 10-50 B-scan pairing performed best [r=0.71 (0.65-0.76), MAEdecr=33%], although the differences with the 00-60 and 20-40 setups were not significant. The CNN trained on 30-30 B-scan pairs performed significantly worse [r=0.65 (0.58-0.71), MAEdecr=29%]. Prediction averaging yielded the best performance [r=0.73 (0.68-0.78), MAEdecr=35%]. Performance on the 32 OCT-VF pairs of C2 was significantly lower.
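
The MAEdecr values above follow the definition given in the Methods; a small sketch, assuming the per-point mean is supplied as a vector (the abstract does not specify whether it is computed on the training set or on each test set):

```python
# Sketch of the MAEdecr metric defined in the Methods: percentage decrease in
# MAE relative to a baseline that always predicts the mean value per VF point.
import numpy as np

def mae_decrease(y_true: np.ndarray, y_pred: np.ndarray,
                 per_point_mean: np.ndarray) -> float:
    """y_true, y_pred: (n_exams, 52) threshold values in dB;
    per_point_mean: (52,) mean threshold per VF location (the set it is
    computed on is an assumption; the abstract does not specify it)."""
    mae_model = np.mean(np.abs(y_true - y_pred))
    mae_baseline = np.mean(np.abs(y_true - per_point_mean))
    return 100.0 * (mae_baseline - mae_model) / mae_baseline

# Ensembling by prediction averaging over the four B-scan pairings:
# y_ensemble = np.mean(np.stack([y_0060, y_1050, y_2040, y_3030]), axis=0)
```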

Conclusions : Opposing macular B-scan pairs performed similarly in the task of estimating the 52 HFA 24-2 SITA Standard threshold values with a custom 2D CNN. The pairing restricted to the middle B-scan, which intersects the fovea, performed significantly worse. Future experiments should investigate the added value of including all B-scans using a 3D CNN for automated VF estimation.

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.

 
