Volume 64, Issue 8
Open Access
ARVO Annual Meeting Abstract  |   June 2023
Explainable Deep Learning Prediction of 10-2 Visual Field from 24-2 Visual Field Using Transformer
Author Affiliations & Notes
  • Min Shi
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Yu Tian
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Yan Luo
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Mohammad Eslami
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Saber Kazeminasab Hashemabad
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Tobias Elze
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Lucy Q. Shen
    Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Louis R Pasquale
    Eye and Vision Research Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Sarah R Wellik
    Bascom Palmer Eye Institute, University of Miami School of Medicine, Miami, Florida, United States
  • C. Gustavo De Moraes
    Edward S. Harkness Eye Institute, Columbia University Medical Center, New York, New York, United States
  • Jonathan S Myers
    Wills Eye Hospital, Thomas Jefferson University, Philadelphia, Pennsylvania, United States
  • Michael V Boland
    Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Mengyu Wang
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Footnotes
    Commercial Relationships   Min Shi None; Yu Tian None; Yan Luo None; Mohammad Eslami None; Saber Kazeminasab Hashemabad None; Tobias Elze Genentech, Code F (Financial Support); Lucy Shen Firecyte Therapeutics, Code C (Consultant/Contractor), AbbVie, Code C (Consultant/Contractor); Louis Pasquale Eyenovia-Advisory Board Member, Code C (Consultant/Contractor), Twenty-Twenty, Code C (Consultant/Contractor), Skye Biosciences, Code C (Consultant/Contractor); Sarah Wellik None; C. Gustavo De Moraes Novartis, Code C (Consultant/Contractor), Thea, Code C (Consultant/Contractor), Allergan, Code C (Consultant/Contractor), Reichert, Code C (Consultant/Contractor), Carl Zeiss, Code C (Consultant/Contractor), Perfuse Therapeutics, Code C (Consultant/Contractor), Ora Clinical, Code E (Employment), Heidelberg and Topcon, Code F (Financial Support); Jonathan Myers None; Michael Boland Carl Zeiss Meditec, Code C (Consultant/Contractor), Abbvie, Code C (Consultant/Contractor), Janssen, Code C (Consultant/Contractor), Topcon, Code C (Consultant/Contractor); Mengyu Wang Genentech, Code F (Financial Support)
  • Footnotes
    Support  NIH R00 EY028631, Research To Prevent Blindness International Research Collaborators Award, Alcon Young Investigator Grant, NIH R21 EY030631, NIH R01 EY030575, NIH R01 EY015473, NIH R21 EY031725, NIH R01 EY033005, NIH P30 EY003790, NIH P30 EY014801, Research to Prevent Blindness Unrestricted Grant GR004596
Investigative Ophthalmology & Visual Science June 2023, Vol.64, 3263. doi:
Abstract

Purpose : To develop a deep learning model to predict 10-2 visual field (VF) from 24-2 VF using a transformer.

Methods : We selected 10-2 visual fields (VFs) tested within 30 days of the corresponding 24-2 VFs from the Glaucoma Research Network. All VFs were analyzed in the right-eye format. Two types of 24-2 VF features were used (Figure 1 [a]): (1) 52 total deviation (TD) values; and (2) 9 VF summary parameters: mean deviation (MD), pattern standard deviation (PSD), MD probability, foveal sensitivity, foveal sensitivity probability, laterality, test duration, time gap between the 24-2 and 10-2 VFs, and age. The TD values were fed into a multilayer perceptron (MLP) and the VF summary parameters into a transformer network, and the outputs of the two branches were integrated to predict the 68 10-2 TD values. We used three-fold cross-validation with patient-level data separation. Performance was measured by the mean absolute error (MAE) and the Pearson correlation (R) between the actual and predicted 10-2 VFs. To explain the model, the importance of individual TD values and VF summary parameters was determined by analyzing model weights.
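A minimal PyTorch sketch of the two-branch design described above follows, for illustration only; the layer widths, the per-parameter tokenization of the summary features, the pooling, and the fusion head are assumptions, since the abstract specifies only the inputs (52 TD values and 9 summary parameters), the MLP and transformer branches, and the 68-value 10-2 output.

import torch
import torch.nn as nn

class VF24to10Predictor(nn.Module):
    """Hypothetical sketch: MLP over 24-2 TD values plus a transformer over summary parameters."""
    def __init__(self, n_td=52, n_out=68, d_model=64):
        super().__init__()
        # MLP branch for the 52 24-2 total deviation (TD) values
        self.td_mlp = nn.Sequential(
            nn.Linear(n_td, 128), nn.ReLU(),
            nn.Linear(128, d_model), nn.ReLU(),
        )
        # Transformer branch: each of the 9 summary parameters becomes one token
        self.summary_embed = nn.Linear(1, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.summary_encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Fusion head mapping the concatenated branch features to the 68 10-2 TD values
        self.head = nn.Sequential(
            nn.Linear(d_model * 2, 128), nn.ReLU(),
            nn.Linear(128, n_out),
        )

    def forward(self, td_values, summary_params):
        # td_values: (batch, 52); summary_params: (batch, 9)
        td_feat = self.td_mlp(td_values)                             # (batch, d_model)
        tokens = self.summary_embed(summary_params.unsqueeze(-1))    # (batch, 9, d_model)
        summary_feat = self.summary_encoder(tokens).mean(dim=1)      # (batch, d_model)
        return self.head(torch.cat([td_feat, summary_feat], dim=1))  # (batch, 68)

model = VF24to10Predictor()
predicted_10_2 = model(torch.randn(4, 52), torch.randn(4, 9))  # shape (4, 68)

Training with an L1 or L2 regression loss against the measured 10-2 TD values, under three-fold patient-level cross-validation as described above, would complete the sketch.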

Results : We included 5,189 reliable 24-2 VF tests from 2,236 patients with an average age of 63.2 ± 15.2 years. The average MDs of the 24-2 and 10-2 VFs were -10.2 ± 8.7 dB and -4.9 ± 6.1 dB, respectively. The average MAE and R across the 68 10-2 VF test points were 3.4 ± 0.5 dB (range: 2.3 to 4.3 dB) and 0.84 ± 0.07 (range: 0.66 to 0.91), respectively (Figure 1 [b] and [c]). Prediction accuracy was lower in the inferior temporal region, which Hood and coworkers previously reported as the less vulnerable zone. As expected, 24-2 VF locations closer to fixation were more important for the 10-2 VF prediction; however, 24-2 VF locations outside the central 10 degrees also contributed moderately (Figure 1 [d]). PSD and time gap were the two most important 24-2 VF summary parameters for 10-2 VF prediction. In two patient examples of 24-2 VFs and actual versus predicted 10-2 VFs (Figure 2), there was no clear evidence of central VF loss in the 24-2 VFs, yet the predicted 10-2 VFs reproduced the superior temporal defect, even though the MAE was relatively high.
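For reference, the per-point MAE and Pearson R summarized above could be computed as in the following sketch, assuming NumPy arrays of actual and predicted 10-2 TD values with shape (n_tests, 68); the function and variable names are illustrative, not from the study code.

import numpy as np

def pointwise_mae_and_r(actual, predicted):
    # Both arrays: (n_tests, 68) 10-2 TD values in dB
    mae = np.abs(actual - predicted).mean(axis=0)  # MAE per test point, shape (68,)
    a = actual - actual.mean(axis=0)
    p = predicted - predicted.mean(axis=0)
    # Pearson correlation per test point, shape (68,)
    r = (a * p).sum(axis=0) / np.sqrt((a ** 2).sum(axis=0) * (p ** 2).sum(axis=0))
    return mae, r  # summarize across the 68 points with mae.mean() and r.mean()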

Conclusions : Our deep learning model can predict 10-2 VFs from 24-2 VFs. It may help clinicians decide when patients should be tested with 10-2 VF in addition to 24-2 VF.

This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.